Mar 6 01:44:00.109696 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Mar 5 23:31:42 -00 2026
Mar 6 01:44:00.109717 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a6bcd99e714cc2f1b95dc0d61d9d762252de26a434f12074c16f59200c97ba9c
Mar 6 01:44:00.109729 kernel: BIOS-provided physical RAM map:
Mar 6 01:44:00.109736 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 6 01:44:00.109741 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 6 01:44:00.109747 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 6 01:44:00.109753 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Mar 6 01:44:00.109759 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Mar 6 01:44:00.109765 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 6 01:44:00.109774 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 6 01:44:00.109780 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 6 01:44:00.109785 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 6 01:44:00.109813 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 6 01:44:00.109819 kernel: NX (Execute Disable) protection: active
Mar 6 01:44:00.109826 kernel: APIC: Static calls initialized
Mar 6 01:44:00.109874 kernel: SMBIOS 2.8 present.
Mar 6 01:44:00.109881 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Mar 6 01:44:00.109887 kernel: Hypervisor detected: KVM
Mar 6 01:44:00.109893 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 6 01:44:00.109899 kernel: kvm-clock: using sched offset of 9760404676 cycles
Mar 6 01:44:00.109949 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 6 01:44:00.109958 kernel: tsc: Detected 2445.424 MHz processor
Mar 6 01:44:00.109964 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 6 01:44:00.109971 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 6 01:44:00.109982 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Mar 6 01:44:00.109988 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 6 01:44:00.109994 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 6 01:44:00.110001 kernel: Using GB pages for direct mapping
Mar 6 01:44:00.110007 kernel: ACPI: Early table checksum verification disabled
Mar 6 01:44:00.110013 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Mar 6 01:44:00.110019 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 6 01:44:00.110026 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 6 01:44:00.110032 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 6 01:44:00.110041 kernel: ACPI: FACS 0x000000009CFE0000 000040
Mar 6 01:44:00.110048 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 6 01:44:00.110054 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 6 01:44:00.110060 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 6 01:44:00.110066 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 6 01:44:00.110073 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Mar 6 01:44:00.110079 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Mar 6 01:44:00.110106 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Mar 6 01:44:00.110116 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Mar 6 01:44:00.110123 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Mar 6 01:44:00.110129 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Mar 6 01:44:00.110136 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Mar 6 01:44:00.110142 kernel: No NUMA configuration found
Mar 6 01:44:00.110149 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Mar 6 01:44:00.110159 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Mar 6 01:44:00.110165 kernel: Zone ranges:
Mar 6 01:44:00.110172 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 6 01:44:00.110178 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Mar 6 01:44:00.110185 kernel: Normal empty
Mar 6 01:44:00.110191 kernel: Movable zone start for each node
Mar 6 01:44:00.110198 kernel: Early memory node ranges
Mar 6 01:44:00.110204 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 6 01:44:00.110211 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Mar 6 01:44:00.110217 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Mar 6 01:44:00.110227 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 6 01:44:00.110249 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 6 01:44:00.110256 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Mar 6 01:44:00.110262 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 6 01:44:00.110269 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 6 01:44:00.110275 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 6 01:44:00.110282 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 6 01:44:00.110288 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 6 01:44:00.110295 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 6 01:44:00.110305 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 6 01:44:00.110312 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 6 01:44:00.110319 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 6 01:44:00.110325 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 6 01:44:00.110332 kernel: TSC deadline timer available
Mar 6 01:44:00.110338 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Mar 6 01:44:00.110345 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 6 01:44:00.110351 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 6 01:44:00.110370 kernel: kvm-guest: setup PV sched yield
Mar 6 01:44:00.110380 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 6 01:44:00.110387 kernel: Booting paravirtualized kernel on KVM
Mar 6 01:44:00.110394 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 6 01:44:00.110400 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 6 01:44:00.110407 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Mar 6 01:44:00.110414 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Mar 6 01:44:00.110420 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 6 01:44:00.110426 kernel: kvm-guest: PV spinlocks enabled
Mar 6 01:44:00.110550 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 6 01:44:00.110564 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a6bcd99e714cc2f1b95dc0d61d9d762252de26a434f12074c16f59200c97ba9c
Mar 6 01:44:00.110571 kernel: random: crng init done
Mar 6 01:44:00.110578 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 6 01:44:00.110584 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 6 01:44:00.110591 kernel: Fallback order for Node 0: 0
Mar 6 01:44:00.110597 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 632732
Mar 6 01:44:00.110604 kernel: Policy zone: DMA32
Mar 6 01:44:00.110610 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 6 01:44:00.110621 kernel: Memory: 2434608K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 136884K reserved, 0K cma-reserved)
Mar 6 01:44:00.110627 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 6 01:44:00.110634 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 6 01:44:00.110640 kernel: ftrace: allocated 149 pages with 4 groups
Mar 6 01:44:00.110647 kernel: Dynamic Preempt: voluntary
Mar 6 01:44:00.110653 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 6 01:44:00.110660 kernel: rcu: RCU event tracing is enabled.
Mar 6 01:44:00.110667 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 6 01:44:00.110674 kernel: Trampoline variant of Tasks RCU enabled.
Mar 6 01:44:00.110684 kernel: Rude variant of Tasks RCU enabled.
Mar 6 01:44:00.110690 kernel: Tracing variant of Tasks RCU enabled.
Mar 6 01:44:00.110697 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 6 01:44:00.110703 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 6 01:44:00.110725 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 6 01:44:00.110732 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 6 01:44:00.110739 kernel: Console: colour VGA+ 80x25
Mar 6 01:44:00.110745 kernel: printk: console [ttyS0] enabled
Mar 6 01:44:00.110752 kernel: ACPI: Core revision 20230628
Mar 6 01:44:00.110758 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 6 01:44:00.110769 kernel: APIC: Switch to symmetric I/O mode setup
Mar 6 01:44:00.110775 kernel: x2apic enabled
Mar 6 01:44:00.110782 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 6 01:44:00.110788 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 6 01:44:00.110795 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 6 01:44:00.110801 kernel: kvm-guest: setup PV IPIs
Mar 6 01:44:00.110808 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 6 01:44:00.110828 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 6 01:44:00.110857 kernel: Calibrating delay loop (skipped) preset value.. 4890.84 BogoMIPS (lpj=2445424)
Mar 6 01:44:00.110865 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 6 01:44:00.110871 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 6 01:44:00.110882 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 6 01:44:00.110889 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 6 01:44:00.110896 kernel: Spectre V2 : Mitigation: Retpolines
Mar 6 01:44:00.110903 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 6 01:44:00.110910 kernel: Speculative Store Bypass: Vulnerable
Mar 6 01:44:00.110920 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 6 01:44:00.110940 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 6 01:44:00.110948 kernel: active return thunk: srso_alias_return_thunk
Mar 6 01:44:00.110954 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 6 01:44:00.110961 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 6 01:44:00.110968 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 6 01:44:00.110975 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 6 01:44:00.110982 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 6 01:44:00.110992 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 6 01:44:00.110999 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 6 01:44:00.111006 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 6 01:44:00.111013 kernel: Freeing SMP alternatives memory: 32K
Mar 6 01:44:00.111020 kernel: pid_max: default: 32768 minimum: 301
Mar 6 01:44:00.111027 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 6 01:44:00.111033 kernel: landlock: Up and running.
Mar 6 01:44:00.111040 kernel: SELinux: Initializing.
Mar 6 01:44:00.111047 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 6 01:44:00.111062 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 6 01:44:00.111075 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 6 01:44:00.111087 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 6 01:44:00.111098 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 6 01:44:00.111110 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 6 01:44:00.111122 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Mar 6 01:44:00.111134 kernel: signal: max sigframe size: 1776
Mar 6 01:44:00.111174 kernel: rcu: Hierarchical SRCU implementation.
Mar 6 01:44:00.111188 kernel: rcu: Max phase no-delay instances is 400.
Mar 6 01:44:00.111209 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 6 01:44:00.111220 kernel: smp: Bringing up secondary CPUs ...
Mar 6 01:44:00.111233 kernel: smpboot: x86: Booting SMP configuration:
Mar 6 01:44:00.111244 kernel: .... node #0, CPUs: #1 #2 #3
Mar 6 01:44:00.111251 kernel: smp: Brought up 1 node, 4 CPUs
Mar 6 01:44:00.111257 kernel: smpboot: Max logical packages: 1
Mar 6 01:44:00.111264 kernel: smpboot: Total of 4 processors activated (19563.39 BogoMIPS)
Mar 6 01:44:00.111271 kernel: devtmpfs: initialized
Mar 6 01:44:00.111278 kernel: x86/mm: Memory block size: 128MB
Mar 6 01:44:00.111289 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 6 01:44:00.111296 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 6 01:44:00.111303 kernel: pinctrl core: initialized pinctrl subsystem
Mar 6 01:44:00.111310 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 6 01:44:00.111317 kernel: audit: initializing netlink subsys (disabled)
Mar 6 01:44:00.111324 kernel: audit: type=2000 audit(1772761437.431:1): state=initialized audit_enabled=0 res=1
Mar 6 01:44:00.111330 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 6 01:44:00.111337 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 6 01:44:00.111344 kernel: cpuidle: using governor menu
Mar 6 01:44:00.111354 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 6 01:44:00.111361 kernel: dca service started, version 1.12.1
Mar 6 01:44:00.111368 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 6 01:44:00.111374 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 6 01:44:00.111382 kernel: PCI: Using configuration type 1 for base access
Mar 6 01:44:00.111388 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 6 01:44:00.111395 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 6 01:44:00.111402 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 6 01:44:00.111409 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 6 01:44:00.111419 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 6 01:44:00.111426 kernel: ACPI: Added _OSI(Module Device)
Mar 6 01:44:00.111472 kernel: ACPI: Added _OSI(Processor Device)
Mar 6 01:44:00.111480 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 6 01:44:00.111527 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 6 01:44:00.111536 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 6 01:44:00.111543 kernel: ACPI: Interpreter enabled
Mar 6 01:44:00.111549 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 6 01:44:00.111556 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 6 01:44:00.111568 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 6 01:44:00.111575 kernel: PCI: Using E820 reservations for host bridge windows
Mar 6 01:44:00.111582 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 6 01:44:00.111588 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 6 01:44:00.111967 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 6 01:44:00.112137 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 6 01:44:00.112288 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 6 01:44:00.112298 kernel: PCI host bridge to bus 0000:00
Mar 6 01:44:00.112582 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 6 01:44:00.112727 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 6 01:44:00.112894 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 6 01:44:00.113033 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Mar 6 01:44:00.113167 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 6 01:44:00.113301 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Mar 6 01:44:00.113484 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 6 01:44:00.113792 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 6 01:44:00.114082 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Mar 6 01:44:00.114257 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Mar 6 01:44:00.114410 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Mar 6 01:44:00.114606 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Mar 6 01:44:00.114756 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 6 01:44:00.115017 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Mar 6 01:44:00.115402 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Mar 6 01:44:00.115645 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Mar 6 01:44:00.115898 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Mar 6 01:44:00.116127 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Mar 6 01:44:00.116282 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Mar 6 01:44:00.116572 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Mar 6 01:44:00.116748 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Mar 6 01:44:00.116990 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 6 01:44:00.117270 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Mar 6 01:44:00.117473 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Mar 6 01:44:00.117691 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Mar 6 01:44:00.117871 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Mar 6 01:44:00.118067 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 6 01:44:00.118226 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 6 01:44:00.118552 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 6 01:44:00.118713 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Mar 6 01:44:00.118891 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Mar 6 01:44:00.119088 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 6 01:44:00.119238 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Mar 6 01:44:00.119255 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 6 01:44:00.119262 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 6 01:44:00.119269 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 6 01:44:00.119276 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 6 01:44:00.119283 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 6 01:44:00.119290 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 6 01:44:00.119297 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 6 01:44:00.119304 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 6 01:44:00.119311 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 6 01:44:00.119321 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 6 01:44:00.119328 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 6 01:44:00.119334 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 6 01:44:00.119341 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 6 01:44:00.119348 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 6 01:44:00.119355 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 6 01:44:00.119362 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 6 01:44:00.119369 kernel: iommu: Default domain type: Translated
Mar 6 01:44:00.119376 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 6 01:44:00.119385 kernel: PCI: Using ACPI for IRQ routing
Mar 6 01:44:00.119393 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 6 01:44:00.119399 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 6 01:44:00.119406 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Mar 6 01:44:00.119681 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 6 01:44:00.119859 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 6 01:44:00.120013 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 6 01:44:00.120023 kernel: vgaarb: loaded
Mar 6 01:44:00.120036 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 6 01:44:00.120043 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 6 01:44:00.120050 kernel: clocksource: Switched to clocksource kvm-clock
Mar 6 01:44:00.120057 kernel: VFS: Disk quotas dquot_6.6.0
Mar 6 01:44:00.120064 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 6 01:44:00.120071 kernel: pnp: PnP ACPI init
Mar 6 01:44:00.120278 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 6 01:44:00.120291 kernel: pnp: PnP ACPI: found 6 devices
Mar 6 01:44:00.120298 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 6 01:44:00.120310 kernel: NET: Registered PF_INET protocol family
Mar 6 01:44:00.120317 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 6 01:44:00.120324 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 6 01:44:00.120331 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 6 01:44:00.120338 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 6 01:44:00.120345 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 6 01:44:00.120352 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 6 01:44:00.120359 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 6 01:44:00.120370 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 6 01:44:00.120377 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 6 01:44:00.120383 kernel: NET: Registered PF_XDP protocol family
Mar 6 01:44:00.120623 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 6 01:44:00.120896 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 6 01:44:00.121038 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 6 01:44:00.121172 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Mar 6 01:44:00.121306 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 6 01:44:00.121481 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Mar 6 01:44:00.121499 kernel: PCI: CLS 0 bytes, default 64
Mar 6 01:44:00.121507 kernel: Initialise system trusted keyrings
Mar 6 01:44:00.121514 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 6 01:44:00.121521 kernel: Key type asymmetric registered
Mar 6 01:44:00.121528 kernel: Asymmetric key parser 'x509' registered
Mar 6 01:44:00.121535 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 6 01:44:00.121542 kernel: io scheduler mq-deadline registered
Mar 6 01:44:00.121549 kernel: io scheduler kyber registered
Mar 6 01:44:00.121556 kernel: io scheduler bfq registered
Mar 6 01:44:00.121566 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 6 01:44:00.121574 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 6 01:44:00.121581 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 6 01:44:00.121588 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 6 01:44:00.121595 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 6 01:44:00.121601 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 6 01:44:00.121609 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 6 01:44:00.121616 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 6 01:44:00.121622 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 6 01:44:00.121862 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 6 01:44:00.121875 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 6 01:44:00.122019 kernel: rtc_cmos 00:04: registered as rtc0
Mar 6 01:44:00.122158 kernel: rtc_cmos 00:04: setting system clock to 2026-03-06T01:43:59 UTC (1772761439)
Mar 6 01:44:00.122297 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 6 01:44:00.122306 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 6 01:44:00.122313 kernel: NET: Registered PF_INET6 protocol family
Mar 6 01:44:00.122320 kernel: Segment Routing with IPv6
Mar 6 01:44:00.122333 kernel: In-situ OAM (IOAM) with IPv6
Mar 6 01:44:00.122340 kernel: NET: Registered PF_PACKET protocol family
Mar 6 01:44:00.122346 kernel: Key type dns_resolver registered
Mar 6 01:44:00.122353 kernel: IPI shorthand broadcast: enabled
Mar 6 01:44:00.122361 kernel: sched_clock: Marking stable (2339013508, 483253225)->(3338675633, -516408900)
Mar 6 01:44:00.122368 kernel: registered taskstats version 1
Mar 6 01:44:00.122375 kernel: Loading compiled-in X.509 certificates
Mar 6 01:44:00.122382 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 6d88f6264570591a57b3c9c1e1c99fca6c68b8ca'
Mar 6 01:44:00.122389 kernel: Key type .fscrypt registered
Mar 6 01:44:00.122398 kernel: Key type fscrypt-provisioning registered
Mar 6 01:44:00.122406 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 6 01:44:00.122412 kernel: ima: Allocated hash algorithm: sha1
Mar 6 01:44:00.122420 kernel: ima: No architecture policies found
Mar 6 01:44:00.122427 kernel: clk: Disabling unused clocks
Mar 6 01:44:00.122655 kernel: Freeing unused kernel image (initmem) memory: 42892K
Mar 6 01:44:00.122665 kernel: Write protecting the kernel read-only data: 36864k
Mar 6 01:44:00.122672 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 6 01:44:00.122685 kernel: Run /init as init process
Mar 6 01:44:00.122693 kernel: with arguments:
Mar 6 01:44:00.122700 kernel: /init
Mar 6 01:44:00.122706 kernel: with environment:
Mar 6 01:44:00.122713 kernel: HOME=/
Mar 6 01:44:00.122720 kernel: TERM=linux
Mar 6 01:44:00.122729 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 6 01:44:00.122737 systemd[1]: Detected virtualization kvm.
Mar 6 01:44:00.122748 systemd[1]: Detected architecture x86-64.
Mar 6 01:44:00.122756 systemd[1]: Running in initrd.
Mar 6 01:44:00.122763 systemd[1]: No hostname configured, using default hostname.
Mar 6 01:44:00.122770 systemd[1]: Hostname set to .
Mar 6 01:44:00.122778 systemd[1]: Initializing machine ID from VM UUID.
Mar 6 01:44:00.122785 systemd[1]: Queued start job for default target initrd.target.
Mar 6 01:44:00.122792 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 6 01:44:00.122800 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 6 01:44:00.122811 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 6 01:44:00.122819 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 6 01:44:00.122827 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 6 01:44:00.122862 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 6 01:44:00.122871 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 6 01:44:00.122879 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 6 01:44:00.122886 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 6 01:44:00.122898 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 6 01:44:00.122905 systemd[1]: Reached target paths.target - Path Units.
Mar 6 01:44:00.122913 systemd[1]: Reached target slices.target - Slice Units.
Mar 6 01:44:00.122920 systemd[1]: Reached target swap.target - Swaps.
Mar 6 01:44:00.122943 systemd[1]: Reached target timers.target - Timer Units.
Mar 6 01:44:00.122954 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 6 01:44:00.122965 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 6 01:44:00.122972 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 6 01:44:00.122980 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 6 01:44:00.122988 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 6 01:44:00.122995 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 6 01:44:00.123003 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 6 01:44:00.123010 systemd[1]: Reached target sockets.target - Socket Units.
Mar 6 01:44:00.123018 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 6 01:44:00.123026 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 6 01:44:00.123036 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 6 01:44:00.123044 systemd[1]: Starting systemd-fsck-usr.service...
Mar 6 01:44:00.123052 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 6 01:44:00.123059 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 6 01:44:00.123067 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 6 01:44:00.123075 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 6 01:44:00.123083 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 6 01:44:00.123118 systemd-journald[195]: Collecting audit messages is disabled.
Mar 6 01:44:00.123140 systemd[1]: Finished systemd-fsck-usr.service.
Mar 6 01:44:00.123149 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 6 01:44:00.123160 systemd-journald[195]: Journal started
Mar 6 01:44:00.123175 systemd-journald[195]: Runtime Journal (/run/log/journal/95959e7dd26e40ba81b4c9533f335b84) is 6.0M, max 48.4M, 42.3M free.
Mar 6 01:44:00.115990 systemd-modules-load[196]: Inserted module 'overlay'
Mar 6 01:44:00.261708 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 6 01:44:00.261758 kernel: Bridge firewalling registered
Mar 6 01:44:00.147761 systemd-modules-load[196]: Inserted module 'br_netfilter'
Mar 6 01:44:00.275214 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 6 01:44:00.276022 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 6 01:44:00.282013 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 6 01:44:00.288619 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 6 01:44:00.313801 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 6 01:44:00.321393 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 6 01:44:00.337414 kernel: hrtimer: interrupt took 3850330 ns
Mar 6 01:44:00.338799 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 6 01:44:00.361775 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 6 01:44:00.396795 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 6 01:44:00.405691 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 6 01:44:00.422041 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 6 01:44:00.438715 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 6 01:44:00.452587 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 6 01:44:00.466313 dracut-cmdline[229]: dracut-dracut-053
Mar 6 01:44:00.466652 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 6 01:44:00.473911 dracut-cmdline[229]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a6bcd99e714cc2f1b95dc0d61d9d762252de26a434f12074c16f59200c97ba9c
Mar 6 01:44:00.528172 systemd-resolved[235]: Positive Trust Anchors:
Mar 6 01:44:00.528209 systemd-resolved[235]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 6 01:44:00.528259 systemd-resolved[235]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 6 01:44:00.552091 systemd-resolved[235]: Defaulting to hostname 'linux'.
Mar 6 01:44:00.556205 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 6 01:44:00.556425 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 6 01:44:00.573532 kernel: SCSI subsystem initialized
Mar 6 01:44:00.583514 kernel: Loading iSCSI transport class v2.0-870.
Mar 6 01:44:00.596540 kernel: iscsi: registered transport (tcp)
Mar 6 01:44:00.620348 kernel: iscsi: registered transport (qla4xxx)
Mar 6 01:44:00.620514 kernel: QLogic iSCSI HBA Driver
Mar 6 01:44:00.700868 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 6 01:44:00.710709 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 6 01:44:00.740259 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 6 01:44:00.740302 kernel: device-mapper: uevent: version 1.0.3
Mar 6 01:44:00.740488 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 6 01:44:00.797746 kernel: raid6: avx2x4 gen() 31533 MB/s
Mar 6 01:44:00.815511 kernel: raid6: avx2x2 gen() 27625 MB/s
Mar 6 01:44:00.834378 kernel: raid6: avx2x1 gen() 23074 MB/s
Mar 6 01:44:00.834475 kernel: raid6: using algorithm avx2x4 gen() 31533 MB/s
Mar 6 01:44:00.853476 kernel: raid6: .... xor() 4475 MB/s, rmw enabled
Mar 6 01:44:00.853542 kernel: raid6: using avx2x2 recovery algorithm
Mar 6 01:44:00.878545 kernel: xor: automatically using best checksumming function avx
Mar 6 01:44:01.029500 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 6 01:44:01.046633 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 6 01:44:01.061697 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 6 01:44:01.075609 systemd-udevd[415]: Using default interface naming scheme 'v255'.
Mar 6 01:44:01.080959 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 6 01:44:01.091645 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 6 01:44:01.107091 dracut-pre-trigger[418]: rd.md=0: removing MD RAID activation
Mar 6 01:44:01.149590 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 6 01:44:01.164739 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 6 01:44:01.349157 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 6 01:44:01.369085 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 6 01:44:01.426656 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 6 01:44:01.434585 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 6 01:44:01.442503 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 6 01:44:01.451951 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 6 01:44:01.469825 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 6 01:44:01.486013 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Mar 6 01:44:01.486308 kernel: cryptd: max_cpu_qlen set to 1000
Mar 6 01:44:01.488624 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 6 01:44:01.500530 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 6 01:44:01.500740 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 6 01:44:01.500753 kernel: GPT:9289727 != 19775487
Mar 6 01:44:01.493517 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 6 01:44:01.530981 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 6 01:44:01.531014 kernel: GPT:9289727 != 19775487
Mar 6 01:44:01.531034 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 6 01:44:01.531049 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 6 01:44:01.531066 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 6 01:44:01.531082 kernel: AES CTR mode by8 optimization enabled
Mar 6 01:44:01.515353 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 6 01:44:01.515541 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 6 01:44:01.515695 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 6 01:44:01.516028 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 6 01:44:01.536745 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 6 01:44:01.545870 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 6 01:44:01.568969 kernel: libata version 3.00 loaded.
Mar 6 01:44:01.587497 kernel: BTRFS: device fsid eccec0b1-0068-4620-ab61-f332f16460fa devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (468)
Mar 6 01:44:01.594493 kernel: ahci 0000:00:1f.2: version 3.0
Mar 6 01:44:01.594730 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 6 01:44:01.594744 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 6 01:44:01.598466 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 6 01:44:01.598882 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 6 01:44:01.608802 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (465)
Mar 6 01:44:01.613177 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 6 01:44:01.974399 kernel: scsi host0: ahci
Mar 6 01:44:01.975769 kernel: scsi host1: ahci
Mar 6 01:44:01.976165 kernel: scsi host2: ahci
Mar 6 01:44:01.991228 kernel: scsi host3: ahci
Mar 6 01:44:01.991735 kernel: scsi host4: ahci
Mar 6 01:44:01.992107 kernel: scsi host5: ahci
Mar 6 01:44:01.996374 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Mar 6 01:44:01.996395 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Mar 6 01:44:01.996414 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Mar 6 01:44:01.996500 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Mar 6 01:44:01.996521 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Mar 6 01:44:01.996553 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Mar 6 01:44:01.996572 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 6 01:44:01.996592 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 6 01:44:01.996607 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 6 01:44:01.996624 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 6 01:44:01.996639 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 6 01:44:01.996655 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 6 01:44:01.996673 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 6 01:44:01.996691 kernel: ata3.00: applying bridge limits
Mar 6 01:44:01.996713 kernel: ata3.00: configured for UDMA/100
Mar 6 01:44:01.996729 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 6 01:44:01.992150 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 6 01:44:02.002868 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 6 01:44:02.011626 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 6 01:44:02.016562 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 6 01:44:02.034718 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 6 01:44:02.042114 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 6 01:44:02.052597 disk-uuid[565]: Primary Header is updated.
Mar 6 01:44:02.052597 disk-uuid[565]: Secondary Entries is updated.
Mar 6 01:44:02.052597 disk-uuid[565]: Secondary Header is updated.
Mar 6 01:44:02.062262 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 6 01:44:02.103121 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 6 01:44:02.105408 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 6 01:44:02.109223 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 6 01:44:02.132504 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Mar 6 01:44:03.105548 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 6 01:44:03.105968 disk-uuid[570]: The operation has completed successfully.
Mar 6 01:44:03.148827 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 6 01:44:03.149074 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 6 01:44:03.180728 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 6 01:44:03.187745 sh[598]: Success
Mar 6 01:44:03.205602 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Mar 6 01:44:03.251371 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 6 01:44:03.266629 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 6 01:44:03.270896 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 6 01:44:03.301129 kernel: BTRFS info (device dm-0): first mount of filesystem eccec0b1-0068-4620-ab61-f332f16460fa
Mar 6 01:44:03.301169 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 6 01:44:03.301188 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 6 01:44:03.306925 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 6 01:44:03.306958 kernel: BTRFS info (device dm-0): using free space tree
Mar 6 01:44:03.318718 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 6 01:44:03.321657 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 6 01:44:03.333622 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 6 01:44:03.336934 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 6 01:44:03.352063 kernel: BTRFS info (device vda6): first mount of filesystem dcd455b6-671f-4d9f-a5ce-de07977c88a5
Mar 6 01:44:03.352095 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 6 01:44:03.352107 kernel: BTRFS info (device vda6): using free space tree
Mar 6 01:44:03.359529 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 6 01:44:03.383557 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 6 01:44:03.389155 kernel: BTRFS info (device vda6): last unmount of filesystem dcd455b6-671f-4d9f-a5ce-de07977c88a5
Mar 6 01:44:03.395473 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 6 01:44:03.403642 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 6 01:44:03.703943 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 6 01:44:03.705251 ignition[693]: Ignition 2.19.0
Mar 6 01:44:03.705270 ignition[693]: Stage: fetch-offline
Mar 6 01:44:03.722718 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 6 01:44:03.705362 ignition[693]: no configs at "/usr/lib/ignition/base.d"
Mar 6 01:44:03.705376 ignition[693]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 6 01:44:03.705582 ignition[693]: parsed url from cmdline: ""
Mar 6 01:44:03.705587 ignition[693]: no config URL provided
Mar 6 01:44:03.705594 ignition[693]: reading system config file "/usr/lib/ignition/user.ign"
Mar 6 01:44:03.705609 ignition[693]: no config at "/usr/lib/ignition/user.ign"
Mar 6 01:44:03.705650 ignition[693]: op(1): [started] loading QEMU firmware config module
Mar 6 01:44:03.705661 ignition[693]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 6 01:44:03.717957 ignition[693]: op(1): [finished] loading QEMU firmware config module
Mar 6 01:44:03.761126 systemd-networkd[786]: lo: Link UP
Mar 6 01:44:03.761148 systemd-networkd[786]: lo: Gained carrier
Mar 6 01:44:03.763283 systemd-networkd[786]: Enumeration completed
Mar 6 01:44:03.763763 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 6 01:44:03.764171 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 6 01:44:03.764176 systemd-networkd[786]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 6 01:44:03.769116 systemd[1]: Reached target network.target - Network.
Mar 6 01:44:03.770829 systemd-networkd[786]: eth0: Link UP
Mar 6 01:44:03.770936 systemd-networkd[786]: eth0: Gained carrier
Mar 6 01:44:03.770946 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 6 01:44:03.809567 systemd-networkd[786]: eth0: DHCPv4 address 10.0.0.144/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 6 01:44:04.164889 ignition[693]: parsing config with SHA512: 36c75aae6b3dbd62f96b739f49db50855c2b132e68762a677296edaaab4b69f81f3edcaf22b0a76b18c029c9c9ed7513cd9843bdc1bf175ebed748cdf29ae319
Mar 6 01:44:04.194565 unknown[693]: fetched base config from "system"
Mar 6 01:44:04.195313 ignition[693]: fetch-offline: fetch-offline passed
Mar 6 01:44:04.194599 unknown[693]: fetched user config from "qemu"
Mar 6 01:44:04.195912 ignition[693]: Ignition finished successfully
Mar 6 01:44:04.205276 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 6 01:44:04.208895 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 6 01:44:04.218682 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 6 01:44:04.258336 ignition[790]: Ignition 2.19.0
Mar 6 01:44:04.258391 ignition[790]: Stage: kargs
Mar 6 01:44:04.258694 ignition[790]: no configs at "/usr/lib/ignition/base.d"
Mar 6 01:44:04.258709 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 6 01:44:04.268571 ignition[790]: kargs: kargs passed
Mar 6 01:44:04.268669 ignition[790]: Ignition finished successfully
Mar 6 01:44:04.293114 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 6 01:44:04.307686 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 6 01:44:04.514892 ignition[798]: Ignition 2.19.0
Mar 6 01:44:04.514933 ignition[798]: Stage: disks
Mar 6 01:44:04.521515 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 6 01:44:04.515495 ignition[798]: no configs at "/usr/lib/ignition/base.d"
Mar 6 01:44:04.526618 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 6 01:44:04.515522 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 6 01:44:04.534544 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 6 01:44:04.517527 ignition[798]: disks: disks passed
Mar 6 01:44:04.540272 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 6 01:44:04.517646 ignition[798]: Ignition finished successfully
Mar 6 01:44:04.545663 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 6 01:44:04.546604 systemd-resolved[235]: Detected conflict on linux IN A 10.0.0.144
Mar 6 01:44:04.546619 systemd-resolved[235]: Hostname conflict, changing published hostname from 'linux' to 'linux9'.
Mar 6 01:44:04.548573 systemd[1]: Reached target basic.target - Basic System.
Mar 6 01:44:04.570936 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 6 01:44:04.604908 systemd-fsck[808]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 6 01:44:04.609556 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 6 01:44:04.615572 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 6 01:44:04.750522 kernel: EXT4-fs (vda9): mounted filesystem 6fb83788-0471-4e89-b45f-3a7586a627a9 r/w with ordered data mode. Quota mode: none.
Mar 6 01:44:04.751210 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 6 01:44:04.754715 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 6 01:44:04.781533 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 6 01:44:04.785677 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 6 01:44:04.803759 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (816)
Mar 6 01:44:04.803797 kernel: BTRFS info (device vda6): first mount of filesystem dcd455b6-671f-4d9f-a5ce-de07977c88a5
Mar 6 01:44:04.803815 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 6 01:44:04.803881 kernel: BTRFS info (device vda6): using free space tree
Mar 6 01:44:04.791493 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 6 01:44:04.815083 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 6 01:44:04.791571 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 6 01:44:04.791616 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 6 01:44:04.817943 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 6 01:44:04.825172 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 6 01:44:04.842660 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 6 01:44:04.902888 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory
Mar 6 01:44:04.909833 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory
Mar 6 01:44:04.915529 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory
Mar 6 01:44:04.921329 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 6 01:44:05.055835 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 6 01:44:05.076670 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 6 01:44:05.078532 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 6 01:44:05.101531 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 6 01:44:05.119379 kernel: BTRFS info (device vda6): last unmount of filesystem dcd455b6-671f-4d9f-a5ce-de07977c88a5
Mar 6 01:44:05.223908 systemd-networkd[786]: eth0: Gained IPv6LL
Mar 6 01:44:05.239384 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 6 01:44:05.345331 ignition[929]: INFO : Ignition 2.19.0
Mar 6 01:44:05.345331 ignition[929]: INFO : Stage: mount
Mar 6 01:44:05.368692 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 6 01:44:05.378891 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 6 01:44:05.378891 ignition[929]: INFO : mount: mount passed
Mar 6 01:44:05.378891 ignition[929]: INFO : Ignition finished successfully
Mar 6 01:44:05.398749 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 6 01:44:05.445898 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 6 01:44:05.517674 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 6 01:44:05.536519 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (943)
Mar 6 01:44:05.543701 kernel: BTRFS info (device vda6): first mount of filesystem dcd455b6-671f-4d9f-a5ce-de07977c88a5
Mar 6 01:44:05.543730 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 6 01:44:05.543742 kernel: BTRFS info (device vda6): using free space tree
Mar 6 01:44:05.551559 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 6 01:44:05.554569 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 6 01:44:05.663827 ignition[960]: INFO : Ignition 2.19.0
Mar 6 01:44:05.663827 ignition[960]: INFO : Stage: files
Mar 6 01:44:05.667556 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 6 01:44:05.667556 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 6 01:44:05.667556 ignition[960]: DEBUG : files: compiled without relabeling support, skipping
Mar 6 01:44:05.682190 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 6 01:44:05.682190 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 6 01:44:05.710710 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 6 01:44:05.714492 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 6 01:44:05.718470 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 6 01:44:05.718470 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Mar 6 01:44:05.718470 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Mar 6 01:44:05.718470 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 6 01:44:05.718470 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 6 01:44:05.715666 unknown[960]: wrote ssh authorized keys file for user: core
Mar 6 01:44:05.825533 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 6 01:44:06.158621 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 6 01:44:06.158621 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 6 01:44:06.168503 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 6 01:44:06.168503 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 6 01:44:06.168503 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 6 01:44:06.168503 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 6 01:44:06.168503 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
[finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 6 01:44:06.168503 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 6 01:44:06.168503 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 6 01:44:06.168503 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 6 01:44:06.168503 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 6 01:44:06.168503 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 6 01:44:06.168503 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 6 01:44:06.168503 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 6 01:44:06.168503 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1 Mar 6 01:44:06.655957 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Mar 6 01:44:08.043633 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 6 01:44:08.043633 ignition[960]: INFO : files: op(c): [started] processing unit "containerd.service" Mar 6 01:44:08.053915 ignition[960]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Mar 6 01:44:08.061037 ignition[960]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Mar 6 01:44:08.061037 ignition[960]: INFO : files: op(c): [finished] processing unit "containerd.service" Mar 6 01:44:08.061037 ignition[960]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Mar 6 01:44:08.083185 ignition[960]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 6 01:44:08.087942 ignition[960]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 6 01:44:08.087942 ignition[960]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Mar 6 01:44:08.087942 ignition[960]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Mar 6 01:44:08.099126 ignition[960]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 6 01:44:08.105038 ignition[960]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 6 01:44:08.105038 ignition[960]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Mar 6 01:44:08.105038 ignition[960]: INFO : files: 
op(12): [started] setting preset to disabled for "coreos-metadata.service" Mar 6 01:44:08.221154 ignition[960]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service" Mar 6 01:44:08.237000 ignition[960]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service" Mar 6 01:44:08.241151 ignition[960]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service" Mar 6 01:44:08.241151 ignition[960]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service" Mar 6 01:44:08.241151 ignition[960]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service" Mar 6 01:44:08.241151 ignition[960]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 6 01:44:08.241151 ignition[960]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 6 01:44:08.241151 ignition[960]: INFO : files: files passed Mar 6 01:44:08.241151 ignition[960]: INFO : Ignition finished successfully Mar 6 01:44:08.258368 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 6 01:44:08.278802 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 6 01:44:08.291146 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 6 01:44:08.292116 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 6 01:44:08.292253 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 6 01:44:08.321187 initrd-setup-root-after-ignition[988]: grep: /sysroot/oem/oem-release: No such file or directory Mar 6 01:44:08.329791 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 6 01:44:08.329791 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 6 01:44:08.346422 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 6 01:44:08.332698 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 6 01:44:08.339311 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 6 01:44:08.418313 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 6 01:44:08.537592 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 6 01:44:08.537825 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 6 01:44:08.545870 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 6 01:44:08.551552 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 6 01:44:08.554917 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 6 01:44:08.570244 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 6 01:44:08.596722 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 6 01:44:08.612685 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 6 01:44:08.624675 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 6 01:44:08.627926 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
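The files stage above writes the symlink "/sysroot/etc/extensions/kubernetes.raw" pointing at "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" before reporting "files passed". Purely as an illustration (paths copied from the entries above but evaluated on the final root rather than the initrd's /sysroot; the helper name is made up), a small Python check that the link resolves to an existing extension image:

import os

# Paths as written by the Ignition files stage, without the /sysroot prefix
LINK = "/etc/extensions/kubernetes.raw"
TARGET = "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"

def sysext_link_ok(link: str = LINK, target: str = TARGET) -> bool:
    """True if the symlink exists, points at the expected target, and the target is present."""
    return (
        os.path.islink(link)
        and os.readlink(link) == target
        and os.path.isfile(target)
    )

if __name__ == "__main__":
    print("kubernetes sysext link OK:", sysext_link_ok())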
Mar 6 01:44:08.634619 systemd[1]: Stopped target timers.target - Timer Units. Mar 6 01:44:08.640304 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 6 01:44:08.640498 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 6 01:44:08.646085 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 6 01:44:08.652035 systemd[1]: Stopped target basic.target - Basic System. Mar 6 01:44:08.657526 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 6 01:44:08.662769 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 6 01:44:08.668198 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 6 01:44:08.680729 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 6 01:44:08.686887 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 6 01:44:08.692791 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 6 01:44:08.698575 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 6 01:44:08.704980 systemd[1]: Stopped target swap.target - Swaps. Mar 6 01:44:08.709706 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 6 01:44:08.709907 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 6 01:44:08.716077 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 6 01:44:08.719702 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 6 01:44:08.725870 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 6 01:44:08.726140 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 6 01:44:08.732054 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 6 01:44:08.732212 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 6 01:44:08.738104 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 6 01:44:08.738263 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 6 01:44:08.744115 systemd[1]: Stopped target paths.target - Path Units. Mar 6 01:44:08.749412 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 6 01:44:08.753585 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 6 01:44:08.758829 systemd[1]: Stopped target slices.target - Slice Units. Mar 6 01:44:08.763877 systemd[1]: Stopped target sockets.target - Socket Units. Mar 6 01:44:08.770310 systemd[1]: iscsid.socket: Deactivated successfully. Mar 6 01:44:08.770618 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 6 01:44:08.790113 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 6 01:44:08.790300 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 6 01:44:08.796254 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 6 01:44:08.796550 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 6 01:44:08.801508 systemd[1]: ignition-files.service: Deactivated successfully. Mar 6 01:44:08.801643 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 6 01:44:08.822684 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 6 01:44:08.828378 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 6 01:44:08.833609 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Mar 6 01:44:08.833788 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 6 01:44:08.840250 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 6 01:44:08.840406 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 6 01:44:08.849547 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 6 01:44:08.849679 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 6 01:44:08.931380 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 6 01:44:09.215619 ignition[1014]: INFO : Ignition 2.19.0 Mar 6 01:44:09.215619 ignition[1014]: INFO : Stage: umount Mar 6 01:44:09.215619 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 6 01:44:09.215619 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 6 01:44:09.233534 ignition[1014]: INFO : umount: umount passed Mar 6 01:44:09.233534 ignition[1014]: INFO : Ignition finished successfully Mar 6 01:44:09.219224 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 6 01:44:09.219374 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 6 01:44:09.223885 systemd[1]: Stopped target network.target - Network. Mar 6 01:44:09.228001 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 6 01:44:09.228121 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 6 01:44:09.233599 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 6 01:44:09.233663 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 6 01:44:09.238335 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 6 01:44:09.238397 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 6 01:44:09.243177 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 6 01:44:09.243236 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 6 01:44:09.248373 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 6 01:44:09.254211 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 6 01:44:09.260373 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 6 01:44:09.260532 systemd-networkd[786]: eth0: DHCPv6 lease lost Mar 6 01:44:09.260644 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 6 01:44:09.267542 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 6 01:44:09.267783 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 6 01:44:09.279782 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 6 01:44:09.280200 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 6 01:44:09.292818 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 6 01:44:09.292908 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 6 01:44:09.296024 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 6 01:44:09.296086 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 6 01:44:09.315615 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 6 01:44:09.319088 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 6 01:44:09.319170 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 6 01:44:09.322639 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 6 01:44:09.322697 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Mar 6 01:44:09.327069 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 6 01:44:09.327151 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 6 01:44:09.332299 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 6 01:44:09.332365 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 6 01:44:09.335607 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 6 01:44:09.350879 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 6 01:44:09.351094 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 6 01:44:09.357036 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 6 01:44:09.357114 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 6 01:44:09.361088 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 6 01:44:09.361161 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 6 01:44:09.366822 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 6 01:44:09.366918 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 6 01:44:09.374291 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 6 01:44:09.374373 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 6 01:44:09.397873 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 6 01:44:09.397989 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 6 01:44:09.417912 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 6 01:44:09.420889 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 6 01:44:09.420980 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 6 01:44:09.426382 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Mar 6 01:44:09.426494 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 6 01:44:09.432493 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 6 01:44:09.432549 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 6 01:44:09.438632 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 6 01:44:09.438695 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 6 01:44:09.444224 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 6 01:44:09.538941 systemd-journald[195]: Received SIGTERM from PID 1 (systemd). Mar 6 01:44:09.444376 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 6 01:44:09.449503 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 6 01:44:09.449634 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 6 01:44:09.455835 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 6 01:44:09.471134 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 6 01:44:09.484530 systemd[1]: Switching root. 
Mar 6 01:44:09.558217 systemd-journald[195]: Journal stopped Mar 6 01:44:11.079055 kernel: SELinux: policy capability network_peer_controls=1 Mar 6 01:44:11.079150 kernel: SELinux: policy capability open_perms=1 Mar 6 01:44:11.079169 kernel: SELinux: policy capability extended_socket_class=1 Mar 6 01:44:11.079187 kernel: SELinux: policy capability always_check_network=0 Mar 6 01:44:11.079243 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 6 01:44:11.079269 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 6 01:44:11.079299 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 6 01:44:11.079317 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 6 01:44:11.079335 kernel: audit: type=1403 audit(1772761449.847:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 6 01:44:11.079354 systemd[1]: Successfully loaded SELinux policy in 129.897ms. Mar 6 01:44:11.079393 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.558ms. Mar 6 01:44:11.079412 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 6 01:44:11.079486 systemd[1]: Detected virtualization kvm. Mar 6 01:44:11.079508 systemd[1]: Detected architecture x86-64. Mar 6 01:44:11.079533 systemd[1]: Detected first boot. Mar 6 01:44:11.079551 systemd[1]: Initializing machine ID from VM UUID. Mar 6 01:44:11.079569 zram_generator::config[1075]: No configuration found. Mar 6 01:44:11.079590 systemd[1]: Populated /etc with preset unit settings. Mar 6 01:44:11.079608 systemd[1]: Queued start job for default target multi-user.target. Mar 6 01:44:11.079626 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Mar 6 01:44:11.079644 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 6 01:44:11.079663 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 6 01:44:11.079686 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 6 01:44:11.079704 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 6 01:44:11.079722 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 6 01:44:11.079807 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 6 01:44:11.079827 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 6 01:44:11.079882 systemd[1]: Created slice user.slice - User and Session Slice. Mar 6 01:44:11.079903 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 6 01:44:11.079921 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 6 01:44:11.079939 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 6 01:44:11.079962 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 6 01:44:11.079989 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 6 01:44:11.080013 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
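The "systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR ...)" line above encodes compile-time features as +/- flags. A throwaway Python sketch (illustrative only; the sample string below is a shortened copy of the log's feature list) that splits such a string into enabled and disabled sets:

def split_features(feature_string: str) -> tuple[set[str], set[str]]:
    """Split a feature string like '+PAM +AUDIT -APPARMOR ...' into (enabled, disabled)."""
    enabled, disabled = set(), set()
    for token in feature_string.split():
        if token.startswith("+"):
            enabled.add(token[1:])
        elif token.startswith("-"):
            disabled.add(token[1:])
        # tokens such as 'default-hierarchy=unified' carry settings, not +/- flags
    return enabled, disabled

if __name__ == "__main__":
    features = "+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -FIDO2"
    on, off = split_features(features)
    print("enabled:", sorted(on))
    print("disabled:", sorted(off))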
Mar 6 01:44:11.080036 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 6 01:44:11.080056 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 6 01:44:11.080074 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 6 01:44:11.080092 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 6 01:44:11.080110 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 6 01:44:11.080129 systemd[1]: Reached target slices.target - Slice Units. Mar 6 01:44:11.080158 systemd[1]: Reached target swap.target - Swaps. Mar 6 01:44:11.080177 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 6 01:44:11.080195 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 6 01:44:11.080213 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 6 01:44:11.080232 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Mar 6 01:44:11.080251 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 6 01:44:11.080270 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 6 01:44:11.080288 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 6 01:44:11.080307 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 6 01:44:11.080329 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 6 01:44:11.080348 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 6 01:44:11.080365 systemd[1]: Mounting media.mount - External Media Directory... Mar 6 01:44:11.080385 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 6 01:44:11.080407 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 6 01:44:11.080426 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 6 01:44:11.080502 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 6 01:44:11.080522 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 6 01:44:11.080546 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 6 01:44:11.080565 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 6 01:44:11.080582 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 6 01:44:11.080600 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 6 01:44:11.080619 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 6 01:44:11.080636 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 6 01:44:11.080654 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 6 01:44:11.080672 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 6 01:44:11.080690 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 6 01:44:11.080713 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Mar 6 01:44:11.080735 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) 
Mar 6 01:44:11.080753 kernel: fuse: init (API version 7.39) Mar 6 01:44:11.080771 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 6 01:44:11.080789 kernel: loop: module loaded Mar 6 01:44:11.080806 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 6 01:44:11.080823 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 6 01:44:11.080877 kernel: ACPI: bus type drm_connector registered Mar 6 01:44:11.080906 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 6 01:44:11.080951 systemd-journald[1171]: Collecting audit messages is disabled. Mar 6 01:44:11.080986 systemd-journald[1171]: Journal started Mar 6 01:44:11.081015 systemd-journald[1171]: Runtime Journal (/run/log/journal/95959e7dd26e40ba81b4c9533f335b84) is 6.0M, max 48.4M, 42.3M free. Mar 6 01:44:11.094519 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 6 01:44:11.102498 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 6 01:44:11.107488 systemd[1]: Started systemd-journald.service - Journal Service. Mar 6 01:44:11.111252 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 6 01:44:11.114493 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 6 01:44:11.117787 systemd[1]: Mounted media.mount - External Media Directory. Mar 6 01:44:11.120785 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 6 01:44:11.124067 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 6 01:44:11.127427 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 6 01:44:11.130831 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 6 01:44:11.134677 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 6 01:44:11.138784 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 6 01:44:11.139111 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 6 01:44:11.143056 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 6 01:44:11.143346 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 6 01:44:11.147169 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 6 01:44:11.147519 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 6 01:44:11.150765 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 6 01:44:11.151092 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 6 01:44:11.154801 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 6 01:44:11.155124 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 6 01:44:11.158425 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 6 01:44:11.158840 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 6 01:44:11.164123 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 6 01:44:11.168249 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 6 01:44:11.172134 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 6 01:44:11.188622 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Mar 6 01:44:11.202640 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 6 01:44:11.208717 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 6 01:44:11.212923 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 6 01:44:11.215276 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 6 01:44:11.223652 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 6 01:44:11.227395 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 6 01:44:11.236681 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 6 01:44:11.241976 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 6 01:44:11.244177 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 6 01:44:11.249633 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 6 01:44:11.256097 systemd-journald[1171]: Time spent on flushing to /var/log/journal/95959e7dd26e40ba81b4c9533f335b84 is 25.644ms for 933 entries. Mar 6 01:44:11.256097 systemd-journald[1171]: System Journal (/var/log/journal/95959e7dd26e40ba81b4c9533f335b84) is 8.0M, max 195.6M, 187.6M free. Mar 6 01:44:11.296741 systemd-journald[1171]: Received client request to flush runtime journal. Mar 6 01:44:11.260516 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 6 01:44:11.265667 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 6 01:44:11.269964 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 6 01:44:11.273987 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 6 01:44:11.284600 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 6 01:44:11.297645 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 6 01:44:11.304590 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 6 01:44:11.309066 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 6 01:44:11.313025 systemd-tmpfiles[1213]: ACLs are not supported, ignoring. Mar 6 01:44:11.313043 systemd-tmpfiles[1213]: ACLs are not supported, ignoring. Mar 6 01:44:11.317015 udevadm[1222]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Mar 6 01:44:11.320894 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 6 01:44:11.333715 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 6 01:44:11.366505 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 6 01:44:11.381633 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 6 01:44:11.407203 systemd-tmpfiles[1236]: ACLs are not supported, ignoring. Mar 6 01:44:11.407236 systemd-tmpfiles[1236]: ACLs are not supported, ignoring. Mar 6 01:44:11.415166 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
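journald reports spending 25.644 ms flushing 933 entries to /var/log/journal. A quick back-of-the-envelope check of the per-entry cost, using only the figures from that message:

# Figures taken from the journald flush message above.
flush_ms = 25.644
entries = 933

per_entry_us = flush_ms * 1000 / entries
print(f"average flush cost: {per_entry_us:.1f} µs per entry")  # roughly 27.5 µs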
Mar 6 01:44:11.685987 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 6 01:44:11.701598 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 6 01:44:11.729989 systemd-udevd[1242]: Using default interface naming scheme 'v255'. Mar 6 01:44:11.752686 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 6 01:44:11.766655 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 6 01:44:11.787632 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 6 01:44:11.799794 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Mar 6 01:44:11.827547 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1258) Mar 6 01:44:11.846008 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 6 01:44:11.888003 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 6 01:44:11.914536 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Mar 6 01:44:11.918171 systemd-networkd[1250]: lo: Link UP Mar 6 01:44:11.918992 systemd-networkd[1250]: lo: Gained carrier Mar 6 01:44:11.921506 kernel: ACPI: button: Power Button [PWRF] Mar 6 01:44:11.921634 systemd-networkd[1250]: Enumeration completed Mar 6 01:44:11.921756 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 6 01:44:11.922728 systemd-networkd[1250]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 6 01:44:11.922737 systemd-networkd[1250]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 6 01:44:11.924086 systemd-networkd[1250]: eth0: Link UP Mar 6 01:44:11.924134 systemd-networkd[1250]: eth0: Gained carrier Mar 6 01:44:11.924197 systemd-networkd[1250]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 6 01:44:11.930493 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 6 01:44:11.941267 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Mar 6 01:44:11.941535 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 6 01:44:11.938515 systemd-networkd[1250]: eth0: DHCPv4 address 10.0.0.144/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 6 01:44:11.945649 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 6 01:44:11.963530 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Mar 6 01:44:11.965769 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 6 01:44:11.980479 kernel: mousedev: PS/2 mouse device common for all mice Mar 6 01:44:12.089489 kernel: kvm_amd: TSC scaling supported Mar 6 01:44:12.089567 kernel: kvm_amd: Nested Virtualization enabled Mar 6 01:44:12.089582 kernel: kvm_amd: Nested Paging enabled Mar 6 01:44:12.089622 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Mar 6 01:44:12.089635 kernel: kvm_amd: PMU virtualization is disabled Mar 6 01:44:12.127872 kernel: EDAC MC: Ver: 3.0.0 Mar 6 01:44:12.156984 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 6 01:44:12.212679 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 6 01:44:12.216280 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
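systemd-networkd logs "eth0: DHCPv4 address 10.0.0.144/16, gateway 10.0.0.1 acquired from 10.0.0.1" above. A minimal parsing sketch (illustrative; it assumes only that message format and reuses the line verbatim as test input):

import re

DHCP_RE = re.compile(
    r'systemd-networkd\[\d+\]: (?P<iface>\S+): DHCPv4 address (?P<addr>\S+), '
    r'gateway (?P<gw>\S+) acquired from (?P<server>\S+)'
)

line = ("Mar 6 01:44:11.938515 systemd-networkd[1250]: eth0: DHCPv4 address "
        "10.0.0.144/16, gateway 10.0.0.1 acquired from 10.0.0.1")

match = DHCP_RE.search(line)
if match:
    print(match.groupdict())
    # {'iface': 'eth0', 'addr': '10.0.0.144/16', 'gw': '10.0.0.1', 'server': '10.0.0.1'}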
Mar 6 01:44:12.223642 lvm[1286]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 6 01:44:12.262644 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 6 01:44:12.266613 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 6 01:44:12.279621 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 6 01:44:12.285590 lvm[1291]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 6 01:44:12.321379 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 6 01:44:12.324746 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 6 01:44:12.327808 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 6 01:44:12.327835 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 6 01:44:12.330344 systemd[1]: Reached target machines.target - Containers. Mar 6 01:44:12.333818 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Mar 6 01:44:12.350712 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 6 01:44:12.355608 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 6 01:44:12.359010 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 6 01:44:12.360645 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 6 01:44:12.365416 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Mar 6 01:44:12.372629 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 6 01:44:12.373638 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 6 01:44:12.380776 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 6 01:44:12.388750 kernel: loop0: detected capacity change from 0 to 228704 Mar 6 01:44:12.405006 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 6 01:44:12.406026 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Mar 6 01:44:12.413588 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 6 01:44:12.444580 kernel: loop1: detected capacity change from 0 to 142488 Mar 6 01:44:12.496501 kernel: loop2: detected capacity change from 0 to 140768 Mar 6 01:44:12.546498 kernel: loop3: detected capacity change from 0 to 228704 Mar 6 01:44:12.559485 kernel: loop4: detected capacity change from 0 to 142488 Mar 6 01:44:12.579506 kernel: loop5: detected capacity change from 0 to 140768 Mar 6 01:44:12.593399 (sd-merge)[1313]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Mar 6 01:44:12.594377 (sd-merge)[1313]: Merged extensions into '/usr'. Mar 6 01:44:12.598919 systemd[1]: Reloading requested from client PID 1299 ('systemd-sysext') (unit systemd-sysext.service)... Mar 6 01:44:12.598956 systemd[1]: Reloading... Mar 6 01:44:12.676350 zram_generator::config[1341]: No configuration found. Mar 6 01:44:12.694074 ldconfig[1296]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
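(sd-merge) reports merging the 'containerd-flatcar', 'docker-flatcar', and 'kubernetes' extension images into /usr. As a hedged illustration, the sketch below lists candidate *.raw images in directories systemd-sysext is commonly documented to search; the directory list is an assumption, not something stated in this log:

from pathlib import Path

# Commonly documented systemd-sysext image directories (an assumption, not from this log).
SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

def candidate_sysext_images() -> list[Path]:
    """Return extension images and directories found in the usual search paths."""
    images: list[Path] = []
    for d in SEARCH_DIRS:
        p = Path(d)
        if p.is_dir():
            images.extend(sorted(p.iterdir()))
    return images

if __name__ == "__main__":
    for image in candidate_sysext_images():
        print(image)  # e.g. /etc/extensions/kubernetes.raw written by the Ignition stage above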
Mar 6 01:44:12.819807 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 6 01:44:12.895075 systemd[1]: Reloading finished in 295 ms. Mar 6 01:44:12.926285 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 6 01:44:12.931076 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 6 01:44:12.950722 systemd[1]: Starting ensure-sysext.service... Mar 6 01:44:12.954804 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 6 01:44:12.960267 systemd[1]: Reloading requested from client PID 1385 ('systemctl') (unit ensure-sysext.service)... Mar 6 01:44:12.960302 systemd[1]: Reloading... Mar 6 01:44:12.985021 systemd-tmpfiles[1386]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 6 01:44:12.985518 systemd-tmpfiles[1386]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 6 01:44:12.986743 systemd-tmpfiles[1386]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 6 01:44:12.987069 systemd-tmpfiles[1386]: ACLs are not supported, ignoring. Mar 6 01:44:12.987181 systemd-tmpfiles[1386]: ACLs are not supported, ignoring. Mar 6 01:44:12.995987 systemd-tmpfiles[1386]: Detected autofs mount point /boot during canonicalization of boot. Mar 6 01:44:12.996028 systemd-tmpfiles[1386]: Skipping /boot Mar 6 01:44:13.023721 systemd-tmpfiles[1386]: Detected autofs mount point /boot during canonicalization of boot. Mar 6 01:44:13.023744 systemd-tmpfiles[1386]: Skipping /boot Mar 6 01:44:13.030539 zram_generator::config[1414]: No configuration found. Mar 6 01:44:13.164228 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 6 01:44:13.233757 systemd[1]: Reloading finished in 272 ms. Mar 6 01:44:13.257218 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 6 01:44:13.301973 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 6 01:44:13.310345 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 6 01:44:13.316381 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 6 01:44:13.325890 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 6 01:44:13.333747 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 6 01:44:13.340918 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 6 01:44:13.341688 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 6 01:44:13.344882 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 6 01:44:13.350797 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 6 01:44:13.365946 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 6 01:44:13.370484 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Mar 6 01:44:13.370762 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 6 01:44:13.375508 augenrules[1482]: No rules Mar 6 01:44:13.372579 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 6 01:44:13.377930 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 6 01:44:13.382713 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 6 01:44:13.383076 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 6 01:44:13.387498 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 6 01:44:13.387814 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 6 01:44:13.392427 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 6 01:44:13.392744 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 6 01:44:13.405091 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 6 01:44:13.406642 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 6 01:44:13.424938 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 6 01:44:13.431163 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 6 01:44:13.437814 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 6 01:44:13.440789 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 6 01:44:13.443122 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 6 01:44:13.446665 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 6 01:44:13.449246 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 6 01:44:13.454051 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 6 01:44:13.459107 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 6 01:44:13.459626 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 6 01:44:13.463785 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 6 01:44:13.464140 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 6 01:44:13.468570 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 6 01:44:13.468918 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 6 01:44:13.469263 systemd-resolved[1472]: Positive Trust Anchors: Mar 6 01:44:13.469309 systemd-resolved[1472]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 6 01:44:13.469362 systemd-resolved[1472]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 6 01:44:13.472968 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 6 01:44:13.475799 systemd-resolved[1472]: Defaulting to hostname 'linux'. Mar 6 01:44:13.479670 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 6 01:44:13.488796 systemd[1]: Reached target network.target - Network. Mar 6 01:44:13.491519 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 6 01:44:13.495064 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 6 01:44:13.495313 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 6 01:44:13.505914 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 6 01:44:13.512061 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 6 01:44:13.517888 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 6 01:44:13.524108 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 6 01:44:13.528134 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 6 01:44:13.528474 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 6 01:44:13.528678 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 6 01:44:13.531078 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 6 01:44:13.531513 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 6 01:44:13.536829 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 6 01:44:13.537197 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 6 01:44:13.541635 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 6 01:44:13.541992 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 6 01:44:13.545704 systemd-networkd[1250]: eth0: Gained IPv6LL Mar 6 01:44:13.547964 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 6 01:44:13.548364 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 6 01:44:13.554262 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 6 01:44:13.561213 systemd[1]: Finished ensure-sysext.service. Mar 6 01:44:13.570037 systemd[1]: Reached target network-online.target - Network is Online. 
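systemd-resolved's negative trust anchors above name the zones (home.arpa, the private reverse zones, corp, home, internal, lan, local, and so on) for which DNSSEC validation is skipped. An illustrative helper, with the suffix list trimmed from the log entry above, that tests whether a name falls under one of those anchors:

# A few of the negative trust anchors listed by systemd-resolved above (trimmed for brevity).
NEGATIVE_ANCHORS = [
    "home.arpa", "10.in-addr.arpa", "168.192.in-addr.arpa", "d.f.ip6.arpa",
    "corp", "home", "internal", "intranet", "lan", "local", "private", "test",
]

def under_negative_anchor(name: str) -> bool:
    """True if the domain equals or is a subdomain of a negative trust anchor."""
    labels = name.rstrip(".").lower()
    return any(labels == a or labels.endswith("." + a) for a in NEGATIVE_ANCHORS)

if __name__ == "__main__":
    for host in ("printer.lan", "db.corp", "example.com"):
        print(host, "->", "no DNSSEC validation" if under_negative_anchor(host) else "validated")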
Mar 6 01:44:13.573551 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 6 01:44:13.573681 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 6 01:44:13.588832 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 6 01:44:13.679135 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 6 01:44:14.257427 systemd-resolved[1472]: Clock change detected. Flushing caches. Mar 6 01:44:14.257523 systemd-timesyncd[1533]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 6 01:44:14.257589 systemd-timesyncd[1533]: Initial clock synchronization to Fri 2026-03-06 01:44:14.257299 UTC. Mar 6 01:44:14.260607 systemd[1]: Reached target sysinit.target - System Initialization. Mar 6 01:44:14.264322 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 6 01:44:14.267795 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 6 01:44:14.271383 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 6 01:44:14.274712 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 6 01:44:14.274762 systemd[1]: Reached target paths.target - Path Units. Mar 6 01:44:14.277137 systemd[1]: Reached target time-set.target - System Time Set. Mar 6 01:44:14.280032 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 6 01:44:14.282960 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 6 01:44:14.286087 systemd[1]: Reached target timers.target - Timer Units. Mar 6 01:44:14.289234 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 6 01:44:14.294938 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 6 01:44:14.299711 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 6 01:44:14.304346 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 6 01:44:14.308594 systemd[1]: Reached target sockets.target - Socket Units. Mar 6 01:44:14.311129 systemd[1]: Reached target basic.target - Basic System. Mar 6 01:44:14.314060 systemd[1]: System is tainted: cgroupsv1 Mar 6 01:44:14.314125 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 6 01:44:14.314150 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 6 01:44:14.316013 systemd[1]: Starting containerd.service - containerd container runtime... Mar 6 01:44:14.320240 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 6 01:44:14.324354 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 6 01:44:14.330590 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 6 01:44:14.334700 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 6 01:44:14.337542 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 6 01:44:14.339648 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 6 01:44:14.341953 jq[1541]: false Mar 6 01:44:14.348719 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 6 01:44:14.356707 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 6 01:44:14.361672 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 6 01:44:14.371961 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 6 01:44:14.379501 extend-filesystems[1543]: Found loop3 Mar 6 01:44:14.379501 extend-filesystems[1543]: Found loop4 Mar 6 01:44:14.379501 extend-filesystems[1543]: Found loop5 Mar 6 01:44:14.379501 extend-filesystems[1543]: Found sr0 Mar 6 01:44:14.379501 extend-filesystems[1543]: Found vda Mar 6 01:44:14.379501 extend-filesystems[1543]: Found vda1 Mar 6 01:44:14.379501 extend-filesystems[1543]: Found vda2 Mar 6 01:44:14.379501 extend-filesystems[1543]: Found vda3 Mar 6 01:44:14.379501 extend-filesystems[1543]: Found usr Mar 6 01:44:14.379501 extend-filesystems[1543]: Found vda4 Mar 6 01:44:14.379501 extend-filesystems[1543]: Found vda6 Mar 6 01:44:14.379501 extend-filesystems[1543]: Found vda7 Mar 6 01:44:14.379501 extend-filesystems[1543]: Found vda9 Mar 6 01:44:14.379501 extend-filesystems[1543]: Checking size of /dev/vda9 Mar 6 01:44:14.456144 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 6 01:44:14.456215 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1254) Mar 6 01:44:14.456254 extend-filesystems[1543]: Resized partition /dev/vda9 Mar 6 01:44:14.381911 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 6 01:44:14.394215 dbus-daemon[1539]: [system] SELinux support is enabled Mar 6 01:44:14.460800 extend-filesystems[1575]: resize2fs 1.47.1 (20-May-2024) Mar 6 01:44:14.404931 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 6 01:44:14.407669 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 6 01:44:14.420683 systemd[1]: Starting update-engine.service - Update Engine... Mar 6 01:44:14.447765 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 6 01:44:14.453024 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 6 01:44:14.473280 jq[1577]: true Mar 6 01:44:14.505115 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 6 01:44:14.464034 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 6 01:44:14.505388 update_engine[1573]: I20260306 01:44:14.482354 1573 main.cc:92] Flatcar Update Engine starting Mar 6 01:44:14.505388 update_engine[1573]: I20260306 01:44:14.491991 1573 update_check_scheduler.cc:74] Next update check in 3m1s Mar 6 01:44:14.505959 extend-filesystems[1575]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 6 01:44:14.505959 extend-filesystems[1575]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 6 01:44:14.505959 extend-filesystems[1575]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 6 01:44:14.464536 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 6 01:44:14.542074 extend-filesystems[1543]: Resized filesystem in /dev/vda9 Mar 6 01:44:14.480816 systemd[1]: motdgen.service: Deactivated successfully. Mar 6 01:44:14.488031 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
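extend-filesystems resizes /dev/vda9 from 553472 to 1864699 blocks at a 4k block size. Converting those figures to bytes (numbers taken from the kernel and resize2fs messages above):

BLOCK_SIZE = 4096            # "(4k) blocks" per the resize2fs output above
OLD_BLOCKS = 553472
NEW_BLOCKS = 1864699

old_gib = OLD_BLOCKS * BLOCK_SIZE / 2**30
new_gib = NEW_BLOCKS * BLOCK_SIZE / 2**30
print(f"/dev/vda9 grew from {old_gib:.2f} GiB to {new_gib:.2f} GiB")
# roughly 2.11 GiB -> 7.11 GiB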
Mar 6 01:44:14.493163 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 6 01:44:14.499692 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 6 01:44:14.500183 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 6 01:44:14.509320 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 6 01:44:14.509813 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 6 01:44:14.544695 (ntainerd)[1589]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 6 01:44:14.554045 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 6 01:44:14.554597 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 6 01:44:14.561528 jq[1588]: true Mar 6 01:44:14.563926 systemd-logind[1567]: Watching system buttons on /dev/input/event1 (Power Button) Mar 6 01:44:14.564297 systemd-logind[1567]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 6 01:44:14.566669 systemd-logind[1567]: New seat seat0. Mar 6 01:44:14.574013 systemd[1]: Started systemd-logind.service - User Login Management. Mar 6 01:44:14.602750 dbus-daemon[1539]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 6 01:44:14.610069 tar[1586]: linux-amd64/LICENSE Mar 6 01:44:14.610069 tar[1586]: linux-amd64/helm Mar 6 01:44:14.619622 systemd[1]: Started update-engine.service - Update Engine. Mar 6 01:44:14.633759 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 6 01:44:14.641716 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 6 01:44:14.642057 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 6 01:44:14.642277 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 6 01:44:14.647056 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 6 01:44:14.647195 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 6 01:44:14.653233 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 6 01:44:14.663046 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 6 01:44:14.667909 bash[1623]: Updated "/home/core/.ssh/authorized_keys" Mar 6 01:44:14.676910 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 6 01:44:14.683782 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 6 01:44:14.710323 locksmithd[1624]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 6 01:44:14.853569 containerd[1589]: time="2026-03-06T01:44:14.851520715Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 6 01:44:14.868388 sshd_keygen[1576]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 6 01:44:14.893617 containerd[1589]: time="2026-03-06T01:44:14.893284082Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Mar 6 01:44:14.897600 containerd[1589]: time="2026-03-06T01:44:14.897548945Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 6 01:44:14.897778 containerd[1589]: time="2026-03-06T01:44:14.897751554Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 6 01:44:14.897948 containerd[1589]: time="2026-03-06T01:44:14.897923044Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 6 01:44:14.900298 containerd[1589]: time="2026-03-06T01:44:14.898340874Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 6 01:44:14.900298 containerd[1589]: time="2026-03-06T01:44:14.898372223Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 6 01:44:14.900298 containerd[1589]: time="2026-03-06T01:44:14.898543793Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 6 01:44:14.900298 containerd[1589]: time="2026-03-06T01:44:14.898570062Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 6 01:44:14.900298 containerd[1589]: time="2026-03-06T01:44:14.898992891Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 6 01:44:14.900298 containerd[1589]: time="2026-03-06T01:44:14.899023940Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 6 01:44:14.900298 containerd[1589]: time="2026-03-06T01:44:14.899043637Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 6 01:44:14.900298 containerd[1589]: time="2026-03-06T01:44:14.899058805Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 6 01:44:14.900298 containerd[1589]: time="2026-03-06T01:44:14.899193877Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 6 01:44:14.900298 containerd[1589]: time="2026-03-06T01:44:14.899593704Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 6 01:44:14.900298 containerd[1589]: time="2026-03-06T01:44:14.899934741Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 6 01:44:14.900729 containerd[1589]: time="2026-03-06T01:44:14.899958845Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 6 01:44:14.900729 containerd[1589]: time="2026-03-06T01:44:14.900098967Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Mar 6 01:44:14.900729 containerd[1589]: time="2026-03-06T01:44:14.900183285Z" level=info msg="metadata content store policy set" policy=shared Mar 6 01:44:14.907414 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 6 01:44:14.915827 containerd[1589]: time="2026-03-06T01:44:14.913689148Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 6 01:44:14.915827 containerd[1589]: time="2026-03-06T01:44:14.913753127Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 6 01:44:14.915827 containerd[1589]: time="2026-03-06T01:44:14.913771472Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 6 01:44:14.915827 containerd[1589]: time="2026-03-06T01:44:14.913786450Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 6 01:44:14.915827 containerd[1589]: time="2026-03-06T01:44:14.913800075Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 6 01:44:14.915827 containerd[1589]: time="2026-03-06T01:44:14.913991683Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 6 01:44:14.915827 containerd[1589]: time="2026-03-06T01:44:14.914248653Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 6 01:44:14.915827 containerd[1589]: time="2026-03-06T01:44:14.914378896Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 6 01:44:14.915827 containerd[1589]: time="2026-03-06T01:44:14.914395497Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 6 01:44:14.915827 containerd[1589]: time="2026-03-06T01:44:14.914407159Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 6 01:44:14.915827 containerd[1589]: time="2026-03-06T01:44:14.914419953Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 6 01:44:14.915827 containerd[1589]: time="2026-03-06T01:44:14.914432567Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 6 01:44:14.915827 containerd[1589]: time="2026-03-06T01:44:14.914510011Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 6 01:44:14.915827 containerd[1589]: time="2026-03-06T01:44:14.914525580Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 6 01:44:14.918475 containerd[1589]: time="2026-03-06T01:44:14.914544536Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 6 01:44:14.918475 containerd[1589]: time="2026-03-06T01:44:14.914556418Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 6 01:44:14.918475 containerd[1589]: time="2026-03-06T01:44:14.914568270Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Mar 6 01:44:14.918475 containerd[1589]: time="2026-03-06T01:44:14.914580934Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 6 01:44:14.918475 containerd[1589]: time="2026-03-06T01:44:14.914599538Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 6 01:44:14.918475 containerd[1589]: time="2026-03-06T01:44:14.914612242Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 6 01:44:14.918475 containerd[1589]: time="2026-03-06T01:44:14.914629264Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 6 01:44:14.918475 containerd[1589]: time="2026-03-06T01:44:14.914640755Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 6 01:44:14.918475 containerd[1589]: time="2026-03-06T01:44:14.914653929Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 6 01:44:14.918475 containerd[1589]: time="2026-03-06T01:44:14.914665361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 6 01:44:14.918475 containerd[1589]: time="2026-03-06T01:44:14.914679748Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 6 01:44:14.918475 containerd[1589]: time="2026-03-06T01:44:14.914690729Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 6 01:44:14.918475 containerd[1589]: time="2026-03-06T01:44:14.914703452Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 6 01:44:14.918475 containerd[1589]: time="2026-03-06T01:44:14.914716406Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 6 01:44:14.918748 containerd[1589]: time="2026-03-06T01:44:14.914726615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 6 01:44:14.918748 containerd[1589]: time="2026-03-06T01:44:14.914743537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 6 01:44:14.918748 containerd[1589]: time="2026-03-06T01:44:14.914754988Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 6 01:44:14.918748 containerd[1589]: time="2026-03-06T01:44:14.914769085Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 6 01:44:14.918748 containerd[1589]: time="2026-03-06T01:44:14.914787790Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 6 01:44:14.918748 containerd[1589]: time="2026-03-06T01:44:14.914799662Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 6 01:44:14.918748 containerd[1589]: time="2026-03-06T01:44:14.914809671Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 6 01:44:14.918748 containerd[1589]: time="2026-03-06T01:44:14.914852300Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Mar 6 01:44:14.918748 containerd[1589]: time="2026-03-06T01:44:14.914909778Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 6 01:44:14.918748 containerd[1589]: time="2026-03-06T01:44:14.914921629Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 6 01:44:14.918748 containerd[1589]: time="2026-03-06T01:44:14.914932590Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 6 01:44:14.918748 containerd[1589]: time="2026-03-06T01:44:14.914941888Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 6 01:44:14.918748 containerd[1589]: time="2026-03-06T01:44:14.914956204Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 6 01:44:14.918748 containerd[1589]: time="2026-03-06T01:44:14.914972484Z" level=info msg="NRI interface is disabled by configuration." Mar 6 01:44:14.919012 containerd[1589]: time="2026-03-06T01:44:14.914987783Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 6 01:44:14.919034 containerd[1589]: time="2026-03-06T01:44:14.915210269Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 6 01:44:14.919034 containerd[1589]: time="2026-03-06T01:44:14.915259881Z" level=info msg="Connect containerd service" Mar 6 01:44:14.919034 containerd[1589]: time="2026-03-06T01:44:14.915309794Z" level=info msg="using legacy CRI server" Mar 6 01:44:14.919034 containerd[1589]: time="2026-03-06T01:44:14.915320805Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 6 01:44:14.919034 containerd[1589]: time="2026-03-06T01:44:14.915434537Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 6 01:44:14.919034 containerd[1589]: time="2026-03-06T01:44:14.916231245Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 6 01:44:14.919034 containerd[1589]: time="2026-03-06T01:44:14.916614831Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 6 01:44:14.919034 containerd[1589]: time="2026-03-06T01:44:14.916673271Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 6 01:44:14.919034 containerd[1589]: time="2026-03-06T01:44:14.916718235Z" level=info msg="Start subscribing containerd event" Mar 6 01:44:14.919034 containerd[1589]: time="2026-03-06T01:44:14.916746938Z" level=info msg="Start recovering state" Mar 6 01:44:14.919034 containerd[1589]: time="2026-03-06T01:44:14.916805467Z" level=info msg="Start event monitor" Mar 6 01:44:14.919034 containerd[1589]: time="2026-03-06T01:44:14.916821237Z" level=info msg="Start snapshots syncer" Mar 6 01:44:14.919034 containerd[1589]: time="2026-03-06T01:44:14.916830645Z" level=info msg="Start cni network conf syncer for default" Mar 6 01:44:14.919034 containerd[1589]: time="2026-03-06T01:44:14.916837527Z" level=info msg="Start streaming server" Mar 6 01:44:14.919034 containerd[1589]: time="2026-03-06T01:44:14.916935641Z" level=info msg="containerd successfully booted in 0.069959s" Mar 6 01:44:14.921386 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 6 01:44:14.928148 systemd[1]: Started sshd@0-10.0.0.144:22-10.0.0.1:50428.service - OpenSSH per-connection server daemon (10.0.0.1:50428). Mar 6 01:44:14.936041 systemd[1]: Started containerd.service - containerd container runtime. Mar 6 01:44:14.941760 systemd[1]: issuegen.service: Deactivated successfully. Mar 6 01:44:14.942115 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 6 01:44:14.954996 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 6 01:44:14.988185 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 6 01:44:15.003407 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 6 01:44:15.009683 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 6 01:44:15.013729 systemd[1]: Reached target getty.target - Login Prompts. 
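[Annotation] The "no network config found in /etc/cni/net.d" error above is expected at this stage: no CNI add-on has been installed yet, so containerd's CRI plugin cannot initialize pod networking. Purely as an illustration of what eventually satisfies that check (the name, bridge device, and subnet below are invented, not read from this host; in practice the CNI add-on deployed later writes its own conflist here and the error clears on its own):

```json
{
  "cniVersion": "0.3.1",
  "name": "example-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/24",
        "routes": [ { "dst": "0.0.0.0/0" } ]
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
```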
Mar 6 01:44:15.047761 sshd[1651]: Accepted publickey for core from 10.0.0.1 port 50428 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M Mar 6 01:44:15.049158 sshd[1651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:44:15.063981 systemd-logind[1567]: New session 1 of user core. Mar 6 01:44:15.065137 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 6 01:44:15.083808 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 6 01:44:15.102312 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 6 01:44:15.115782 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 6 01:44:15.130189 (systemd)[1666]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 6 01:44:15.190415 tar[1586]: linux-amd64/README.md Mar 6 01:44:15.205162 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 6 01:44:15.267944 systemd[1666]: Queued start job for default target default.target. Mar 6 01:44:15.268389 systemd[1666]: Created slice app.slice - User Application Slice. Mar 6 01:44:15.268430 systemd[1666]: Reached target paths.target - Paths. Mar 6 01:44:15.268480 systemd[1666]: Reached target timers.target - Timers. Mar 6 01:44:15.278624 systemd[1666]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 6 01:44:15.289132 systemd[1666]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 6 01:44:15.289258 systemd[1666]: Reached target sockets.target - Sockets. Mar 6 01:44:15.289276 systemd[1666]: Reached target basic.target - Basic System. Mar 6 01:44:15.289340 systemd[1666]: Reached target default.target - Main User Target. Mar 6 01:44:15.289401 systemd[1666]: Startup finished in 147ms. Mar 6 01:44:15.289851 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 6 01:44:15.295395 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 6 01:44:15.359715 systemd[1]: Started sshd@1-10.0.0.144:22-10.0.0.1:50442.service - OpenSSH per-connection server daemon (10.0.0.1:50442). Mar 6 01:44:15.391739 sshd[1683]: Accepted publickey for core from 10.0.0.1 port 50442 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M Mar 6 01:44:15.393845 sshd[1683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:44:15.400196 systemd-logind[1567]: New session 2 of user core. Mar 6 01:44:15.410917 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 6 01:44:15.438304 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 01:44:15.441631 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 6 01:44:15.446531 systemd[1]: Startup finished in 12.500s (kernel) + 5.147s (userspace) = 17.648s. Mar 6 01:44:15.473421 sshd[1683]: pam_unix(sshd:session): session closed for user core Mar 6 01:44:15.479681 systemd[1]: Started sshd@2-10.0.0.144:22-10.0.0.1:50456.service - OpenSSH per-connection server daemon (10.0.0.1:50456). Mar 6 01:44:15.480592 systemd[1]: sshd@1-10.0.0.144:22-10.0.0.1:50442.service: Deactivated successfully. Mar 6 01:44:15.482154 (kubelet)[1695]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 6 01:44:15.483974 systemd-logind[1567]: Session 2 logged out. Waiting for processes to exit. Mar 6 01:44:15.485417 systemd[1]: session-2.scope: Deactivated successfully. 
Mar 6 01:44:15.488773 systemd-logind[1567]: Removed session 2. Mar 6 01:44:15.522917 sshd[1697]: Accepted publickey for core from 10.0.0.1 port 50456 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M Mar 6 01:44:15.524403 sshd[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:44:15.529318 systemd-logind[1567]: New session 3 of user core. Mar 6 01:44:15.539760 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 6 01:44:15.592272 sshd[1697]: pam_unix(sshd:session): session closed for user core Mar 6 01:44:15.600720 systemd[1]: Started sshd@3-10.0.0.144:22-10.0.0.1:50464.service - OpenSSH per-connection server daemon (10.0.0.1:50464). Mar 6 01:44:15.601307 systemd[1]: sshd@2-10.0.0.144:22-10.0.0.1:50456.service: Deactivated successfully. Mar 6 01:44:15.604993 systemd-logind[1567]: Session 3 logged out. Waiting for processes to exit. Mar 6 01:44:15.605304 systemd[1]: session-3.scope: Deactivated successfully. Mar 6 01:44:15.606950 systemd-logind[1567]: Removed session 3. Mar 6 01:44:15.634250 sshd[1714]: Accepted publickey for core from 10.0.0.1 port 50464 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M Mar 6 01:44:15.636509 sshd[1714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:44:15.642252 systemd-logind[1567]: New session 4 of user core. Mar 6 01:44:15.652898 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 6 01:44:15.713540 sshd[1714]: pam_unix(sshd:session): session closed for user core Mar 6 01:44:15.724833 systemd[1]: Started sshd@4-10.0.0.144:22-10.0.0.1:50468.service - OpenSSH per-connection server daemon (10.0.0.1:50468). Mar 6 01:44:15.725821 systemd[1]: sshd@3-10.0.0.144:22-10.0.0.1:50464.service: Deactivated successfully. Mar 6 01:44:15.728060 systemd[1]: session-4.scope: Deactivated successfully. Mar 6 01:44:15.729048 systemd-logind[1567]: Session 4 logged out. Waiting for processes to exit. Mar 6 01:44:15.731798 systemd-logind[1567]: Removed session 4. Mar 6 01:44:15.757918 sshd[1723]: Accepted publickey for core from 10.0.0.1 port 50468 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M Mar 6 01:44:15.760140 sshd[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:44:15.765679 systemd-logind[1567]: New session 5 of user core. Mar 6 01:44:15.772967 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 6 01:44:15.836819 sudo[1730]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 6 01:44:15.837367 sudo[1730]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 6 01:44:15.854283 sudo[1730]: pam_unix(sudo:session): session closed for user root Mar 6 01:44:15.856278 sshd[1723]: pam_unix(sshd:session): session closed for user core Mar 6 01:44:15.862794 systemd[1]: Started sshd@5-10.0.0.144:22-10.0.0.1:50480.service - OpenSSH per-connection server daemon (10.0.0.1:50480). Mar 6 01:44:15.863524 systemd[1]: sshd@4-10.0.0.144:22-10.0.0.1:50468.service: Deactivated successfully. Mar 6 01:44:15.866153 systemd[1]: session-5.scope: Deactivated successfully. Mar 6 01:44:15.867028 systemd-logind[1567]: Session 5 logged out. Waiting for processes to exit. Mar 6 01:44:15.869954 systemd-logind[1567]: Removed session 5. 
Mar 6 01:44:15.896260 sshd[1732]: Accepted publickey for core from 10.0.0.1 port 50480 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M Mar 6 01:44:15.897993 sshd[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:44:15.902990 systemd-logind[1567]: New session 6 of user core. Mar 6 01:44:15.914816 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 6 01:44:15.938191 kubelet[1695]: E0306 01:44:15.938040 1695 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 6 01:44:15.941814 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 6 01:44:15.942152 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 6 01:44:15.973660 sudo[1743]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 6 01:44:15.974084 sudo[1743]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 6 01:44:15.978774 sudo[1743]: pam_unix(sudo:session): session closed for user root Mar 6 01:44:15.986229 sudo[1742]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 6 01:44:15.986666 sudo[1742]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 6 01:44:16.005703 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 6 01:44:16.008039 auditctl[1746]: No rules Mar 6 01:44:16.008531 systemd[1]: audit-rules.service: Deactivated successfully. Mar 6 01:44:16.008831 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 6 01:44:16.011798 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 6 01:44:16.047438 augenrules[1765]: No rules Mar 6 01:44:16.049818 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 6 01:44:16.051124 sudo[1742]: pam_unix(sudo:session): session closed for user root Mar 6 01:44:16.053532 sshd[1732]: pam_unix(sshd:session): session closed for user core Mar 6 01:44:16.062711 systemd[1]: Started sshd@6-10.0.0.144:22-10.0.0.1:50482.service - OpenSSH per-connection server daemon (10.0.0.1:50482). Mar 6 01:44:16.063288 systemd[1]: sshd@5-10.0.0.144:22-10.0.0.1:50480.service: Deactivated successfully. Mar 6 01:44:16.065682 systemd-logind[1567]: Session 6 logged out. Waiting for processes to exit. Mar 6 01:44:16.066626 systemd[1]: session-6.scope: Deactivated successfully. Mar 6 01:44:16.068270 systemd-logind[1567]: Removed session 6. Mar 6 01:44:16.093238 sshd[1771]: Accepted publickey for core from 10.0.0.1 port 50482 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M Mar 6 01:44:16.094794 sshd[1771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:44:16.099687 systemd-logind[1567]: New session 7 of user core. Mar 6 01:44:16.109827 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 6 01:44:16.165042 sudo[1778]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 6 01:44:16.165501 sudo[1778]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 6 01:44:16.437707 systemd[1]: Starting docker.service - Docker Application Container Engine... 
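[Annotation] The kubelet failure above ("open /var/lib/kubelet/config.yaml: no such file or directory") is the normal state of a node where the kubelet unit is enabled but `kubeadm init` or `kubeadm join` has not run yet; kubeadm is what writes that file, and systemd simply keeps restarting the kubelet until it exists. For orientation only, a heavily trimmed sketch of the kind of KubeletConfiguration kubeadm drops there (all values illustrative, not taken from this system):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
cgroupDriver: cgroupfs
staticPodPath: /etc/kubernetes/manifests
clusterDNS:
  - 10.96.0.10
clusterDomain: cluster.local
```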
Mar 6 01:44:16.438048 (dockerd)[1796]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 6 01:44:16.695546 dockerd[1796]: time="2026-03-06T01:44:16.695290654Z" level=info msg="Starting up" Mar 6 01:44:16.934974 systemd[1]: var-lib-docker-metacopy\x2dcheck3245847751-merged.mount: Deactivated successfully. Mar 6 01:44:16.961522 dockerd[1796]: time="2026-03-06T01:44:16.961317943Z" level=info msg="Loading containers: start." Mar 6 01:44:17.115502 kernel: Initializing XFRM netlink socket Mar 6 01:44:17.220680 systemd-networkd[1250]: docker0: Link UP Mar 6 01:44:17.246290 dockerd[1796]: time="2026-03-06T01:44:17.246212642Z" level=info msg="Loading containers: done." Mar 6 01:44:17.268948 dockerd[1796]: time="2026-03-06T01:44:17.268855956Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 6 01:44:17.269103 dockerd[1796]: time="2026-03-06T01:44:17.269004403Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 6 01:44:17.269144 dockerd[1796]: time="2026-03-06T01:44:17.269128435Z" level=info msg="Daemon has completed initialization" Mar 6 01:44:17.311845 dockerd[1796]: time="2026-03-06T01:44:17.311778922Z" level=info msg="API listen on /run/docker.sock" Mar 6 01:44:17.311959 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 6 01:44:17.763786 containerd[1589]: time="2026-03-06T01:44:17.763716287Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\"" Mar 6 01:44:18.546954 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1213621367.mount: Deactivated successfully. 
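[Annotation] Once dockerd logs "API listen on /run/docker.sock", the daemon can be queried over that Unix socket without any client library. A minimal probe, assuming root (or membership in the docker group) on a host laid out like this one:

```python
# Minimal sketch: ask the Docker daemon for its version over /run/docker.sock.
import socket

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
    s.connect("/run/docker.sock")
    # HTTP/1.0 so the daemon closes the connection once it has answered.
    s.sendall(b"GET /version HTTP/1.0\r\nHost: docker\r\n\r\n")
    reply = b""
    while chunk := s.recv(4096):
        reply += chunk

print(reply.decode(errors="replace"))
```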
Mar 6 01:44:19.640803 containerd[1589]: time="2026-03-06T01:44:19.640715841Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:44:19.641720 containerd[1589]: time="2026-03-06T01:44:19.641653286Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=30116186" Mar 6 01:44:19.643067 containerd[1589]: time="2026-03-06T01:44:19.642970059Z" level=info msg="ImageCreate event name:\"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:44:19.646601 containerd[1589]: time="2026-03-06T01:44:19.646516369Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:44:19.647744 containerd[1589]: time="2026-03-06T01:44:19.647696966Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"30112785\" in 1.883925576s" Mar 6 01:44:19.647744 containerd[1589]: time="2026-03-06T01:44:19.647726711Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\"" Mar 6 01:44:19.648627 containerd[1589]: time="2026-03-06T01:44:19.648564857Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\"" Mar 6 01:44:20.825630 containerd[1589]: time="2026-03-06T01:44:20.825541670Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:44:20.826422 containerd[1589]: time="2026-03-06T01:44:20.826368996Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=26021810" Mar 6 01:44:20.827744 containerd[1589]: time="2026-03-06T01:44:20.827544181Z" level=info msg="ImageCreate event name:\"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:44:20.830846 containerd[1589]: time="2026-03-06T01:44:20.830754798Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:44:20.831982 containerd[1589]: time="2026-03-06T01:44:20.831915483Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"27678758\" in 1.183310251s" Mar 6 01:44:20.831982 containerd[1589]: time="2026-03-06T01:44:20.831956059Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\"" Mar 6 01:44:20.832906 containerd[1589]: 
time="2026-03-06T01:44:20.832682592Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\"" Mar 6 01:44:21.848788 containerd[1589]: time="2026-03-06T01:44:21.848642056Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:44:21.849579 containerd[1589]: time="2026-03-06T01:44:21.849527677Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=20162746" Mar 6 01:44:21.851030 containerd[1589]: time="2026-03-06T01:44:21.850984821Z" level=info msg="ImageCreate event name:\"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:44:21.854311 containerd[1589]: time="2026-03-06T01:44:21.854226304Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:44:21.855397 containerd[1589]: time="2026-03-06T01:44:21.855356654Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"21819712\" in 1.022643837s" Mar 6 01:44:21.855484 containerd[1589]: time="2026-03-06T01:44:21.855403111Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\"" Mar 6 01:44:21.856112 containerd[1589]: time="2026-03-06T01:44:21.856071159Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\"" Mar 6 01:44:22.753983 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4230105852.mount: Deactivated successfully. 
Mar 6 01:44:23.185795 containerd[1589]: time="2026-03-06T01:44:23.185588279Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:44:23.186958 containerd[1589]: time="2026-03-06T01:44:23.186905298Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=31828647" Mar 6 01:44:23.188178 containerd[1589]: time="2026-03-06T01:44:23.188116529Z" level=info msg="ImageCreate event name:\"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:44:23.190956 containerd[1589]: time="2026-03-06T01:44:23.190849749Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:44:23.192176 containerd[1589]: time="2026-03-06T01:44:23.192095094Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"31827666\" in 1.335911194s" Mar 6 01:44:23.192176 containerd[1589]: time="2026-03-06T01:44:23.192149626Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\"" Mar 6 01:44:23.192828 containerd[1589]: time="2026-03-06T01:44:23.192784872Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Mar 6 01:44:23.650667 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2346889342.mount: Deactivated successfully. 
Mar 6 01:44:24.445847 containerd[1589]: time="2026-03-06T01:44:24.445739428Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:44:24.446671 containerd[1589]: time="2026-03-06T01:44:24.446600984Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Mar 6 01:44:24.448176 containerd[1589]: time="2026-03-06T01:44:24.448111030Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:44:24.451861 containerd[1589]: time="2026-03-06T01:44:24.451797458Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:44:24.453517 containerd[1589]: time="2026-03-06T01:44:24.453425123Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.260595377s" Mar 6 01:44:24.453587 containerd[1589]: time="2026-03-06T01:44:24.453524969Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Mar 6 01:44:24.454490 containerd[1589]: time="2026-03-06T01:44:24.454296720Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 6 01:44:24.852696 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2937844884.mount: Deactivated successfully. 
Mar 6 01:44:24.858971 containerd[1589]: time="2026-03-06T01:44:24.858907135Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:44:24.859892 containerd[1589]: time="2026-03-06T01:44:24.859778951Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Mar 6 01:44:24.861301 containerd[1589]: time="2026-03-06T01:44:24.861236485Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:44:24.863902 containerd[1589]: time="2026-03-06T01:44:24.863820129Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:44:24.864845 containerd[1589]: time="2026-03-06T01:44:24.864791090Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 410.434549ms" Mar 6 01:44:24.864845 containerd[1589]: time="2026-03-06T01:44:24.864833920Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Mar 6 01:44:24.865591 containerd[1589]: time="2026-03-06T01:44:24.865556359Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Mar 6 01:44:25.296540 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1216555528.mount: Deactivated successfully. Mar 6 01:44:26.192301 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 6 01:44:26.199608 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
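[Annotation] The PullImage lines in this stretch of the log are containerd fetching the control-plane images ahead of the static pods that will need them, roughly what `kubeadm config images pull` automates. A sketch of pre-pulling the same references by hand through containerd's ctr tool (image list copied from the pull requests above, invocation assumed to run as root with ctr on PATH):

```python
# Minimal sketch: pre-pull the control-plane images seen in the log
# into containerd's k8s.io namespace.
import subprocess

IMAGES = [
    "registry.k8s.io/kube-apiserver:v1.33.9",
    "registry.k8s.io/kube-controller-manager:v1.33.9",
    "registry.k8s.io/kube-scheduler:v1.33.9",
    "registry.k8s.io/kube-proxy:v1.33.9",
    "registry.k8s.io/coredns/coredns:v1.12.0",
    "registry.k8s.io/pause:3.10",
    "registry.k8s.io/etcd:3.5.24-0",
]

for ref in IMAGES:
    # The kubelet pulls into the "k8s.io" containerd namespace, so do the same.
    subprocess.run(["ctr", "--namespace", "k8s.io", "images", "pull", ref], check=True)
```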
Mar 6 01:44:26.218685 containerd[1589]: time="2026-03-06T01:44:26.218636244Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:44:26.219673 containerd[1589]: time="2026-03-06T01:44:26.219631964Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718840" Mar 6 01:44:26.228481 containerd[1589]: time="2026-03-06T01:44:26.228400096Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:44:26.237413 containerd[1589]: time="2026-03-06T01:44:26.237345506Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:44:26.238721 containerd[1589]: time="2026-03-06T01:44:26.238677167Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 1.373082697s" Mar 6 01:44:26.238721 containerd[1589]: time="2026-03-06T01:44:26.238718204Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Mar 6 01:44:26.375418 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 01:44:26.391125 (kubelet)[2161]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 6 01:44:26.449905 kubelet[2161]: E0306 01:44:26.449339 2161 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 6 01:44:26.456354 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 6 01:44:26.456751 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 6 01:44:29.076912 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 01:44:29.088680 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 01:44:29.117369 systemd[1]: Reloading requested from client PID 2195 ('systemctl') (unit session-7.scope)... Mar 6 01:44:29.117402 systemd[1]: Reloading... Mar 6 01:44:29.189534 zram_generator::config[2243]: No configuration found. Mar 6 01:44:29.299764 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 6 01:44:29.373045 systemd[1]: Reloading finished in 255 ms. Mar 6 01:44:29.421182 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 6 01:44:29.421319 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 6 01:44:29.421797 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 01:44:29.424223 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 6 01:44:29.610288 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 01:44:29.615717 (kubelet)[2294]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 6 01:44:29.663961 kubelet[2294]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 6 01:44:29.663961 kubelet[2294]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 6 01:44:29.663961 kubelet[2294]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 6 01:44:29.663961 kubelet[2294]: I0306 01:44:29.663854 2294 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 6 01:44:29.888030 kubelet[2294]: I0306 01:44:29.887958 2294 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 6 01:44:29.888030 kubelet[2294]: I0306 01:44:29.887996 2294 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 6 01:44:29.888254 kubelet[2294]: I0306 01:44:29.888169 2294 server.go:956] "Client rotation is on, will bootstrap in background" Mar 6 01:44:29.909403 kubelet[2294]: E0306 01:44:29.909354 2294 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.144:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 6 01:44:29.912307 kubelet[2294]: I0306 01:44:29.912245 2294 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 6 01:44:29.917783 kubelet[2294]: E0306 01:44:29.917660 2294 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 6 01:44:29.917783 kubelet[2294]: I0306 01:44:29.917695 2294 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 6 01:44:29.924943 kubelet[2294]: I0306 01:44:29.924894 2294 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 6 01:44:29.925984 kubelet[2294]: I0306 01:44:29.925903 2294 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 6 01:44:29.926196 kubelet[2294]: I0306 01:44:29.925963 2294 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Mar 6 01:44:29.926196 kubelet[2294]: I0306 01:44:29.926180 2294 topology_manager.go:138] "Creating topology manager with none policy" Mar 6 01:44:29.926196 kubelet[2294]: I0306 01:44:29.926190 2294 container_manager_linux.go:303] "Creating device plugin manager" Mar 6 01:44:29.926367 kubelet[2294]: I0306 01:44:29.926314 2294 state_mem.go:36] "Initialized new in-memory state store" Mar 6 01:44:29.929863 kubelet[2294]: I0306 01:44:29.929808 2294 kubelet.go:480] "Attempting to sync node with API server" Mar 6 01:44:29.929863 kubelet[2294]: I0306 01:44:29.929836 2294 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 6 01:44:29.929863 kubelet[2294]: I0306 01:44:29.929862 2294 kubelet.go:386] "Adding apiserver pod source" Mar 6 01:44:29.931124 kubelet[2294]: I0306 01:44:29.931107 2294 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 6 01:44:29.935130 kubelet[2294]: I0306 01:44:29.935074 2294 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 6 01:44:29.935809 kubelet[2294]: I0306 01:44:29.935754 2294 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 6 01:44:29.936771 kubelet[2294]: W0306 01:44:29.936724 2294 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
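[Annotation] Two values in the dump above are worth matching up: the kubelet is running with "CgroupDriver":"cgroupfs", and containerd earlier reported SystemdCgroup:false for the runc runtime, so the two sides currently agree. If either were switched to the systemd cgroup driver, the other would have to follow. The knobs involved look like this (typical default paths and section names, shown as an illustration rather than read from this host):

```
# /etc/containerd/config.toml  (CRI plugin, runc runtime options)
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true

# /var/lib/kubelet/config.yaml  (KubeletConfiguration)
cgroupDriver: systemd
```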
Mar 6 01:44:29.936827 kubelet[2294]: E0306 01:44:29.936779 2294 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.144:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 6 01:44:29.936914 kubelet[2294]: E0306 01:44:29.936856 2294 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.144:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 6 01:44:29.941641 kubelet[2294]: I0306 01:44:29.941593 2294 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 6 01:44:29.941641 kubelet[2294]: I0306 01:44:29.941640 2294 server.go:1289] "Started kubelet" Mar 6 01:44:29.942061 kubelet[2294]: I0306 01:44:29.941964 2294 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 6 01:44:29.943985 kubelet[2294]: I0306 01:44:29.942796 2294 server.go:317] "Adding debug handlers to kubelet server" Mar 6 01:44:29.943985 kubelet[2294]: I0306 01:44:29.943286 2294 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 6 01:44:29.943985 kubelet[2294]: I0306 01:44:29.943838 2294 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 6 01:44:29.945630 kubelet[2294]: E0306 01:44:29.944648 2294 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.144:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.144:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189a1d2a175ea7b8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-06 01:44:29.941622712 +0000 UTC m=+0.321072032,LastTimestamp:2026-03-06 01:44:29.941622712 +0000 UTC m=+0.321072032,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 6 01:44:29.945917 kubelet[2294]: I0306 01:44:29.945805 2294 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 6 01:44:29.946107 kubelet[2294]: I0306 01:44:29.946057 2294 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 6 01:44:29.947149 kubelet[2294]: I0306 01:44:29.947124 2294 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 6 01:44:29.947228 kubelet[2294]: I0306 01:44:29.947209 2294 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 6 01:44:29.947276 kubelet[2294]: I0306 01:44:29.947262 2294 reconciler.go:26] "Reconciler: start to sync state" Mar 6 01:44:29.947594 kubelet[2294]: E0306 01:44:29.947551 2294 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.144:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 6 01:44:29.947860 kubelet[2294]: E0306 01:44:29.947722 2294 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 6 01:44:29.947860 kubelet[2294]: E0306 01:44:29.947802 2294 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.144:6443: connect: connection refused" interval="200ms" Mar 6 01:44:29.948767 kubelet[2294]: I0306 01:44:29.948667 2294 factory.go:223] Registration of the systemd container factory successfully Mar 6 01:44:29.948813 kubelet[2294]: E0306 01:44:29.948776 2294 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 6 01:44:29.948813 kubelet[2294]: I0306 01:44:29.948797 2294 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 6 01:44:29.949866 kubelet[2294]: I0306 01:44:29.949853 2294 factory.go:223] Registration of the containerd container factory successfully Mar 6 01:44:29.973558 kubelet[2294]: I0306 01:44:29.973373 2294 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 6 01:44:29.976662 kubelet[2294]: I0306 01:44:29.976627 2294 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 6 01:44:29.976711 kubelet[2294]: I0306 01:44:29.976672 2294 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 6 01:44:29.976711 kubelet[2294]: I0306 01:44:29.976690 2294 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 6 01:44:29.976711 kubelet[2294]: I0306 01:44:29.976699 2294 kubelet.go:2436] "Starting kubelet main sync loop" Mar 6 01:44:29.976785 kubelet[2294]: E0306 01:44:29.976741 2294 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 6 01:44:29.977345 kubelet[2294]: E0306 01:44:29.977164 2294 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.144:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 6 01:44:29.978926 kubelet[2294]: I0306 01:44:29.978899 2294 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 6 01:44:29.978926 kubelet[2294]: I0306 01:44:29.978923 2294 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 6 01:44:29.978997 kubelet[2294]: I0306 01:44:29.978940 2294 state_mem.go:36] "Initialized new in-memory state store" Mar 6 01:44:30.048234 kubelet[2294]: E0306 01:44:30.048079 2294 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 6 01:44:30.066028 kubelet[2294]: I0306 01:44:30.065924 2294 policy_none.go:49] "None policy: Start" Mar 6 01:44:30.066028 kubelet[2294]: I0306 01:44:30.065988 2294 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 6 01:44:30.066028 kubelet[2294]: I0306 01:44:30.066010 2294 state_mem.go:35] "Initializing new in-memory state store" Mar 6 01:44:30.073221 kubelet[2294]: E0306 01:44:30.073175 2294 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 6 01:44:30.075396 kubelet[2294]: I0306 01:44:30.073381 2294 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 6 01:44:30.075396 kubelet[2294]: I0306 01:44:30.073396 2294 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 6 01:44:30.075396 kubelet[2294]: I0306 01:44:30.074797 2294 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 6 01:44:30.075654 kubelet[2294]: E0306 01:44:30.075636 2294 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 6 01:44:30.075937 kubelet[2294]: E0306 01:44:30.075861 2294 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 6 01:44:30.086791 kubelet[2294]: E0306 01:44:30.086719 2294 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 01:44:30.089135 kubelet[2294]: E0306 01:44:30.089100 2294 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 01:44:30.092649 kubelet[2294]: E0306 01:44:30.092616 2294 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 01:44:30.149320 kubelet[2294]: E0306 01:44:30.149105 2294 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.144:6443: connect: connection refused" interval="400ms" Mar 6 01:44:30.175973 kubelet[2294]: I0306 01:44:30.175750 2294 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 6 01:44:30.176764 kubelet[2294]: E0306 01:44:30.176730 2294 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.144:6443/api/v1/nodes\": dial tcp 10.0.0.144:6443: connect: connection refused" node="localhost" Mar 6 01:44:30.249761 kubelet[2294]: I0306 01:44:30.249285 2294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e944e4cb17af904786c3a2e01e298498-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e944e4cb17af904786c3a2e01e298498\") " pod="kube-system/kube-scheduler-localhost" Mar 6 01:44:30.249761 kubelet[2294]: I0306 01:44:30.249353 2294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 01:44:30.249761 kubelet[2294]: I0306 01:44:30.249372 2294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 01:44:30.249761 kubelet[2294]: I0306 01:44:30.249387 2294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 01:44:30.249761 kubelet[2294]: I0306 01:44:30.249402 2294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: 
\"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 01:44:30.250134 kubelet[2294]: I0306 01:44:30.249416 2294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3ec90db59ea64eccf6d9a9824e4381ef-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3ec90db59ea64eccf6d9a9824e4381ef\") " pod="kube-system/kube-apiserver-localhost" Mar 6 01:44:30.250134 kubelet[2294]: I0306 01:44:30.249430 2294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3ec90db59ea64eccf6d9a9824e4381ef-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3ec90db59ea64eccf6d9a9824e4381ef\") " pod="kube-system/kube-apiserver-localhost" Mar 6 01:44:30.250134 kubelet[2294]: I0306 01:44:30.249690 2294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3ec90db59ea64eccf6d9a9824e4381ef-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3ec90db59ea64eccf6d9a9824e4381ef\") " pod="kube-system/kube-apiserver-localhost" Mar 6 01:44:30.250134 kubelet[2294]: I0306 01:44:30.249723 2294 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 01:44:30.378405 kubelet[2294]: I0306 01:44:30.378331 2294 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 6 01:44:30.378895 kubelet[2294]: E0306 01:44:30.378830 2294 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.144:6443/api/v1/nodes\": dial tcp 10.0.0.144:6443: connect: connection refused" node="localhost" Mar 6 01:44:30.388197 kubelet[2294]: E0306 01:44:30.388071 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:30.388726 containerd[1589]: time="2026-03-06T01:44:30.388642983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8747e1f8a49a618fbc1324a8fe2d3754,Namespace:kube-system,Attempt:0,}" Mar 6 01:44:30.390327 kubelet[2294]: E0306 01:44:30.390294 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:30.390953 containerd[1589]: time="2026-03-06T01:44:30.390735517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e944e4cb17af904786c3a2e01e298498,Namespace:kube-system,Attempt:0,}" Mar 6 01:44:30.393304 kubelet[2294]: E0306 01:44:30.393266 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:30.393808 containerd[1589]: time="2026-03-06T01:44:30.393765753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3ec90db59ea64eccf6d9a9824e4381ef,Namespace:kube-system,Attempt:0,}" Mar 6 01:44:30.550391 kubelet[2294]: E0306 01:44:30.550031 2294 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.144:6443: connect: connection refused" interval="800ms" Mar 6 01:44:30.780812 kubelet[2294]: I0306 01:44:30.780673 2294 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 6 01:44:30.781550 kubelet[2294]: E0306 01:44:30.781210 2294 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.144:6443/api/v1/nodes\": dial tcp 10.0.0.144:6443: connect: connection refused" node="localhost" Mar 6 01:44:30.811650 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2519635409.mount: Deactivated successfully. Mar 6 01:44:30.818779 containerd[1589]: time="2026-03-06T01:44:30.818663420Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 6 01:44:30.822812 containerd[1589]: time="2026-03-06T01:44:30.822634881Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Mar 6 01:44:30.823893 containerd[1589]: time="2026-03-06T01:44:30.823768262Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 6 01:44:30.825148 containerd[1589]: time="2026-03-06T01:44:30.825093326Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 6 01:44:30.826348 containerd[1589]: time="2026-03-06T01:44:30.826231356Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 6 01:44:30.827560 containerd[1589]: time="2026-03-06T01:44:30.827432753Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 6 01:44:30.828336 containerd[1589]: time="2026-03-06T01:44:30.828292057Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 6 01:44:30.830637 containerd[1589]: time="2026-03-06T01:44:30.830568736Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 6 01:44:30.832673 containerd[1589]: time="2026-03-06T01:44:30.832618278Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 438.774509ms" Mar 6 01:44:30.835066 containerd[1589]: time="2026-03-06T01:44:30.835022345Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" 
in 444.233989ms" Mar 6 01:44:30.836425 containerd[1589]: time="2026-03-06T01:44:30.836350432Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 447.617411ms" Mar 6 01:44:30.908954 kubelet[2294]: E0306 01:44:30.908748 2294 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.144:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 6 01:44:30.961504 containerd[1589]: time="2026-03-06T01:44:30.960982363Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:44:30.961504 containerd[1589]: time="2026-03-06T01:44:30.961036785Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:44:30.961504 containerd[1589]: time="2026-03-06T01:44:30.961050841Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:44:30.961504 containerd[1589]: time="2026-03-06T01:44:30.961208024Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:44:30.961977 containerd[1589]: time="2026-03-06T01:44:30.961825551Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:44:30.961977 containerd[1589]: time="2026-03-06T01:44:30.961868341Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:44:30.961977 containerd[1589]: time="2026-03-06T01:44:30.961917322Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:44:30.966168 containerd[1589]: time="2026-03-06T01:44:30.965553684Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:44:30.968167 containerd[1589]: time="2026-03-06T01:44:30.968081520Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:44:30.969329 containerd[1589]: time="2026-03-06T01:44:30.969103318Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:44:30.969329 containerd[1589]: time="2026-03-06T01:44:30.969122704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:44:30.969329 containerd[1589]: time="2026-03-06T01:44:30.969219545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:44:31.031619 containerd[1589]: time="2026-03-06T01:44:31.031429776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e944e4cb17af904786c3a2e01e298498,Namespace:kube-system,Attempt:0,} returns sandbox id \"e437fbb4f040abe8eaa9df342514b694b4a5d3fd75c1648e997f613bb54c49e4\"" Mar 6 01:44:31.033005 kubelet[2294]: E0306 01:44:31.032972 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:31.040023 containerd[1589]: time="2026-03-06T01:44:31.039984688Z" level=info msg="CreateContainer within sandbox \"e437fbb4f040abe8eaa9df342514b694b4a5d3fd75c1648e997f613bb54c49e4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 6 01:44:31.042287 containerd[1589]: time="2026-03-06T01:44:31.042156306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3ec90db59ea64eccf6d9a9824e4381ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"951a54e11c610310440787bca23b3d1f53d475cccdb79893b205a52f870d0cce\"" Mar 6 01:44:31.044900 kubelet[2294]: E0306 01:44:31.044752 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:31.045841 containerd[1589]: time="2026-03-06T01:44:31.045774362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8747e1f8a49a618fbc1324a8fe2d3754,Namespace:kube-system,Attempt:0,} returns sandbox id \"d28b372c001f1268d3821d2182febc9b296f12d0bdbe961ec08ba597f6a38a60\"" Mar 6 01:44:31.046437 kubelet[2294]: E0306 01:44:31.046410 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:31.049404 containerd[1589]: time="2026-03-06T01:44:31.049380015Z" level=info msg="CreateContainer within sandbox \"951a54e11c610310440787bca23b3d1f53d475cccdb79893b205a52f870d0cce\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 6 01:44:31.053269 containerd[1589]: time="2026-03-06T01:44:31.053214963Z" level=info msg="CreateContainer within sandbox \"d28b372c001f1268d3821d2182febc9b296f12d0bdbe961ec08ba597f6a38a60\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 6 01:44:31.065574 containerd[1589]: time="2026-03-06T01:44:31.065379251Z" level=info msg="CreateContainer within sandbox \"e437fbb4f040abe8eaa9df342514b694b4a5d3fd75c1648e997f613bb54c49e4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"fe2e09a5dc0999573dcd5f8b92dbad40348fd983685839e28cd745f4e460e277\"" Mar 6 01:44:31.066174 containerd[1589]: time="2026-03-06T01:44:31.066109185Z" level=info msg="StartContainer for \"fe2e09a5dc0999573dcd5f8b92dbad40348fd983685839e28cd745f4e460e277\"" Mar 6 01:44:31.071339 containerd[1589]: time="2026-03-06T01:44:31.071310927Z" level=info msg="CreateContainer within sandbox \"951a54e11c610310440787bca23b3d1f53d475cccdb79893b205a52f870d0cce\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b3d76ad6d67aa3ea01278e82543fd37851ae47040b95f632f0ccac429fb50104\"" Mar 6 01:44:31.072196 containerd[1589]: time="2026-03-06T01:44:31.072148966Z" level=info msg="StartContainer for 
\"b3d76ad6d67aa3ea01278e82543fd37851ae47040b95f632f0ccac429fb50104\"" Mar 6 01:44:31.133517 containerd[1589]: time="2026-03-06T01:44:31.133382990Z" level=info msg="CreateContainer within sandbox \"d28b372c001f1268d3821d2182febc9b296f12d0bdbe961ec08ba597f6a38a60\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d2ce3aa5e65a1c60123bf9be77f86d3973c5bce1cf9d4037ebfa6e56166a81f9\"" Mar 6 01:44:31.136027 containerd[1589]: time="2026-03-06T01:44:31.134322568Z" level=info msg="StartContainer for \"d2ce3aa5e65a1c60123bf9be77f86d3973c5bce1cf9d4037ebfa6e56166a81f9\"" Mar 6 01:44:31.173804 containerd[1589]: time="2026-03-06T01:44:31.173729876Z" level=info msg="StartContainer for \"fe2e09a5dc0999573dcd5f8b92dbad40348fd983685839e28cd745f4e460e277\" returns successfully" Mar 6 01:44:31.185702 containerd[1589]: time="2026-03-06T01:44:31.185664302Z" level=info msg="StartContainer for \"b3d76ad6d67aa3ea01278e82543fd37851ae47040b95f632f0ccac429fb50104\" returns successfully" Mar 6 01:44:31.240641 containerd[1589]: time="2026-03-06T01:44:31.240555563Z" level=info msg="StartContainer for \"d2ce3aa5e65a1c60123bf9be77f86d3973c5bce1cf9d4037ebfa6e56166a81f9\" returns successfully" Mar 6 01:44:31.585687 kubelet[2294]: I0306 01:44:31.585354 2294 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 6 01:44:31.988367 kubelet[2294]: E0306 01:44:31.988125 2294 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 01:44:31.990530 kubelet[2294]: E0306 01:44:31.989598 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:32.000926 kubelet[2294]: E0306 01:44:32.000864 2294 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 01:44:32.001075 kubelet[2294]: E0306 01:44:32.001024 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:32.002557 kubelet[2294]: E0306 01:44:32.002533 2294 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 01:44:32.002959 kubelet[2294]: E0306 01:44:32.002824 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:36.668179 kubelet[2294]: E0306 01:44:36.656349 2294 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 01:44:36.668179 kubelet[2294]: E0306 01:44:36.664854 2294 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 6 01:44:36.668179 kubelet[2294]: E0306 01:44:36.657330 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:36.668179 kubelet[2294]: E0306 01:44:36.667359 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:36.919661 kubelet[2294]: E0306 01:44:36.918875 2294 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 6 01:44:36.998580 kubelet[2294]: I0306 01:44:36.998285 2294 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 6 01:44:36.998580 kubelet[2294]: E0306 01:44:36.998355 2294 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Mar 6 01:44:37.006275 kubelet[2294]: E0306 01:44:37.003503 2294 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.189a1d2a175ea7b8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-06 01:44:29.941622712 +0000 UTC m=+0.321072032,LastTimestamp:2026-03-06 01:44:29.941622712 +0000 UTC m=+0.321072032,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 6 01:44:37.049836 kubelet[2294]: I0306 01:44:37.049315 2294 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 6 01:44:37.059698 kubelet[2294]: E0306 01:44:37.058962 2294 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.189a1d2a179ee01e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:CgroupV1,Message:cgroup v1 support is in maintenance mode, please migrate to cgroup v2,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-06 01:44:29.945831454 +0000 UTC m=+0.325280773,LastTimestamp:2026-03-06 01:44:29.945831454 +0000 UTC m=+0.325280773,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 6 01:44:37.066161 kubelet[2294]: E0306 01:44:37.066106 2294 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Mar 6 01:44:37.066161 kubelet[2294]: I0306 01:44:37.066160 2294 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 6 01:44:37.069171 kubelet[2294]: E0306 01:44:37.068816 2294 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Mar 6 01:44:37.069171 kubelet[2294]: I0306 01:44:37.068927 2294 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 6 01:44:37.070504 kubelet[2294]: E0306 01:44:37.070347 2294 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Mar 6 01:44:37.613129 
kubelet[2294]: I0306 01:44:37.613002 2294 apiserver.go:52] "Watching apiserver" Mar 6 01:44:37.648623 kubelet[2294]: I0306 01:44:37.648403 2294 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 6 01:44:38.240614 kubelet[2294]: I0306 01:44:38.240547 2294 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 6 01:44:38.248709 kubelet[2294]: E0306 01:44:38.248266 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:38.643784 kubelet[2294]: E0306 01:44:38.642197 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:38.924079 kubelet[2294]: I0306 01:44:38.923215 2294 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 6 01:44:38.937393 kubelet[2294]: E0306 01:44:38.937223 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:39.144348 kubelet[2294]: I0306 01:44:39.144246 2294 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 6 01:44:39.152390 kubelet[2294]: E0306 01:44:39.152303 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:39.739976 kubelet[2294]: E0306 01:44:39.739390 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:39.739976 kubelet[2294]: E0306 01:44:39.740121 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:39.747358 systemd[1]: Reloading requested from client PID 2580 ('systemctl') (unit session-7.scope)... Mar 6 01:44:39.747482 systemd[1]: Reloading... Mar 6 01:44:39.899483 zram_generator::config[2619]: No configuration found. 
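
The "no PriorityClass with name system-node-critical was found" rejections logged at 01:44:37 are transient: kube-apiserver installs the built-in system-node-critical and system-cluster-critical PriorityClasses shortly after it starts serving, and the kubelet recreates the mirror pods on a later sync (the attempts at 01:44:38–01:44:39 above no longer log a failure). A small way to watch for that object appearing, sketched against the same endpoint with only the standard library; it assumes an in-cluster service-account token path and, as in the previous sketch, disables TLS verification for brevity:

    import json, ssl, urllib.error, urllib.request

    def priority_class_exists(name="system-node-critical",
                              server="https://10.0.0.144:6443",
                              token_path="/var/run/secrets/kubernetes.io/serviceaccount/token"):
        """Return True once the built-in PriorityClass is visible (hypothetical check)."""
        with open(token_path) as f:
            token = f.read().strip()
        ctx = ssl.create_default_context()
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE      # illustration only
        req = urllib.request.Request(
            f"{server}/apis/scheduling.k8s.io/v1/priorityclasses/{name}",
            headers={"Authorization": f"Bearer {token}"})
        try:
            with urllib.request.urlopen(req, context=ctx, timeout=5) as resp:
                return json.load(resp).get("metadata", {}).get("name") == name
        except urllib.error.HTTPError as e:
            if e.code == 404:
                return False
            raise
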
Mar 6 01:44:40.068981 kubelet[2294]: I0306 01:44:40.067844 2294 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.067429734 podStartE2EDuration="1.067429734s" podCreationTimestamp="2026-03-06 01:44:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 01:44:40.058150687 +0000 UTC m=+10.437600006" watchObservedRunningTime="2026-03-06 01:44:40.067429734 +0000 UTC m=+10.446879053" Mar 6 01:44:40.078284 kubelet[2294]: I0306 01:44:40.077844 2294 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.0778290090000002 podStartE2EDuration="2.077829009s" podCreationTimestamp="2026-03-06 01:44:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 01:44:40.077645445 +0000 UTC m=+10.457094765" watchObservedRunningTime="2026-03-06 01:44:40.077829009 +0000 UTC m=+10.457278327" Mar 6 01:44:40.088032 kubelet[2294]: I0306 01:44:40.087952 2294 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.087935938 podStartE2EDuration="2.087935938s" podCreationTimestamp="2026-03-06 01:44:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 01:44:40.087673969 +0000 UTC m=+10.467123288" watchObservedRunningTime="2026-03-06 01:44:40.087935938 +0000 UTC m=+10.467385267" Mar 6 01:44:40.202185 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 6 01:44:40.316663 systemd[1]: Reloading finished in 567 ms. Mar 6 01:44:40.485307 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 01:44:40.509593 systemd[1]: kubelet.service: Deactivated successfully. Mar 6 01:44:40.511227 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 01:44:40.523730 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 01:44:40.906602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 01:44:40.925245 (kubelet)[2674]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 6 01:44:41.004816 kubelet[2674]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 6 01:44:41.004816 kubelet[2674]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 6 01:44:41.004816 kubelet[2674]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 6 01:44:41.005324 kubelet[2674]: I0306 01:44:41.004842 2674 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 6 01:44:41.013674 kubelet[2674]: I0306 01:44:41.013601 2674 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 6 01:44:41.013674 kubelet[2674]: I0306 01:44:41.013639 2674 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 6 01:44:41.013851 kubelet[2674]: I0306 01:44:41.013823 2674 server.go:956] "Client rotation is on, will bootstrap in background" Mar 6 01:44:41.015285 kubelet[2674]: I0306 01:44:41.015219 2674 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 6 01:44:41.019989 kubelet[2674]: I0306 01:44:41.019935 2674 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 6 01:44:41.024074 kubelet[2674]: E0306 01:44:41.024037 2674 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 6 01:44:41.024074 kubelet[2674]: I0306 01:44:41.024075 2674 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 6 01:44:41.037889 kubelet[2674]: I0306 01:44:41.037715 2674 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 6 01:44:41.041666 kubelet[2674]: I0306 01:44:41.038498 2674 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 6 01:44:41.041666 kubelet[2674]: I0306 01:44:41.041531 2674 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Mar 6 01:44:41.042676 kubelet[2674]: I0306 01:44:41.042152 2674 topology_manager.go:138] "Creating topology manager 
with none policy" Mar 6 01:44:41.042676 kubelet[2674]: I0306 01:44:41.042173 2674 container_manager_linux.go:303] "Creating device plugin manager" Mar 6 01:44:41.042676 kubelet[2674]: I0306 01:44:41.042395 2674 state_mem.go:36] "Initialized new in-memory state store" Mar 6 01:44:41.043221 kubelet[2674]: I0306 01:44:41.043181 2674 kubelet.go:480] "Attempting to sync node with API server" Mar 6 01:44:41.043277 kubelet[2674]: I0306 01:44:41.043246 2674 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 6 01:44:41.043754 kubelet[2674]: I0306 01:44:41.043279 2674 kubelet.go:386] "Adding apiserver pod source" Mar 6 01:44:41.043754 kubelet[2674]: I0306 01:44:41.043296 2674 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 6 01:44:41.068889 kubelet[2674]: I0306 01:44:41.067594 2674 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 6 01:44:41.072623 kubelet[2674]: I0306 01:44:41.072152 2674 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 6 01:44:41.086576 kubelet[2674]: I0306 01:44:41.086553 2674 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 6 01:44:41.086859 kubelet[2674]: I0306 01:44:41.086846 2674 server.go:1289] "Started kubelet" Mar 6 01:44:41.090281 kubelet[2674]: I0306 01:44:41.090017 2674 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 6 01:44:41.091188 kubelet[2674]: I0306 01:44:41.090931 2674 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 6 01:44:41.091258 kubelet[2674]: I0306 01:44:41.091228 2674 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 6 01:44:41.091351 kubelet[2674]: I0306 01:44:41.091299 2674 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 6 01:44:41.094417 kubelet[2674]: E0306 01:44:41.094319 2674 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 6 01:44:41.094837 kubelet[2674]: I0306 01:44:41.094808 2674 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 6 01:44:41.096246 kubelet[2674]: I0306 01:44:41.094948 2674 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 6 01:44:41.096246 kubelet[2674]: I0306 01:44:41.095072 2674 reconciler.go:26] "Reconciler: start to sync state" Mar 6 01:44:41.098943 kubelet[2674]: I0306 01:44:41.098535 2674 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 6 01:44:41.105312 kubelet[2674]: I0306 01:44:41.105170 2674 factory.go:223] Registration of the systemd container factory successfully Mar 6 01:44:41.105312 kubelet[2674]: I0306 01:44:41.105397 2674 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 6 01:44:41.108669 kubelet[2674]: I0306 01:44:41.108388 2674 server.go:317] "Adding debug handlers to kubelet server" Mar 6 01:44:41.129495 kubelet[2674]: I0306 01:44:41.125272 2674 factory.go:223] Registration of the containerd container factory successfully Mar 6 01:44:41.195768 kubelet[2674]: I0306 01:44:41.195002 2674 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 6 01:44:41.198483 kubelet[2674]: I0306 01:44:41.198208 2674 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 6 01:44:41.198483 kubelet[2674]: I0306 01:44:41.198253 2674 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 6 01:44:41.198483 kubelet[2674]: I0306 01:44:41.198275 2674 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
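
The NodeConfig blob logged at 01:44:41.041531 above also records the hard-eviction thresholds the restarted kubelet will enforce (they match the documented kubelet defaults: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%) and that the node is still on cgroup v1 with the cgroupfs driver. The thresholds are plain JSON embedded in that line, so they can be lifted out and read directly; the snippet below only reformats values copied from the log:

    import json

    # HardEvictionThresholds copied from the NodeConfig entry above, trimmed to
    # the fields used here (no values are re-derived).
    thresholds = json.loads("""[
      {"Signal":"imagefs.inodesFree","Value":{"Quantity":null,"Percentage":0.05}},
      {"Signal":"memory.available","Value":{"Quantity":"100Mi","Percentage":0}},
      {"Signal":"nodefs.available","Value":{"Quantity":null,"Percentage":0.1}},
      {"Signal":"nodefs.inodesFree","Value":{"Quantity":null,"Percentage":0.05}},
      {"Signal":"imagefs.available","Value":{"Quantity":null,"Percentage":0.15}}
    ]""")
    for t in thresholds:
        v = t["Value"]
        limit = v["Quantity"] if v["Quantity"] else f"{v['Percentage']:.0%}"
        print(f"evict when {t['Signal']} < {limit}")   # e.g. "evict when memory.available < 100Mi"
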
Mar 6 01:44:41.198483 kubelet[2674]: I0306 01:44:41.198283 2674 kubelet.go:2436] "Starting kubelet main sync loop" Mar 6 01:44:41.198483 kubelet[2674]: E0306 01:44:41.198333 2674 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 6 01:44:41.270025 kubelet[2674]: I0306 01:44:41.269845 2674 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 6 01:44:41.270735 kubelet[2674]: I0306 01:44:41.270538 2674 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 6 01:44:41.270735 kubelet[2674]: I0306 01:44:41.270563 2674 state_mem.go:36] "Initialized new in-memory state store" Mar 6 01:44:41.270886 kubelet[2674]: I0306 01:44:41.270816 2674 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 6 01:44:41.270886 kubelet[2674]: I0306 01:44:41.270827 2674 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 6 01:44:41.270886 kubelet[2674]: I0306 01:44:41.270848 2674 policy_none.go:49] "None policy: Start" Mar 6 01:44:41.270886 kubelet[2674]: I0306 01:44:41.270858 2674 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 6 01:44:41.270886 kubelet[2674]: I0306 01:44:41.270869 2674 state_mem.go:35] "Initializing new in-memory state store" Mar 6 01:44:41.270999 kubelet[2674]: I0306 01:44:41.270992 2674 state_mem.go:75] "Updated machine memory state" Mar 6 01:44:41.273201 kubelet[2674]: E0306 01:44:41.273170 2674 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 6 01:44:41.273555 kubelet[2674]: I0306 01:44:41.273541 2674 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 6 01:44:41.273717 kubelet[2674]: I0306 01:44:41.273684 2674 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 6 01:44:41.274126 kubelet[2674]: I0306 01:44:41.274080 2674 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 6 01:44:41.277099 kubelet[2674]: E0306 01:44:41.277012 2674 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 6 01:44:41.301025 kubelet[2674]: I0306 01:44:41.300112 2674 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 6 01:44:41.301025 kubelet[2674]: I0306 01:44:41.300408 2674 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 6 01:44:41.301025 kubelet[2674]: I0306 01:44:41.300546 2674 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 6 01:44:41.311272 kubelet[2674]: E0306 01:44:41.310428 2674 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 6 01:44:41.311880 kubelet[2674]: E0306 01:44:41.311820 2674 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Mar 6 01:44:41.312014 kubelet[2674]: E0306 01:44:41.311962 2674 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Mar 6 01:44:41.396607 kubelet[2674]: I0306 01:44:41.396057 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 01:44:41.396607 kubelet[2674]: I0306 01:44:41.396113 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 01:44:41.396607 kubelet[2674]: I0306 01:44:41.396145 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e944e4cb17af904786c3a2e01e298498-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e944e4cb17af904786c3a2e01e298498\") " pod="kube-system/kube-scheduler-localhost" Mar 6 01:44:41.396607 kubelet[2674]: I0306 01:44:41.396165 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3ec90db59ea64eccf6d9a9824e4381ef-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3ec90db59ea64eccf6d9a9824e4381ef\") " pod="kube-system/kube-apiserver-localhost" Mar 6 01:44:41.396607 kubelet[2674]: I0306 01:44:41.396277 2674 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 6 01:44:41.396607 kubelet[2674]: I0306 01:44:41.396294 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 01:44:41.396982 kubelet[2674]: I0306 01:44:41.396335 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-flexvolume-dir\") 
pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 01:44:41.396982 kubelet[2674]: I0306 01:44:41.396374 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 6 01:44:41.396982 kubelet[2674]: I0306 01:44:41.396394 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3ec90db59ea64eccf6d9a9824e4381ef-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3ec90db59ea64eccf6d9a9824e4381ef\") " pod="kube-system/kube-apiserver-localhost" Mar 6 01:44:41.396982 kubelet[2674]: I0306 01:44:41.396415 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3ec90db59ea64eccf6d9a9824e4381ef-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3ec90db59ea64eccf6d9a9824e4381ef\") " pod="kube-system/kube-apiserver-localhost" Mar 6 01:44:41.409242 kubelet[2674]: I0306 01:44:41.409177 2674 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Mar 6 01:44:41.409684 kubelet[2674]: I0306 01:44:41.409564 2674 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 6 01:44:41.655091 kubelet[2674]: E0306 01:44:41.626124 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:41.655091 kubelet[2674]: E0306 01:44:41.626255 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:41.655091 kubelet[2674]: E0306 01:44:41.626541 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:42.047926 kubelet[2674]: I0306 01:44:42.046678 2674 apiserver.go:52] "Watching apiserver" Mar 6 01:44:42.097794 kubelet[2674]: I0306 01:44:42.097160 2674 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 6 01:44:42.227528 kubelet[2674]: E0306 01:44:42.226353 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:42.241289 kubelet[2674]: I0306 01:44:42.240872 2674 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 6 01:44:42.272290 kubelet[2674]: I0306 01:44:42.271689 2674 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 6 01:44:42.419425 kubelet[2674]: E0306 01:44:42.414186 2674 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 6 01:44:42.419425 kubelet[2674]: E0306 01:44:42.414213 2674 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" 
pod="kube-system/kube-scheduler-localhost" Mar 6 01:44:42.419425 kubelet[2674]: E0306 01:44:42.414987 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:42.419425 kubelet[2674]: E0306 01:44:42.415343 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:43.287739 kubelet[2674]: E0306 01:44:43.287596 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:43.289670 kubelet[2674]: E0306 01:44:43.289497 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:43.875708 kubelet[2674]: I0306 01:44:43.875401 2674 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 6 01:44:43.880171 containerd[1589]: time="2026-03-06T01:44:43.879524200Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 6 01:44:43.880917 kubelet[2674]: I0306 01:44:43.879838 2674 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 6 01:44:44.292230 kubelet[2674]: E0306 01:44:44.290392 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:44.749822 kubelet[2674]: I0306 01:44:44.748633 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d418df65-05f3-4100-b3fa-5848677d4ae1-kube-proxy\") pod \"kube-proxy-6xd6q\" (UID: \"d418df65-05f3-4100-b3fa-5848677d4ae1\") " pod="kube-system/kube-proxy-6xd6q" Mar 6 01:44:44.749822 kubelet[2674]: I0306 01:44:44.748724 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mclgk\" (UniqueName: \"kubernetes.io/projected/d418df65-05f3-4100-b3fa-5848677d4ae1-kube-api-access-mclgk\") pod \"kube-proxy-6xd6q\" (UID: \"d418df65-05f3-4100-b3fa-5848677d4ae1\") " pod="kube-system/kube-proxy-6xd6q" Mar 6 01:44:44.749822 kubelet[2674]: I0306 01:44:44.748746 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d418df65-05f3-4100-b3fa-5848677d4ae1-xtables-lock\") pod \"kube-proxy-6xd6q\" (UID: \"d418df65-05f3-4100-b3fa-5848677d4ae1\") " pod="kube-system/kube-proxy-6xd6q" Mar 6 01:44:44.749822 kubelet[2674]: I0306 01:44:44.748763 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d418df65-05f3-4100-b3fa-5848677d4ae1-lib-modules\") pod \"kube-proxy-6xd6q\" (UID: \"d418df65-05f3-4100-b3fa-5848677d4ae1\") " pod="kube-system/kube-proxy-6xd6q" Mar 6 01:44:44.993345 kubelet[2674]: E0306 01:44:44.989417 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:45.016541 containerd[1589]: 
time="2026-03-06T01:44:45.008656913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6xd6q,Uid:d418df65-05f3-4100-b3fa-5848677d4ae1,Namespace:kube-system,Attempt:0,}" Mar 6 01:44:45.084622 containerd[1589]: time="2026-03-06T01:44:45.084239234Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:44:45.084622 containerd[1589]: time="2026-03-06T01:44:45.084306199Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:44:45.084622 containerd[1589]: time="2026-03-06T01:44:45.084318331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:44:45.084622 containerd[1589]: time="2026-03-06T01:44:45.084506763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:44:45.154726 containerd[1589]: time="2026-03-06T01:44:45.154675944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6xd6q,Uid:d418df65-05f3-4100-b3fa-5848677d4ae1,Namespace:kube-system,Attempt:0,} returns sandbox id \"db9dbb9d99035ab782b34d5376ada212b681bc4e9dc2ee63085f84662418c799\"" Mar 6 01:44:45.155764 kubelet[2674]: E0306 01:44:45.155710 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:45.161861 containerd[1589]: time="2026-03-06T01:44:45.161768814Z" level=info msg="CreateContainer within sandbox \"db9dbb9d99035ab782b34d5376ada212b681bc4e9dc2ee63085f84662418c799\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 6 01:44:45.168030 kubelet[2674]: I0306 01:44:45.167321 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7ec78193-0fad-43c0-83db-66720487e683-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-tb7vq\" (UID: \"7ec78193-0fad-43c0-83db-66720487e683\") " pod="tigera-operator/tigera-operator-6bf85f8dd-tb7vq" Mar 6 01:44:45.168030 kubelet[2674]: I0306 01:44:45.167374 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsndt\" (UniqueName: \"kubernetes.io/projected/7ec78193-0fad-43c0-83db-66720487e683-kube-api-access-rsndt\") pod \"tigera-operator-6bf85f8dd-tb7vq\" (UID: \"7ec78193-0fad-43c0-83db-66720487e683\") " pod="tigera-operator/tigera-operator-6bf85f8dd-tb7vq" Mar 6 01:44:45.184062 containerd[1589]: time="2026-03-06T01:44:45.183991503Z" level=info msg="CreateContainer within sandbox \"db9dbb9d99035ab782b34d5376ada212b681bc4e9dc2ee63085f84662418c799\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"363eedf010569603ddd610a4a837ded832727411331f12cfe974029f38e39815\"" Mar 6 01:44:45.185158 containerd[1589]: time="2026-03-06T01:44:45.185063650Z" level=info msg="StartContainer for \"363eedf010569603ddd610a4a837ded832727411331f12cfe974029f38e39815\"" Mar 6 01:44:45.346335 containerd[1589]: time="2026-03-06T01:44:45.344748527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-tb7vq,Uid:7ec78193-0fad-43c0-83db-66720487e683,Namespace:tigera-operator,Attempt:0,}" Mar 6 01:44:45.365321 containerd[1589]: time="2026-03-06T01:44:45.365242742Z" level=info msg="StartContainer for 
\"363eedf010569603ddd610a4a837ded832727411331f12cfe974029f38e39815\" returns successfully" Mar 6 01:44:45.410877 containerd[1589]: time="2026-03-06T01:44:45.410704396Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:44:45.410877 containerd[1589]: time="2026-03-06T01:44:45.410767263Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:44:45.410877 containerd[1589]: time="2026-03-06T01:44:45.410777662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:44:45.411105 containerd[1589]: time="2026-03-06T01:44:45.410894030Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:44:45.700747 containerd[1589]: time="2026-03-06T01:44:45.699346167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-tb7vq,Uid:7ec78193-0fad-43c0-83db-66720487e683,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"3a00f8d1d23928d5ad22431f3c30bfe5df4f27b289d368f0ff8bd1d0d9aad533\"" Mar 6 01:44:45.702278 containerd[1589]: time="2026-03-06T01:44:45.701647364Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 6 01:44:46.332918 kubelet[2674]: E0306 01:44:46.332765 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:46.347921 kubelet[2674]: I0306 01:44:46.347820 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6xd6q" podStartSLOduration=2.34779755 podStartE2EDuration="2.34779755s" podCreationTimestamp="2026-03-06 01:44:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 01:44:46.347025541 +0000 UTC m=+5.413753689" watchObservedRunningTime="2026-03-06 01:44:46.34779755 +0000 UTC m=+5.414525688" Mar 6 01:44:46.597869 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3518291532.mount: Deactivated successfully. 
Mar 6 01:44:46.735116 kubelet[2674]: E0306 01:44:46.735048 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:47.335339 kubelet[2674]: E0306 01:44:47.335120 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:47.337204 kubelet[2674]: E0306 01:44:47.337167 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:47.417933 containerd[1589]: time="2026-03-06T01:44:47.417703370Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:44:47.419006 containerd[1589]: time="2026-03-06T01:44:47.418829861Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Mar 6 01:44:47.420232 containerd[1589]: time="2026-03-06T01:44:47.420155986Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:44:47.422641 containerd[1589]: time="2026-03-06T01:44:47.422578698Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:44:47.423749 containerd[1589]: time="2026-03-06T01:44:47.423683718Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 1.722007751s" Mar 6 01:44:47.426491 containerd[1589]: time="2026-03-06T01:44:47.424060436Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Mar 6 01:44:47.432960 containerd[1589]: time="2026-03-06T01:44:47.432864139Z" level=info msg="CreateContainer within sandbox \"3a00f8d1d23928d5ad22431f3c30bfe5df4f27b289d368f0ff8bd1d0d9aad533\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 6 01:44:47.453921 containerd[1589]: time="2026-03-06T01:44:47.453833789Z" level=info msg="CreateContainer within sandbox \"3a00f8d1d23928d5ad22431f3c30bfe5df4f27b289d368f0ff8bd1d0d9aad533\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"292a683a27593752d411288aeae89718f898fba70495d5923b72adfad97e4fe3\"" Mar 6 01:44:47.454636 containerd[1589]: time="2026-03-06T01:44:47.454579958Z" level=info msg="StartContainer for \"292a683a27593752d411288aeae89718f898fba70495d5923b72adfad97e4fe3\"" Mar 6 01:44:47.488055 systemd[1]: run-containerd-runc-k8s.io-292a683a27593752d411288aeae89718f898fba70495d5923b72adfad97e4fe3-runc.dwzRnu.mount: Deactivated successfully. 
Mar 6 01:44:47.664537 containerd[1589]: time="2026-03-06T01:44:47.663114597Z" level=info msg="StartContainer for \"292a683a27593752d411288aeae89718f898fba70495d5923b72adfad97e4fe3\" returns successfully" Mar 6 01:44:48.282364 kubelet[2674]: E0306 01:44:48.282219 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:48.338271 kubelet[2674]: E0306 01:44:48.338119 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:48.363630 kubelet[2674]: I0306 01:44:48.363541 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-tb7vq" podStartSLOduration=2.63741124 podStartE2EDuration="4.363525008s" podCreationTimestamp="2026-03-06 01:44:44 +0000 UTC" firstStartedPulling="2026-03-06 01:44:45.701203756 +0000 UTC m=+4.767931892" lastFinishedPulling="2026-03-06 01:44:47.427317523 +0000 UTC m=+6.494045660" observedRunningTime="2026-03-06 01:44:48.363299142 +0000 UTC m=+7.430027278" watchObservedRunningTime="2026-03-06 01:44:48.363525008 +0000 UTC m=+7.430253166" Mar 6 01:44:51.828845 kubelet[2674]: E0306 01:44:51.828738 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:52.376872 kubelet[2674]: E0306 01:44:52.376781 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:54.056358 sudo[1778]: pam_unix(sudo:session): session closed for user root Mar 6 01:44:54.074693 sshd[1771]: pam_unix(sshd:session): session closed for user core Mar 6 01:44:54.081888 systemd[1]: sshd@6-10.0.0.144:22-10.0.0.1:50482.service: Deactivated successfully. Mar 6 01:44:54.103587 systemd[1]: session-7.scope: Deactivated successfully. Mar 6 01:44:54.107150 systemd-logind[1567]: Session 7 logged out. Waiting for processes to exit. Mar 6 01:44:54.110595 systemd-logind[1567]: Removed session 7. 
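The pod_startup_latency_tracker entries can be cross-checked from their own fields: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that value minus the image-pull window (lastFinishedPulling - firstStartedPulling). For tigera-operator-6bf85f8dd-tb7vq above: 4.363525008s - 1.726113767s ≈ 2.637411s, which matches the logged 2.63741124. A short sketch of that arithmetic with the timestamps copied from the entry (not kubelet's implementation):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching the "2026-03-06 01:44:48.363525008 +0000 UTC" form used in the log.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	created := parse("2026-03-06 01:44:44 +0000 UTC")             // podCreationTimestamp
	firstPull := parse("2026-03-06 01:44:45.701203756 +0000 UTC") // firstStartedPulling
	lastPull := parse("2026-03-06 01:44:47.427317523 +0000 UTC")  // lastFinishedPulling
	running := parse("2026-03-06 01:44:48.363525008 +0000 UTC")   // watchObservedRunningTime

	e2e := running.Sub(created)          // podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration: pull time excluded
	fmt.Println("E2E:", e2e, "SLO:", slo)
}
```

The tiny difference in the last decimal places against the logged SLO value presumably comes from kubelet working from its own clock readings rather than the printed wall-clock timestamps.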
Mar 6 01:44:56.460602 kubelet[2674]: E0306 01:44:56.460407 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rb5xg" podUID="71a31f04-c706-4601-874f-daa9f7b58ab6" Mar 6 01:44:56.481163 kubelet[2674]: I0306 01:44:56.481039 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/78a64cf1-2b9e-40a2-b09a-66408ee2a47a-node-certs\") pod \"calico-node-7wpdt\" (UID: \"78a64cf1-2b9e-40a2-b09a-66408ee2a47a\") " pod="calico-system/calico-node-7wpdt" Mar 6 01:44:56.482363 kubelet[2674]: I0306 01:44:56.482272 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/78a64cf1-2b9e-40a2-b09a-66408ee2a47a-policysync\") pod \"calico-node-7wpdt\" (UID: \"78a64cf1-2b9e-40a2-b09a-66408ee2a47a\") " pod="calico-system/calico-node-7wpdt" Mar 6 01:44:56.482363 kubelet[2674]: I0306 01:44:56.482336 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/78a64cf1-2b9e-40a2-b09a-66408ee2a47a-xtables-lock\") pod \"calico-node-7wpdt\" (UID: \"78a64cf1-2b9e-40a2-b09a-66408ee2a47a\") " pod="calico-system/calico-node-7wpdt" Mar 6 01:44:56.482363 kubelet[2674]: I0306 01:44:56.482365 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/78a64cf1-2b9e-40a2-b09a-66408ee2a47a-cni-net-dir\") pod \"calico-node-7wpdt\" (UID: \"78a64cf1-2b9e-40a2-b09a-66408ee2a47a\") " pod="calico-system/calico-node-7wpdt" Mar 6 01:44:56.482966 kubelet[2674]: I0306 01:44:56.482388 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78a64cf1-2b9e-40a2-b09a-66408ee2a47a-tigera-ca-bundle\") pod \"calico-node-7wpdt\" (UID: \"78a64cf1-2b9e-40a2-b09a-66408ee2a47a\") " pod="calico-system/calico-node-7wpdt" Mar 6 01:44:56.482966 kubelet[2674]: I0306 01:44:56.482415 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/78a64cf1-2b9e-40a2-b09a-66408ee2a47a-cni-log-dir\") pod \"calico-node-7wpdt\" (UID: \"78a64cf1-2b9e-40a2-b09a-66408ee2a47a\") " pod="calico-system/calico-node-7wpdt" Mar 6 01:44:56.482966 kubelet[2674]: I0306 01:44:56.482502 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/78a64cf1-2b9e-40a2-b09a-66408ee2a47a-lib-modules\") pod \"calico-node-7wpdt\" (UID: \"78a64cf1-2b9e-40a2-b09a-66408ee2a47a\") " pod="calico-system/calico-node-7wpdt" Mar 6 01:44:56.482966 kubelet[2674]: I0306 01:44:56.482527 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/78a64cf1-2b9e-40a2-b09a-66408ee2a47a-sys-fs\") pod \"calico-node-7wpdt\" (UID: \"78a64cf1-2b9e-40a2-b09a-66408ee2a47a\") " pod="calico-system/calico-node-7wpdt" Mar 6 01:44:56.482966 kubelet[2674]: I0306 01:44:56.482573 2674 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9sff\" (UniqueName: \"kubernetes.io/projected/78a64cf1-2b9e-40a2-b09a-66408ee2a47a-kube-api-access-m9sff\") pod \"calico-node-7wpdt\" (UID: \"78a64cf1-2b9e-40a2-b09a-66408ee2a47a\") " pod="calico-system/calico-node-7wpdt" Mar 6 01:44:56.483273 kubelet[2674]: I0306 01:44:56.482607 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a35bcc63-2ccc-4bbc-b35d-acf08a3f5cbc-typha-certs\") pod \"calico-typha-5c76fdc545-qsr8p\" (UID: \"a35bcc63-2ccc-4bbc-b35d-acf08a3f5cbc\") " pod="calico-system/calico-typha-5c76fdc545-qsr8p" Mar 6 01:44:56.483273 kubelet[2674]: I0306 01:44:56.482632 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/78a64cf1-2b9e-40a2-b09a-66408ee2a47a-nodeproc\") pod \"calico-node-7wpdt\" (UID: \"78a64cf1-2b9e-40a2-b09a-66408ee2a47a\") " pod="calico-system/calico-node-7wpdt" Mar 6 01:44:56.483273 kubelet[2674]: I0306 01:44:56.482653 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/78a64cf1-2b9e-40a2-b09a-66408ee2a47a-cni-bin-dir\") pod \"calico-node-7wpdt\" (UID: \"78a64cf1-2b9e-40a2-b09a-66408ee2a47a\") " pod="calico-system/calico-node-7wpdt" Mar 6 01:44:56.483273 kubelet[2674]: I0306 01:44:56.482693 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a35bcc63-2ccc-4bbc-b35d-acf08a3f5cbc-tigera-ca-bundle\") pod \"calico-typha-5c76fdc545-qsr8p\" (UID: \"a35bcc63-2ccc-4bbc-b35d-acf08a3f5cbc\") " pod="calico-system/calico-typha-5c76fdc545-qsr8p" Mar 6 01:44:56.483273 kubelet[2674]: I0306 01:44:56.482712 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/78a64cf1-2b9e-40a2-b09a-66408ee2a47a-var-lib-calico\") pod \"calico-node-7wpdt\" (UID: \"78a64cf1-2b9e-40a2-b09a-66408ee2a47a\") " pod="calico-system/calico-node-7wpdt" Mar 6 01:44:56.483541 kubelet[2674]: I0306 01:44:56.482734 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/78a64cf1-2b9e-40a2-b09a-66408ee2a47a-var-run-calico\") pod \"calico-node-7wpdt\" (UID: \"78a64cf1-2b9e-40a2-b09a-66408ee2a47a\") " pod="calico-system/calico-node-7wpdt" Mar 6 01:44:56.483541 kubelet[2674]: I0306 01:44:56.482757 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/78a64cf1-2b9e-40a2-b09a-66408ee2a47a-bpffs\") pod \"calico-node-7wpdt\" (UID: \"78a64cf1-2b9e-40a2-b09a-66408ee2a47a\") " pod="calico-system/calico-node-7wpdt" Mar 6 01:44:56.484743 kubelet[2674]: I0306 01:44:56.484644 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/78a64cf1-2b9e-40a2-b09a-66408ee2a47a-flexvol-driver-host\") pod \"calico-node-7wpdt\" (UID: \"78a64cf1-2b9e-40a2-b09a-66408ee2a47a\") " pod="calico-system/calico-node-7wpdt" Mar 6 01:44:56.484818 kubelet[2674]: I0306 01:44:56.484778 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-mwltv\" (UniqueName: \"kubernetes.io/projected/a35bcc63-2ccc-4bbc-b35d-acf08a3f5cbc-kube-api-access-mwltv\") pod \"calico-typha-5c76fdc545-qsr8p\" (UID: \"a35bcc63-2ccc-4bbc-b35d-acf08a3f5cbc\") " pod="calico-system/calico-typha-5c76fdc545-qsr8p" Mar 6 01:44:56.585594 kubelet[2674]: I0306 01:44:56.585523 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/71a31f04-c706-4601-874f-daa9f7b58ab6-registration-dir\") pod \"csi-node-driver-rb5xg\" (UID: \"71a31f04-c706-4601-874f-daa9f7b58ab6\") " pod="calico-system/csi-node-driver-rb5xg" Mar 6 01:44:56.585778 kubelet[2674]: I0306 01:44:56.585624 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/71a31f04-c706-4601-874f-daa9f7b58ab6-varrun\") pod \"csi-node-driver-rb5xg\" (UID: \"71a31f04-c706-4601-874f-daa9f7b58ab6\") " pod="calico-system/csi-node-driver-rb5xg" Mar 6 01:44:56.585778 kubelet[2674]: I0306 01:44:56.585657 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb9kn\" (UniqueName: \"kubernetes.io/projected/71a31f04-c706-4601-874f-daa9f7b58ab6-kube-api-access-mb9kn\") pod \"csi-node-driver-rb5xg\" (UID: \"71a31f04-c706-4601-874f-daa9f7b58ab6\") " pod="calico-system/csi-node-driver-rb5xg" Mar 6 01:44:56.585880 kubelet[2674]: I0306 01:44:56.585785 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/71a31f04-c706-4601-874f-daa9f7b58ab6-kubelet-dir\") pod \"csi-node-driver-rb5xg\" (UID: \"71a31f04-c706-4601-874f-daa9f7b58ab6\") " pod="calico-system/csi-node-driver-rb5xg" Mar 6 01:44:56.585936 kubelet[2674]: I0306 01:44:56.585908 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/71a31f04-c706-4601-874f-daa9f7b58ab6-socket-dir\") pod \"csi-node-driver-rb5xg\" (UID: \"71a31f04-c706-4601-874f-daa9f7b58ab6\") " pod="calico-system/csi-node-driver-rb5xg" Mar 6 01:44:56.592393 kubelet[2674]: E0306 01:44:56.592292 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:44:56.592836 kubelet[2674]: W0306 01:44:56.592329 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:44:56.592836 kubelet[2674]: E0306 01:44:56.592661 2674 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:44:56.593255 kubelet[2674]: E0306 01:44:56.593138 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:44:56.593255 kubelet[2674]: W0306 01:44:56.593151 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:44:56.593255 kubelet[2674]: E0306 01:44:56.593161 2674 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:44:56.593593 kubelet[2674]: E0306 01:44:56.593579 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:44:56.593662 kubelet[2674]: W0306 01:44:56.593650 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:44:56.593708 kubelet[2674]: E0306 01:44:56.593697 2674 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:44:56.594385 kubelet[2674]: E0306 01:44:56.594283 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:44:56.594385 kubelet[2674]: W0306 01:44:56.594296 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:44:56.594385 kubelet[2674]: E0306 01:44:56.594308 2674 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:44:56.596516 kubelet[2674]: E0306 01:44:56.595536 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:44:56.596516 kubelet[2674]: W0306 01:44:56.595642 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:44:56.596516 kubelet[2674]: E0306 01:44:56.595674 2674 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:44:56.597970 kubelet[2674]: E0306 01:44:56.597693 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:44:56.597970 kubelet[2674]: W0306 01:44:56.597776 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:44:56.597970 kubelet[2674]: E0306 01:44:56.597796 2674 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:44:56.599154 kubelet[2674]: E0306 01:44:56.599036 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:44:56.599154 kubelet[2674]: W0306 01:44:56.599054 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:44:56.599507 kubelet[2674]: E0306 01:44:56.599423 2674 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:44:56.602916 kubelet[2674]: E0306 01:44:56.601902 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:44:56.602916 kubelet[2674]: W0306 01:44:56.601921 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:44:56.602916 kubelet[2674]: E0306 01:44:56.601938 2674 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:44:56.602916 kubelet[2674]: E0306 01:44:56.602746 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:44:56.602916 kubelet[2674]: W0306 01:44:56.602760 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:44:56.602916 kubelet[2674]: E0306 01:44:56.602773 2674 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:44:56.603261 kubelet[2674]: E0306 01:44:56.603114 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:44:56.603261 kubelet[2674]: W0306 01:44:56.603127 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:44:56.603261 kubelet[2674]: E0306 01:44:56.603139 2674 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:44:56.604329 kubelet[2674]: E0306 01:44:56.604294 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:44:56.604329 kubelet[2674]: W0306 01:44:56.604322 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:44:56.604513 kubelet[2674]: E0306 01:44:56.604338 2674 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:44:56.614349 kubelet[2674]: E0306 01:44:56.614251 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:44:56.614349 kubelet[2674]: W0306 01:44:56.614277 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:44:56.614856 kubelet[2674]: E0306 01:44:56.614738 2674 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:44:56.647115 containerd[1589]: time="2026-03-06T01:44:56.647022920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7wpdt,Uid:78a64cf1-2b9e-40a2-b09a-66408ee2a47a,Namespace:calico-system,Attempt:0,}" Mar 6 01:44:56.687600 kubelet[2674]: E0306 01:44:56.687545 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:44:56.688023 kubelet[2674]: W0306 01:44:56.687721 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:44:56.688285 kubelet[2674]: E0306 01:44:56.687752 2674 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:44:56.688811 kubelet[2674]: E0306 01:44:56.688796 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:44:56.689009 kubelet[2674]: W0306 01:44:56.688936 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:44:56.689009 kubelet[2674]: E0306 01:44:56.688953 2674 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:44:56.689809 kubelet[2674]: E0306 01:44:56.689738 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:44:56.690316 kubelet[2674]: W0306 01:44:56.689887 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:44:56.690316 kubelet[2674]: E0306 01:44:56.689908 2674 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:44:56.690966 kubelet[2674]: E0306 01:44:56.690950 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:44:56.691175 kubelet[2674]: W0306 01:44:56.691033 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:44:56.691175 kubelet[2674]: E0306 01:44:56.691051 2674 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:44:56.691532 containerd[1589]: time="2026-03-06T01:44:56.691036277Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:44:56.691532 containerd[1589]: time="2026-03-06T01:44:56.691220439Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:44:56.691532 containerd[1589]: time="2026-03-06T01:44:56.691241377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:44:56.691988 containerd[1589]: time="2026-03-06T01:44:56.691421501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:44:56.693418 kubelet[2674]: E0306 01:44:56.693254 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:44:56.693554 kubelet[2674]: W0306 01:44:56.693525 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:44:56.693741 kubelet[2674]: E0306 01:44:56.693656 2674 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:44:56.695850 kubelet[2674]: E0306 01:44:56.695783 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:44:56.695904 kubelet[2674]: W0306 01:44:56.695870 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:44:56.695904 kubelet[2674]: E0306 01:44:56.695895 2674 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:44:56.696489 kubelet[2674]: E0306 01:44:56.696397 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:44:56.696489 kubelet[2674]: W0306 01:44:56.696436 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:44:56.696595 kubelet[2674]: E0306 01:44:56.696497 2674 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:44:56.697044 kubelet[2674]: E0306 01:44:56.696979 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:44:56.697161 kubelet[2674]: W0306 01:44:56.697109 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:44:56.697269 kubelet[2674]: E0306 01:44:56.697222 2674 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:44:56.698240 kubelet[2674]: E0306 01:44:56.698122 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:44:56.698240 kubelet[2674]: W0306 01:44:56.698226 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:44:56.698320 kubelet[2674]: E0306 01:44:56.698244 2674 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:44:56.699933 kubelet[2674]: E0306 01:44:56.698995 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:44:56.699933 kubelet[2674]: W0306 01:44:56.699016 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:44:56.699933 kubelet[2674]: E0306 01:44:56.699031 2674 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:44:56.699933 kubelet[2674]: E0306 01:44:56.699762 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:44:56.699933 kubelet[2674]: W0306 01:44:56.699776 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:44:56.699933 kubelet[2674]: E0306 01:44:56.699791 2674 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:44:56.700744 kubelet[2674]: E0306 01:44:56.700430 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:44:56.700744 kubelet[2674]: W0306 01:44:56.700517 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:44:56.700744 kubelet[2674]: E0306 01:44:56.700534 2674 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:44:56.700939 kubelet[2674]: E0306 01:44:56.700907 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:44:56.700979 kubelet[2674]: W0306 01:44:56.700939 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:44:56.701027 kubelet[2674]: E0306 01:44:56.701006 2674 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:44:56.701566 kubelet[2674]: E0306 01:44:56.701534 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:44:56.701603 kubelet[2674]: W0306 01:44:56.701566 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:44:56.701603 kubelet[2674]: E0306 01:44:56.701582 2674 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:44:56.702058 kubelet[2674]: E0306 01:44:56.702003 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:44:56.702058 kubelet[2674]: W0306 01:44:56.702042 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:44:56.702152 kubelet[2674]: E0306 01:44:56.702057 2674 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:44:56.702563 kubelet[2674]: E0306 01:44:56.702533 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:44:56.702608 kubelet[2674]: W0306 01:44:56.702564 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:44:56.702608 kubelet[2674]: E0306 01:44:56.702581 2674 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:44:56.702944 kubelet[2674]: E0306 01:44:56.702911 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:44:56.702944 kubelet[2674]: W0306 01:44:56.702943 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:44:56.703012 kubelet[2674]: E0306 01:44:56.702958 2674 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:44:56.703435 kubelet[2674]: E0306 01:44:56.703378 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:44:56.703435 kubelet[2674]: W0306 01:44:56.703418 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:44:56.703644 kubelet[2674]: E0306 01:44:56.703435 2674 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:44:56.704047 kubelet[2674]: E0306 01:44:56.703994 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:44:56.704047 kubelet[2674]: W0306 01:44:56.704032 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:44:56.704047 kubelet[2674]: E0306 01:44:56.704047 2674 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:44:56.704823 kubelet[2674]: E0306 01:44:56.704619 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:44:56.704823 kubelet[2674]: W0306 01:44:56.704635 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:44:56.704823 kubelet[2674]: E0306 01:44:56.704649 2674 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:44:56.705294 kubelet[2674]: E0306 01:44:56.705236 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:44:56.705294 kubelet[2674]: W0306 01:44:56.705279 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:44:56.705369 kubelet[2674]: E0306 01:44:56.705297 2674 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:44:56.706023 kubelet[2674]: E0306 01:44:56.705980 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:44:56.706023 kubelet[2674]: W0306 01:44:56.706013 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:44:56.706182 kubelet[2674]: E0306 01:44:56.706031 2674 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:44:56.706745 kubelet[2674]: E0306 01:44:56.706692 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:44:56.706745 kubelet[2674]: W0306 01:44:56.706725 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:44:56.706745 kubelet[2674]: E0306 01:44:56.706741 2674 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:44:56.707373 kubelet[2674]: E0306 01:44:56.707333 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:44:56.707373 kubelet[2674]: W0306 01:44:56.707364 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:44:56.707544 kubelet[2674]: E0306 01:44:56.707380 2674 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:44:56.707913 kubelet[2674]: E0306 01:44:56.707873 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:44:56.707913 kubelet[2674]: W0306 01:44:56.707903 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:44:56.708000 kubelet[2674]: E0306 01:44:56.707918 2674 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 01:44:56.723560 kubelet[2674]: E0306 01:44:56.722792 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 01:44:56.723560 kubelet[2674]: W0306 01:44:56.722824 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 01:44:56.723560 kubelet[2674]: E0306 01:44:56.722840 2674 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 01:44:56.766513 containerd[1589]: time="2026-03-06T01:44:56.766292949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7wpdt,Uid:78a64cf1-2b9e-40a2-b09a-66408ee2a47a,Namespace:calico-system,Attempt:0,} returns sandbox id \"08e6c079794a866b95938b3a56830bf3bb3a5267346d24225522a9ef17a96112\"" Mar 6 01:44:56.768516 containerd[1589]: time="2026-03-06T01:44:56.768493173Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 6 01:44:56.897694 kubelet[2674]: E0306 01:44:56.897607 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:56.898434 containerd[1589]: time="2026-03-06T01:44:56.898399125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5c76fdc545-qsr8p,Uid:a35bcc63-2ccc-4bbc-b35d-acf08a3f5cbc,Namespace:calico-system,Attempt:0,}" Mar 6 01:44:56.944031 containerd[1589]: time="2026-03-06T01:44:56.943794373Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:44:56.944031 containerd[1589]: time="2026-03-06T01:44:56.943883843Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:44:56.944031 containerd[1589]: time="2026-03-06T01:44:56.943899492Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:44:56.944427 containerd[1589]: time="2026-03-06T01:44:56.944038402Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:44:57.039513 containerd[1589]: time="2026-03-06T01:44:57.039263423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5c76fdc545-qsr8p,Uid:a35bcc63-2ccc-4bbc-b35d-acf08a3f5cbc,Namespace:calico-system,Attempt:0,} returns sandbox id \"e4448b5334389456bdf179f856a8256bbd5234dee89791fc283afd989dfb6bae\"" Mar 6 01:44:57.041255 kubelet[2674]: E0306 01:44:57.040177 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:57.411639 containerd[1589]: time="2026-03-06T01:44:57.411498809Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:44:57.414286 containerd[1589]: time="2026-03-06T01:44:57.414151708Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=6186433" Mar 6 01:44:57.415936 containerd[1589]: time="2026-03-06T01:44:57.415878047Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:44:57.419282 containerd[1589]: time="2026-03-06T01:44:57.419179107Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:44:57.420659 containerd[1589]: time="2026-03-06T01:44:57.420586480Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id 
\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 651.9229ms" Mar 6 01:44:57.420659 containerd[1589]: time="2026-03-06T01:44:57.420642390Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Mar 6 01:44:57.422287 containerd[1589]: time="2026-03-06T01:44:57.422228482Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 6 01:44:57.426217 containerd[1589]: time="2026-03-06T01:44:57.426163406Z" level=info msg="CreateContainer within sandbox \"08e6c079794a866b95938b3a56830bf3bb3a5267346d24225522a9ef17a96112\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 6 01:44:57.445656 containerd[1589]: time="2026-03-06T01:44:57.445580556Z" level=info msg="CreateContainer within sandbox \"08e6c079794a866b95938b3a56830bf3bb3a5267346d24225522a9ef17a96112\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e31256a9d2b2f4b2190aa85da4701810eac12a7d83857746950207dc6969e7c6\"" Mar 6 01:44:57.446330 containerd[1589]: time="2026-03-06T01:44:57.446285840Z" level=info msg="StartContainer for \"e31256a9d2b2f4b2190aa85da4701810eac12a7d83857746950207dc6969e7c6\"" Mar 6 01:44:57.595334 containerd[1589]: time="2026-03-06T01:44:57.595289791Z" level=info msg="StartContainer for \"e31256a9d2b2f4b2190aa85da4701810eac12a7d83857746950207dc6969e7c6\" returns successfully" Mar 6 01:44:57.643274 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e31256a9d2b2f4b2190aa85da4701810eac12a7d83857746950207dc6969e7c6-rootfs.mount: Deactivated successfully. 
Mar 6 01:44:57.657240 containerd[1589]: time="2026-03-06T01:44:57.656969873Z" level=info msg="shim disconnected" id=e31256a9d2b2f4b2190aa85da4701810eac12a7d83857746950207dc6969e7c6 namespace=k8s.io Mar 6 01:44:57.657240 containerd[1589]: time="2026-03-06T01:44:57.657146903Z" level=warning msg="cleaning up after shim disconnected" id=e31256a9d2b2f4b2190aa85da4701810eac12a7d83857746950207dc6969e7c6 namespace=k8s.io Mar 6 01:44:57.657240 containerd[1589]: time="2026-03-06T01:44:57.657159696Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 6 01:44:58.198656 kubelet[2674]: E0306 01:44:58.198590 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rb5xg" podUID="71a31f04-c706-4601-874f-daa9f7b58ab6" Mar 6 01:44:58.528091 containerd[1589]: time="2026-03-06T01:44:58.527801824Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:44:58.529071 containerd[1589]: time="2026-03-06T01:44:58.529003568Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=34551413" Mar 6 01:44:58.530544 containerd[1589]: time="2026-03-06T01:44:58.530363524Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:44:58.540351 containerd[1589]: time="2026-03-06T01:44:58.540213899Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:44:58.541144 containerd[1589]: time="2026-03-06T01:44:58.541105771Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 1.118830675s" Mar 6 01:44:58.541144 containerd[1589]: time="2026-03-06T01:44:58.541132769Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Mar 6 01:44:58.543525 containerd[1589]: time="2026-03-06T01:44:58.543280059Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 6 01:44:58.575425 containerd[1589]: time="2026-03-06T01:44:58.575278327Z" level=info msg="CreateContainer within sandbox \"e4448b5334389456bdf179f856a8256bbd5234dee89791fc283afd989dfb6bae\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 6 01:44:58.592077 containerd[1589]: time="2026-03-06T01:44:58.592013712Z" level=info msg="CreateContainer within sandbox \"e4448b5334389456bdf179f856a8256bbd5234dee89791fc283afd989dfb6bae\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9f73f73afa99153c8fe140e3fc36e9d0042a01ea4bc62f2af9b9623f2cf40180\"" Mar 6 01:44:58.592885 containerd[1589]: time="2026-03-06T01:44:58.592858402Z" level=info msg="StartContainer for \"9f73f73afa99153c8fe140e3fc36e9d0042a01ea4bc62f2af9b9623f2cf40180\"" Mar 6 01:44:58.702853 containerd[1589]: 
time="2026-03-06T01:44:58.702785980Z" level=info msg="StartContainer for \"9f73f73afa99153c8fe140e3fc36e9d0042a01ea4bc62f2af9b9623f2cf40180\" returns successfully" Mar 6 01:44:59.413956 kubelet[2674]: E0306 01:44:59.413861 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:44:59.431219 kubelet[2674]: I0306 01:44:59.430337 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5c76fdc545-qsr8p" podStartSLOduration=1.929044175 podStartE2EDuration="3.430319511s" podCreationTimestamp="2026-03-06 01:44:56 +0000 UTC" firstStartedPulling="2026-03-06 01:44:57.041761126 +0000 UTC m=+16.108489263" lastFinishedPulling="2026-03-06 01:44:58.543036452 +0000 UTC m=+17.609764599" observedRunningTime="2026-03-06 01:44:59.428731612 +0000 UTC m=+18.495459748" watchObservedRunningTime="2026-03-06 01:44:59.430319511 +0000 UTC m=+18.497047648" Mar 6 01:44:59.801195 update_engine[1573]: I20260306 01:44:59.800948 1573 update_attempter.cc:509] Updating boot flags... Mar 6 01:44:59.937537 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (3358) Mar 6 01:45:00.017537 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (3358) Mar 6 01:45:00.074179 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (3358) Mar 6 01:45:00.204245 kubelet[2674]: E0306 01:45:00.203662 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rb5xg" podUID="71a31f04-c706-4601-874f-daa9f7b58ab6" Mar 6 01:45:00.417477 kubelet[2674]: I0306 01:45:00.417235 2674 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 6 01:45:00.418348 kubelet[2674]: E0306 01:45:00.417696 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:45:02.199362 kubelet[2674]: E0306 01:45:02.199262 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rb5xg" podUID="71a31f04-c706-4601-874f-daa9f7b58ab6" Mar 6 01:45:02.860636 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount999672751.mount: Deactivated successfully. 
Mar 6 01:45:03.117664 containerd[1589]: time="2026-03-06T01:45:03.117500374Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:45:03.119512 containerd[1589]: time="2026-03-06T01:45:03.119324651Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Mar 6 01:45:03.120819 containerd[1589]: time="2026-03-06T01:45:03.120772605Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:45:03.124335 containerd[1589]: time="2026-03-06T01:45:03.124188831Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:45:03.125634 containerd[1589]: time="2026-03-06T01:45:03.125587965Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 4.582269546s" Mar 6 01:45:03.125634 containerd[1589]: time="2026-03-06T01:45:03.125633858Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Mar 6 01:45:03.132401 containerd[1589]: time="2026-03-06T01:45:03.132312139Z" level=info msg="CreateContainer within sandbox \"08e6c079794a866b95938b3a56830bf3bb3a5267346d24225522a9ef17a96112\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 6 01:45:03.173689 containerd[1589]: time="2026-03-06T01:45:03.173596307Z" level=info msg="CreateContainer within sandbox \"08e6c079794a866b95938b3a56830bf3bb3a5267346d24225522a9ef17a96112\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"68f0a114ade08301c9161ffb23f19d93533eaf34b91ea4a3f51aa75a91841377\"" Mar 6 01:45:03.174622 containerd[1589]: time="2026-03-06T01:45:03.174557767Z" level=info msg="StartContainer for \"68f0a114ade08301c9161ffb23f19d93533eaf34b91ea4a3f51aa75a91841377\"" Mar 6 01:45:03.307476 containerd[1589]: time="2026-03-06T01:45:03.307390498Z" level=info msg="StartContainer for \"68f0a114ade08301c9161ffb23f19d93533eaf34b91ea4a3f51aa75a91841377\" returns successfully" Mar 6 01:45:03.502689 containerd[1589]: time="2026-03-06T01:45:03.502615978Z" level=info msg="shim disconnected" id=68f0a114ade08301c9161ffb23f19d93533eaf34b91ea4a3f51aa75a91841377 namespace=k8s.io Mar 6 01:45:03.502689 containerd[1589]: time="2026-03-06T01:45:03.502678252Z" level=warning msg="cleaning up after shim disconnected" id=68f0a114ade08301c9161ffb23f19d93533eaf34b91ea4a3f51aa75a91841377 namespace=k8s.io Mar 6 01:45:03.502689 containerd[1589]: time="2026-03-06T01:45:03.502689722Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 6 01:45:03.860946 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-68f0a114ade08301c9161ffb23f19d93533eaf34b91ea4a3f51aa75a91841377-rootfs.mount: Deactivated successfully. 
Mar 6 01:45:04.199526 kubelet[2674]: E0306 01:45:04.199399 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rb5xg" podUID="71a31f04-c706-4601-874f-daa9f7b58ab6" Mar 6 01:45:04.431331 containerd[1589]: time="2026-03-06T01:45:04.431269068Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 6 01:45:06.199357 kubelet[2674]: E0306 01:45:06.199295 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rb5xg" podUID="71a31f04-c706-4601-874f-daa9f7b58ab6" Mar 6 01:45:06.473414 containerd[1589]: time="2026-03-06T01:45:06.473189387Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:45:06.474224 containerd[1589]: time="2026-03-06T01:45:06.474176456Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Mar 6 01:45:06.475585 containerd[1589]: time="2026-03-06T01:45:06.475534244Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:45:06.478657 containerd[1589]: time="2026-03-06T01:45:06.478562414Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:45:06.479527 containerd[1589]: time="2026-03-06T01:45:06.479423165Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 2.04809581s" Mar 6 01:45:06.479527 containerd[1589]: time="2026-03-06T01:45:06.479508913Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Mar 6 01:45:06.485638 containerd[1589]: time="2026-03-06T01:45:06.485536756Z" level=info msg="CreateContainer within sandbox \"08e6c079794a866b95938b3a56830bf3bb3a5267346d24225522a9ef17a96112\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 6 01:45:06.504137 containerd[1589]: time="2026-03-06T01:45:06.504011924Z" level=info msg="CreateContainer within sandbox \"08e6c079794a866b95938b3a56830bf3bb3a5267346d24225522a9ef17a96112\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"bb107f6b777d88118f16c1eb31bd70d4012eebca96108eca3ee8a8ea5939d650\"" Mar 6 01:45:06.504913 containerd[1589]: time="2026-03-06T01:45:06.504790261Z" level=info msg="StartContainer for \"bb107f6b777d88118f16c1eb31bd70d4012eebca96108eca3ee8a8ea5939d650\"" Mar 6 01:45:06.598251 containerd[1589]: time="2026-03-06T01:45:06.598112004Z" level=info msg="StartContainer for \"bb107f6b777d88118f16c1eb31bd70d4012eebca96108eca3ee8a8ea5939d650\" returns successfully" Mar 6 01:45:07.300798 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-bb107f6b777d88118f16c1eb31bd70d4012eebca96108eca3ee8a8ea5939d650-rootfs.mount: Deactivated successfully. Mar 6 01:45:07.303682 containerd[1589]: time="2026-03-06T01:45:07.303627435Z" level=info msg="shim disconnected" id=bb107f6b777d88118f16c1eb31bd70d4012eebca96108eca3ee8a8ea5939d650 namespace=k8s.io Mar 6 01:45:07.303682 containerd[1589]: time="2026-03-06T01:45:07.303678408Z" level=warning msg="cleaning up after shim disconnected" id=bb107f6b777d88118f16c1eb31bd70d4012eebca96108eca3ee8a8ea5939d650 namespace=k8s.io Mar 6 01:45:07.303910 containerd[1589]: time="2026-03-06T01:45:07.303687635Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 6 01:45:07.347003 kubelet[2674]: I0306 01:45:07.346317 2674 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Mar 6 01:45:07.486719 kubelet[2674]: I0306 01:45:07.486642 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drv4c\" (UniqueName: \"kubernetes.io/projected/9d37ef38-21de-4e05-9c50-3273af0abb2b-kube-api-access-drv4c\") pod \"coredns-674b8bbfcf-t7pfc\" (UID: \"9d37ef38-21de-4e05-9c50-3273af0abb2b\") " pod="kube-system/coredns-674b8bbfcf-t7pfc" Mar 6 01:45:07.486719 kubelet[2674]: I0306 01:45:07.486719 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89d93701-56c3-4cbf-a96c-5d197ed2b49a-tigera-ca-bundle\") pod \"calico-kube-controllers-c6d5dc557-wsvgg\" (UID: \"89d93701-56c3-4cbf-a96c-5d197ed2b49a\") " pod="calico-system/calico-kube-controllers-c6d5dc557-wsvgg" Mar 6 01:45:07.486913 kubelet[2674]: I0306 01:45:07.486750 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfr65\" (UniqueName: \"kubernetes.io/projected/89d93701-56c3-4cbf-a96c-5d197ed2b49a-kube-api-access-kfr65\") pod \"calico-kube-controllers-c6d5dc557-wsvgg\" (UID: \"89d93701-56c3-4cbf-a96c-5d197ed2b49a\") " pod="calico-system/calico-kube-controllers-c6d5dc557-wsvgg" Mar 6 01:45:07.486913 kubelet[2674]: I0306 01:45:07.486776 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsg7s\" (UniqueName: \"kubernetes.io/projected/03806442-2813-4de9-9d55-5690b43be899-kube-api-access-dsg7s\") pod \"goldmane-5b85766d88-ms9pq\" (UID: \"03806442-2813-4de9-9d55-5690b43be899\") " pod="calico-system/goldmane-5b85766d88-ms9pq" Mar 6 01:45:07.486913 kubelet[2674]: I0306 01:45:07.486843 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/d60390ac-fb3e-4a75-adc8-f0b708d60ef9-nginx-config\") pod \"whisker-6546896c4d-b56z6\" (UID: \"d60390ac-fb3e-4a75-adc8-f0b708d60ef9\") " pod="calico-system/whisker-6546896c4d-b56z6" Mar 6 01:45:07.486913 kubelet[2674]: I0306 01:45:07.486885 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpg5l\" (UniqueName: \"kubernetes.io/projected/d60390ac-fb3e-4a75-adc8-f0b708d60ef9-kube-api-access-bpg5l\") pod \"whisker-6546896c4d-b56z6\" (UID: \"d60390ac-fb3e-4a75-adc8-f0b708d60ef9\") " pod="calico-system/whisker-6546896c4d-b56z6" Mar 6 01:45:07.487081 kubelet[2674]: I0306 01:45:07.486919 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" 
(UniqueName: \"kubernetes.io/configmap/3e0dc3ed-ceac-4026-8359-6c32f19c7d13-config-volume\") pod \"coredns-674b8bbfcf-8tj28\" (UID: \"3e0dc3ed-ceac-4026-8359-6c32f19c7d13\") " pod="kube-system/coredns-674b8bbfcf-8tj28" Mar 6 01:45:07.487081 kubelet[2674]: I0306 01:45:07.486963 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03806442-2813-4de9-9d55-5690b43be899-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-ms9pq\" (UID: \"03806442-2813-4de9-9d55-5690b43be899\") " pod="calico-system/goldmane-5b85766d88-ms9pq" Mar 6 01:45:07.487081 kubelet[2674]: I0306 01:45:07.487002 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d37ef38-21de-4e05-9c50-3273af0abb2b-config-volume\") pod \"coredns-674b8bbfcf-t7pfc\" (UID: \"9d37ef38-21de-4e05-9c50-3273af0abb2b\") " pod="kube-system/coredns-674b8bbfcf-t7pfc" Mar 6 01:45:07.487177 kubelet[2674]: I0306 01:45:07.487025 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d60390ac-fb3e-4a75-adc8-f0b708d60ef9-whisker-ca-bundle\") pod \"whisker-6546896c4d-b56z6\" (UID: \"d60390ac-fb3e-4a75-adc8-f0b708d60ef9\") " pod="calico-system/whisker-6546896c4d-b56z6" Mar 6 01:45:07.487273 kubelet[2674]: I0306 01:45:07.487237 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d60390ac-fb3e-4a75-adc8-f0b708d60ef9-whisker-backend-key-pair\") pod \"whisker-6546896c4d-b56z6\" (UID: \"d60390ac-fb3e-4a75-adc8-f0b708d60ef9\") " pod="calico-system/whisker-6546896c4d-b56z6" Mar 6 01:45:07.487347 kubelet[2674]: I0306 01:45:07.487313 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82fwq\" (UniqueName: \"kubernetes.io/projected/7d7dbfad-3005-4c2e-a3a4-8e06935c6c2f-kube-api-access-82fwq\") pod \"calico-apiserver-6b5896ddfd-dq4x8\" (UID: \"7d7dbfad-3005-4c2e-a3a4-8e06935c6c2f\") " pod="calico-system/calico-apiserver-6b5896ddfd-dq4x8" Mar 6 01:45:07.487394 kubelet[2674]: I0306 01:45:07.487364 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/72733de0-5a44-4e12-b2b1-5c75c2685fea-calico-apiserver-certs\") pod \"calico-apiserver-6b5896ddfd-x6rlz\" (UID: \"72733de0-5a44-4e12-b2b1-5c75c2685fea\") " pod="calico-system/calico-apiserver-6b5896ddfd-x6rlz" Mar 6 01:45:07.487649 kubelet[2674]: I0306 01:45:07.487392 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjkwp\" (UniqueName: \"kubernetes.io/projected/3e0dc3ed-ceac-4026-8359-6c32f19c7d13-kube-api-access-pjkwp\") pod \"coredns-674b8bbfcf-8tj28\" (UID: \"3e0dc3ed-ceac-4026-8359-6c32f19c7d13\") " pod="kube-system/coredns-674b8bbfcf-8tj28" Mar 6 01:45:07.487649 kubelet[2674]: I0306 01:45:07.487521 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7d7dbfad-3005-4c2e-a3a4-8e06935c6c2f-calico-apiserver-certs\") pod \"calico-apiserver-6b5896ddfd-dq4x8\" (UID: \"7d7dbfad-3005-4c2e-a3a4-8e06935c6c2f\") " pod="calico-system/calico-apiserver-6b5896ddfd-dq4x8" 
Mar 6 01:45:07.487649 kubelet[2674]: I0306 01:45:07.487549 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxkpt\" (UniqueName: \"kubernetes.io/projected/72733de0-5a44-4e12-b2b1-5c75c2685fea-kube-api-access-zxkpt\") pod \"calico-apiserver-6b5896ddfd-x6rlz\" (UID: \"72733de0-5a44-4e12-b2b1-5c75c2685fea\") " pod="calico-system/calico-apiserver-6b5896ddfd-x6rlz" Mar 6 01:45:07.487649 kubelet[2674]: I0306 01:45:07.487572 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03806442-2813-4de9-9d55-5690b43be899-config\") pod \"goldmane-5b85766d88-ms9pq\" (UID: \"03806442-2813-4de9-9d55-5690b43be899\") " pod="calico-system/goldmane-5b85766d88-ms9pq" Mar 6 01:45:07.487649 kubelet[2674]: I0306 01:45:07.487642 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/03806442-2813-4de9-9d55-5690b43be899-goldmane-key-pair\") pod \"goldmane-5b85766d88-ms9pq\" (UID: \"03806442-2813-4de9-9d55-5690b43be899\") " pod="calico-system/goldmane-5b85766d88-ms9pq" Mar 6 01:45:07.488293 containerd[1589]: time="2026-03-06T01:45:07.488232430Z" level=info msg="CreateContainer within sandbox \"08e6c079794a866b95938b3a56830bf3bb3a5267346d24225522a9ef17a96112\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 6 01:45:07.519700 containerd[1589]: time="2026-03-06T01:45:07.519643845Z" level=info msg="CreateContainer within sandbox \"08e6c079794a866b95938b3a56830bf3bb3a5267346d24225522a9ef17a96112\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"864338c59eb7c26fb89a4f8ec0bd00df9b491172e530bf646fb5532603d18f45\"" Mar 6 01:45:07.520548 containerd[1589]: time="2026-03-06T01:45:07.520492592Z" level=info msg="StartContainer for \"864338c59eb7c26fb89a4f8ec0bd00df9b491172e530bf646fb5532603d18f45\"" Mar 6 01:45:07.637860 containerd[1589]: time="2026-03-06T01:45:07.637689800Z" level=info msg="StartContainer for \"864338c59eb7c26fb89a4f8ec0bd00df9b491172e530bf646fb5532603d18f45\" returns successfully" Mar 6 01:45:07.699381 containerd[1589]: time="2026-03-06T01:45:07.699325302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6546896c4d-b56z6,Uid:d60390ac-fb3e-4a75-adc8-f0b708d60ef9,Namespace:calico-system,Attempt:0,}" Mar 6 01:45:07.714272 kubelet[2674]: E0306 01:45:07.713222 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:45:07.714519 containerd[1589]: time="2026-03-06T01:45:07.713952812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-t7pfc,Uid:9d37ef38-21de-4e05-9c50-3273af0abb2b,Namespace:kube-system,Attempt:0,}" Mar 6 01:45:07.721662 kubelet[2674]: E0306 01:45:07.721637 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:45:07.722929 containerd[1589]: time="2026-03-06T01:45:07.722774035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8tj28,Uid:3e0dc3ed-ceac-4026-8359-6c32f19c7d13,Namespace:kube-system,Attempt:0,}" Mar 6 01:45:07.736765 containerd[1589]: time="2026-03-06T01:45:07.736694877Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-5b85766d88-ms9pq,Uid:03806442-2813-4de9-9d55-5690b43be899,Namespace:calico-system,Attempt:0,}" Mar 6 01:45:07.737198 containerd[1589]: time="2026-03-06T01:45:07.736978839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c6d5dc557-wsvgg,Uid:89d93701-56c3-4cbf-a96c-5d197ed2b49a,Namespace:calico-system,Attempt:0,}" Mar 6 01:45:07.750075 containerd[1589]: time="2026-03-06T01:45:07.745378301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b5896ddfd-dq4x8,Uid:7d7dbfad-3005-4c2e-a3a4-8e06935c6c2f,Namespace:calico-system,Attempt:0,}" Mar 6 01:45:07.750075 containerd[1589]: time="2026-03-06T01:45:07.745651910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b5896ddfd-x6rlz,Uid:72733de0-5a44-4e12-b2b1-5c75c2685fea,Namespace:calico-system,Attempt:0,}" Mar 6 01:45:08.209172 containerd[1589]: time="2026-03-06T01:45:08.208786629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rb5xg,Uid:71a31f04-c706-4601-874f-daa9f7b58ab6,Namespace:calico-system,Attempt:0,}" Mar 6 01:45:08.327927 systemd-networkd[1250]: calia2aed348f5f: Link UP Mar 6 01:45:08.329955 systemd-networkd[1250]: calia2aed348f5f: Gained carrier Mar 6 01:45:08.353837 containerd[1589]: 2026-03-06 01:45:07.984 [ERROR][3559] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 6 01:45:08.353837 containerd[1589]: 2026-03-06 01:45:08.038 [INFO][3559] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6546896c4d--b56z6-eth0 whisker-6546896c4d- calico-system d60390ac-fb3e-4a75-adc8-f0b708d60ef9 910 0 2026-03-06 01:44:58 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6546896c4d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6546896c4d-b56z6 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calia2aed348f5f [] [] }} ContainerID="93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d" Namespace="calico-system" Pod="whisker-6546896c4d-b56z6" WorkloadEndpoint="localhost-k8s-whisker--6546896c4d--b56z6-" Mar 6 01:45:08.353837 containerd[1589]: 2026-03-06 01:45:08.038 [INFO][3559] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d" Namespace="calico-system" Pod="whisker-6546896c4d-b56z6" WorkloadEndpoint="localhost-k8s-whisker--6546896c4d--b56z6-eth0" Mar 6 01:45:08.353837 containerd[1589]: 2026-03-06 01:45:08.137 [INFO][3691] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d" HandleID="k8s-pod-network.93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d" Workload="localhost-k8s-whisker--6546896c4d--b56z6-eth0" Mar 6 01:45:08.353837 containerd[1589]: 2026-03-06 01:45:08.166 [INFO][3691] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d" HandleID="k8s-pod-network.93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d" Workload="localhost-k8s-whisker--6546896c4d--b56z6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000539780), 
Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6546896c4d-b56z6", "timestamp":"2026-03-06 01:45:08.137834526 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000ff080)} Mar 6 01:45:08.353837 containerd[1589]: 2026-03-06 01:45:08.166 [INFO][3691] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:45:08.353837 containerd[1589]: 2026-03-06 01:45:08.167 [INFO][3691] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:45:08.353837 containerd[1589]: 2026-03-06 01:45:08.167 [INFO][3691] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 6 01:45:08.353837 containerd[1589]: 2026-03-06 01:45:08.170 [INFO][3691] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d" host="localhost" Mar 6 01:45:08.353837 containerd[1589]: 2026-03-06 01:45:08.238 [INFO][3691] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 6 01:45:08.353837 containerd[1589]: 2026-03-06 01:45:08.258 [INFO][3691] ipam/ipam.go 558: Ran out of existing affine blocks for host host="localhost" Mar 6 01:45:08.353837 containerd[1589]: 2026-03-06 01:45:08.262 [INFO][3691] ipam/ipam.go 575: Tried all affine blocks. Looking for an affine block with space, or a new unclaimed block host="localhost" Mar 6 01:45:08.353837 containerd[1589]: 2026-03-06 01:45:08.265 [INFO][3691] ipam/ipam_block_reader_writer.go 158: Found free block: 192.168.88.128/26 Mar 6 01:45:08.353837 containerd[1589]: 2026-03-06 01:45:08.265 [INFO][3691] ipam/ipam.go 588: Found unclaimed block in 3.478814ms host="localhost" subnet=192.168.88.128/26 Mar 6 01:45:08.353837 containerd[1589]: 2026-03-06 01:45:08.265 [INFO][3691] ipam/ipam_block_reader_writer.go 175: Trying to create affinity in pending state host="localhost" subnet=192.168.88.128/26 Mar 6 01:45:08.353837 containerd[1589]: 2026-03-06 01:45:08.271 [INFO][3691] ipam/ipam_block_reader_writer.go 205: Successfully created pending affinity for block host="localhost" subnet=192.168.88.128/26 Mar 6 01:45:08.353837 containerd[1589]: 2026-03-06 01:45:08.271 [INFO][3691] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 6 01:45:08.353837 containerd[1589]: 2026-03-06 01:45:08.273 [INFO][3691] ipam/ipam.go 165: The referenced block doesn't exist, trying to create it cidr=192.168.88.128/26 host="localhost" Mar 6 01:45:08.353837 containerd[1589]: 2026-03-06 01:45:08.279 [INFO][3691] ipam/ipam.go 172: Wrote affinity as pending cidr=192.168.88.128/26 host="localhost" Mar 6 01:45:08.353837 containerd[1589]: 2026-03-06 01:45:08.281 [INFO][3691] ipam/ipam.go 181: Attempting to claim the block cidr=192.168.88.128/26 host="localhost" Mar 6 01:45:08.353837 containerd[1589]: 2026-03-06 01:45:08.281 [INFO][3691] ipam/ipam_block_reader_writer.go 226: Attempting to create a new block affinityType="host" host="localhost" subnet=192.168.88.128/26 Mar 6 01:45:08.353837 containerd[1589]: 2026-03-06 01:45:08.287 [INFO][3691] ipam/ipam_block_reader_writer.go 267: Successfully created block Mar 6 01:45:08.353837 containerd[1589]: 2026-03-06 01:45:08.287 [INFO][3691] ipam/ipam_block_reader_writer.go 283: Confirming affinity host="localhost" subnet=192.168.88.128/26 Mar 6 01:45:08.353837 
containerd[1589]: 2026-03-06 01:45:08.291 [INFO][3691] ipam/ipam_block_reader_writer.go 298: Successfully confirmed affinity host="localhost" subnet=192.168.88.128/26 Mar 6 01:45:08.353837 containerd[1589]: 2026-03-06 01:45:08.292 [INFO][3691] ipam/ipam.go 623: Block '192.168.88.128/26' has 64 free ips which is more than 1 ips required. host="localhost" subnet=192.168.88.128/26 Mar 6 01:45:08.353837 containerd[1589]: 2026-03-06 01:45:08.292 [INFO][3691] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d" host="localhost" Mar 6 01:45:08.353837 containerd[1589]: 2026-03-06 01:45:08.296 [INFO][3691] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d Mar 6 01:45:08.353837 containerd[1589]: 2026-03-06 01:45:08.301 [INFO][3691] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d" host="localhost" Mar 6 01:45:08.354962 containerd[1589]: 2026-03-06 01:45:08.307 [INFO][3691] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.128/26] block=192.168.88.128/26 handle="k8s-pod-network.93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d" host="localhost" Mar 6 01:45:08.354962 containerd[1589]: 2026-03-06 01:45:08.307 [INFO][3691] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.128/26] handle="k8s-pod-network.93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d" host="localhost" Mar 6 01:45:08.354962 containerd[1589]: 2026-03-06 01:45:08.307 [INFO][3691] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:45:08.354962 containerd[1589]: 2026-03-06 01:45:08.307 [INFO][3691] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.128/26] IPv6=[] ContainerID="93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d" HandleID="k8s-pod-network.93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d" Workload="localhost-k8s-whisker--6546896c4d--b56z6-eth0" Mar 6 01:45:08.354962 containerd[1589]: 2026-03-06 01:45:08.312 [INFO][3559] cni-plugin/k8s.go 418: Populated endpoint ContainerID="93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d" Namespace="calico-system" Pod="whisker-6546896c4d-b56z6" WorkloadEndpoint="localhost-k8s-whisker--6546896c4d--b56z6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6546896c4d--b56z6-eth0", GenerateName:"whisker-6546896c4d-", Namespace:"calico-system", SelfLink:"", UID:"d60390ac-fb3e-4a75-adc8-f0b708d60ef9", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 44, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6546896c4d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6546896c4d-b56z6", Endpoint:"eth0", ServiceAccountName:"whisker", 
IPNetworks:[]string{"192.168.88.128/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia2aed348f5f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:45:08.354962 containerd[1589]: 2026-03-06 01:45:08.312 [INFO][3559] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.128/32] ContainerID="93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d" Namespace="calico-system" Pod="whisker-6546896c4d-b56z6" WorkloadEndpoint="localhost-k8s-whisker--6546896c4d--b56z6-eth0" Mar 6 01:45:08.354962 containerd[1589]: 2026-03-06 01:45:08.312 [INFO][3559] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia2aed348f5f ContainerID="93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d" Namespace="calico-system" Pod="whisker-6546896c4d-b56z6" WorkloadEndpoint="localhost-k8s-whisker--6546896c4d--b56z6-eth0" Mar 6 01:45:08.354962 containerd[1589]: 2026-03-06 01:45:08.330 [INFO][3559] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d" Namespace="calico-system" Pod="whisker-6546896c4d-b56z6" WorkloadEndpoint="localhost-k8s-whisker--6546896c4d--b56z6-eth0" Mar 6 01:45:08.354962 containerd[1589]: 2026-03-06 01:45:08.330 [INFO][3559] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d" Namespace="calico-system" Pod="whisker-6546896c4d-b56z6" WorkloadEndpoint="localhost-k8s-whisker--6546896c4d--b56z6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6546896c4d--b56z6-eth0", GenerateName:"whisker-6546896c4d-", Namespace:"calico-system", SelfLink:"", UID:"d60390ac-fb3e-4a75-adc8-f0b708d60ef9", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 44, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6546896c4d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d", Pod:"whisker-6546896c4d-b56z6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.128/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia2aed348f5f", MAC:"4a:78:1a:f7:8e:1a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:45:08.354962 containerd[1589]: 2026-03-06 01:45:08.345 [INFO][3559] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d" Namespace="calico-system" Pod="whisker-6546896c4d-b56z6" WorkloadEndpoint="localhost-k8s-whisker--6546896c4d--b56z6-eth0" Mar 6 
01:45:08.381787 systemd-networkd[1250]: cali7cfeb59f89a: Link UP Mar 6 01:45:08.383259 systemd-networkd[1250]: cali7cfeb59f89a: Gained carrier Mar 6 01:45:08.400047 containerd[1589]: time="2026-03-06T01:45:08.399852227Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:45:08.400047 containerd[1589]: time="2026-03-06T01:45:08.399990561Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:45:08.400201 containerd[1589]: time="2026-03-06T01:45:08.400003554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:45:08.400352 containerd[1589]: time="2026-03-06T01:45:08.400293786Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:45:08.402039 containerd[1589]: 2026-03-06 01:45:07.986 [ERROR][3601] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 6 01:45:08.402039 containerd[1589]: 2026-03-06 01:45:08.039 [INFO][3601] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--c6d5dc557--wsvgg-eth0 calico-kube-controllers-c6d5dc557- calico-system 89d93701-56c3-4cbf-a96c-5d197ed2b49a 890 0 2026-03-06 01:44:56 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:c6d5dc557 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-c6d5dc557-wsvgg eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7cfeb59f89a [] [] }} ContainerID="869707a182738677206c80076a775681a19df08a1685fbd8948c085da709accd" Namespace="calico-system" Pod="calico-kube-controllers-c6d5dc557-wsvgg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c6d5dc557--wsvgg-" Mar 6 01:45:08.402039 containerd[1589]: 2026-03-06 01:45:08.039 [INFO][3601] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="869707a182738677206c80076a775681a19df08a1685fbd8948c085da709accd" Namespace="calico-system" Pod="calico-kube-controllers-c6d5dc557-wsvgg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c6d5dc557--wsvgg-eth0" Mar 6 01:45:08.402039 containerd[1589]: 2026-03-06 01:45:08.172 [INFO][3709] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="869707a182738677206c80076a775681a19df08a1685fbd8948c085da709accd" HandleID="k8s-pod-network.869707a182738677206c80076a775681a19df08a1685fbd8948c085da709accd" Workload="localhost-k8s-calico--kube--controllers--c6d5dc557--wsvgg-eth0" Mar 6 01:45:08.402039 containerd[1589]: 2026-03-06 01:45:08.186 [INFO][3709] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="869707a182738677206c80076a775681a19df08a1685fbd8948c085da709accd" HandleID="k8s-pod-network.869707a182738677206c80076a775681a19df08a1685fbd8948c085da709accd" Workload="localhost-k8s-calico--kube--controllers--c6d5dc557--wsvgg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000117bb0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", 
"pod":"calico-kube-controllers-c6d5dc557-wsvgg", "timestamp":"2026-03-06 01:45:08.172719149 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000c1340)} Mar 6 01:45:08.402039 containerd[1589]: 2026-03-06 01:45:08.186 [INFO][3709] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:45:08.402039 containerd[1589]: 2026-03-06 01:45:08.307 [INFO][3709] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:45:08.402039 containerd[1589]: 2026-03-06 01:45:08.307 [INFO][3709] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 6 01:45:08.402039 containerd[1589]: 2026-03-06 01:45:08.310 [INFO][3709] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.869707a182738677206c80076a775681a19df08a1685fbd8948c085da709accd" host="localhost" Mar 6 01:45:08.402039 containerd[1589]: 2026-03-06 01:45:08.338 [INFO][3709] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 6 01:45:08.402039 containerd[1589]: 2026-03-06 01:45:08.347 [INFO][3709] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 6 01:45:08.402039 containerd[1589]: 2026-03-06 01:45:08.352 [INFO][3709] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 6 01:45:08.402039 containerd[1589]: 2026-03-06 01:45:08.356 [INFO][3709] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 6 01:45:08.402039 containerd[1589]: 2026-03-06 01:45:08.356 [INFO][3709] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.869707a182738677206c80076a775681a19df08a1685fbd8948c085da709accd" host="localhost" Mar 6 01:45:08.402039 containerd[1589]: 2026-03-06 01:45:08.361 [INFO][3709] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.869707a182738677206c80076a775681a19df08a1685fbd8948c085da709accd Mar 6 01:45:08.402039 containerd[1589]: 2026-03-06 01:45:08.366 [INFO][3709] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.869707a182738677206c80076a775681a19df08a1685fbd8948c085da709accd" host="localhost" Mar 6 01:45:08.402039 containerd[1589]: 2026-03-06 01:45:08.374 [INFO][3709] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.869707a182738677206c80076a775681a19df08a1685fbd8948c085da709accd" host="localhost" Mar 6 01:45:08.402039 containerd[1589]: 2026-03-06 01:45:08.374 [INFO][3709] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.869707a182738677206c80076a775681a19df08a1685fbd8948c085da709accd" host="localhost" Mar 6 01:45:08.402039 containerd[1589]: 2026-03-06 01:45:08.374 [INFO][3709] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 6 01:45:08.402039 containerd[1589]: 2026-03-06 01:45:08.374 [INFO][3709] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="869707a182738677206c80076a775681a19df08a1685fbd8948c085da709accd" HandleID="k8s-pod-network.869707a182738677206c80076a775681a19df08a1685fbd8948c085da709accd" Workload="localhost-k8s-calico--kube--controllers--c6d5dc557--wsvgg-eth0" Mar 6 01:45:08.403210 containerd[1589]: 2026-03-06 01:45:08.378 [INFO][3601] cni-plugin/k8s.go 418: Populated endpoint ContainerID="869707a182738677206c80076a775681a19df08a1685fbd8948c085da709accd" Namespace="calico-system" Pod="calico-kube-controllers-c6d5dc557-wsvgg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c6d5dc557--wsvgg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--c6d5dc557--wsvgg-eth0", GenerateName:"calico-kube-controllers-c6d5dc557-", Namespace:"calico-system", SelfLink:"", UID:"89d93701-56c3-4cbf-a96c-5d197ed2b49a", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 44, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c6d5dc557", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-c6d5dc557-wsvgg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7cfeb59f89a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:45:08.403210 containerd[1589]: 2026-03-06 01:45:08.378 [INFO][3601] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="869707a182738677206c80076a775681a19df08a1685fbd8948c085da709accd" Namespace="calico-system" Pod="calico-kube-controllers-c6d5dc557-wsvgg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c6d5dc557--wsvgg-eth0" Mar 6 01:45:08.403210 containerd[1589]: 2026-03-06 01:45:08.378 [INFO][3601] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7cfeb59f89a ContainerID="869707a182738677206c80076a775681a19df08a1685fbd8948c085da709accd" Namespace="calico-system" Pod="calico-kube-controllers-c6d5dc557-wsvgg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c6d5dc557--wsvgg-eth0" Mar 6 01:45:08.403210 containerd[1589]: 2026-03-06 01:45:08.384 [INFO][3601] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="869707a182738677206c80076a775681a19df08a1685fbd8948c085da709accd" Namespace="calico-system" Pod="calico-kube-controllers-c6d5dc557-wsvgg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c6d5dc557--wsvgg-eth0" Mar 6 01:45:08.403210 containerd[1589]: 2026-03-06 01:45:08.385 [INFO][3601] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="869707a182738677206c80076a775681a19df08a1685fbd8948c085da709accd" Namespace="calico-system" Pod="calico-kube-controllers-c6d5dc557-wsvgg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c6d5dc557--wsvgg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--c6d5dc557--wsvgg-eth0", GenerateName:"calico-kube-controllers-c6d5dc557-", Namespace:"calico-system", SelfLink:"", UID:"89d93701-56c3-4cbf-a96c-5d197ed2b49a", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 44, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c6d5dc557", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"869707a182738677206c80076a775681a19df08a1685fbd8948c085da709accd", Pod:"calico-kube-controllers-c6d5dc557-wsvgg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7cfeb59f89a", MAC:"fa:fa:e3:11:ef:02", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:45:08.403210 containerd[1589]: 2026-03-06 01:45:08.397 [INFO][3601] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="869707a182738677206c80076a775681a19df08a1685fbd8948c085da709accd" Namespace="calico-system" Pod="calico-kube-controllers-c6d5dc557-wsvgg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c6d5dc557--wsvgg-eth0" Mar 6 01:45:08.441169 containerd[1589]: time="2026-03-06T01:45:08.440273254Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:45:08.441169 containerd[1589]: time="2026-03-06T01:45:08.440329387Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:45:08.441169 containerd[1589]: time="2026-03-06T01:45:08.440343162Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:45:08.441169 containerd[1589]: time="2026-03-06T01:45:08.440496715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:45:08.462549 systemd-resolved[1472]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 6 01:45:08.486906 systemd-networkd[1250]: cali570dc42febe: Link UP Mar 6 01:45:08.487142 systemd-networkd[1250]: cali570dc42febe: Gained carrier Mar 6 01:45:08.492389 kubelet[2674]: I0306 01:45:08.491946 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-7wpdt" podStartSLOduration=2.779289633 podStartE2EDuration="12.491931604s" podCreationTimestamp="2026-03-06 01:44:56 +0000 UTC" firstStartedPulling="2026-03-06 01:44:56.76802556 +0000 UTC m=+15.834753697" lastFinishedPulling="2026-03-06 01:45:06.480667531 +0000 UTC m=+25.547395668" observedRunningTime="2026-03-06 01:45:08.489119809 +0000 UTC m=+27.555847946" watchObservedRunningTime="2026-03-06 01:45:08.491931604 +0000 UTC m=+27.558659740" Mar 6 01:45:08.500263 systemd-resolved[1472]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 6 01:45:08.534119 containerd[1589]: 2026-03-06 01:45:07.985 [ERROR][3614] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 6 01:45:08.534119 containerd[1589]: 2026-03-06 01:45:08.038 [INFO][3614] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6b5896ddfd--dq4x8-eth0 calico-apiserver-6b5896ddfd- calico-system 7d7dbfad-3005-4c2e-a3a4-8e06935c6c2f 889 0 2026-03-06 01:44:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6b5896ddfd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6b5896ddfd-dq4x8 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali570dc42febe [] [] }} ContainerID="9e106a7e455d72c8a0a17f77771bba1e86e6622d2b77dee17b4e482862fedb70" Namespace="calico-system" Pod="calico-apiserver-6b5896ddfd-dq4x8" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b5896ddfd--dq4x8-" Mar 6 01:45:08.534119 containerd[1589]: 2026-03-06 01:45:08.038 [INFO][3614] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9e106a7e455d72c8a0a17f77771bba1e86e6622d2b77dee17b4e482862fedb70" Namespace="calico-system" Pod="calico-apiserver-6b5896ddfd-dq4x8" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b5896ddfd--dq4x8-eth0" Mar 6 01:45:08.534119 containerd[1589]: 2026-03-06 01:45:08.196 [INFO][3698] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9e106a7e455d72c8a0a17f77771bba1e86e6622d2b77dee17b4e482862fedb70" HandleID="k8s-pod-network.9e106a7e455d72c8a0a17f77771bba1e86e6622d2b77dee17b4e482862fedb70" Workload="localhost-k8s-calico--apiserver--6b5896ddfd--dq4x8-eth0" Mar 6 01:45:08.534119 containerd[1589]: 2026-03-06 01:45:08.208 [INFO][3698] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="9e106a7e455d72c8a0a17f77771bba1e86e6622d2b77dee17b4e482862fedb70" HandleID="k8s-pod-network.9e106a7e455d72c8a0a17f77771bba1e86e6622d2b77dee17b4e482862fedb70" Workload="localhost-k8s-calico--apiserver--6b5896ddfd--dq4x8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e06f0), 
Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-6b5896ddfd-dq4x8", "timestamp":"2026-03-06 01:45:08.196641169 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00031c580)} Mar 6 01:45:08.534119 containerd[1589]: 2026-03-06 01:45:08.208 [INFO][3698] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:45:08.534119 containerd[1589]: 2026-03-06 01:45:08.374 [INFO][3698] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:45:08.534119 containerd[1589]: 2026-03-06 01:45:08.374 [INFO][3698] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 6 01:45:08.534119 containerd[1589]: 2026-03-06 01:45:08.412 [INFO][3698] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.9e106a7e455d72c8a0a17f77771bba1e86e6622d2b77dee17b4e482862fedb70" host="localhost" Mar 6 01:45:08.534119 containerd[1589]: 2026-03-06 01:45:08.439 [INFO][3698] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 6 01:45:08.534119 containerd[1589]: 2026-03-06 01:45:08.448 [INFO][3698] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 6 01:45:08.534119 containerd[1589]: 2026-03-06 01:45:08.451 [INFO][3698] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 6 01:45:08.534119 containerd[1589]: 2026-03-06 01:45:08.454 [INFO][3698] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 6 01:45:08.534119 containerd[1589]: 2026-03-06 01:45:08.454 [INFO][3698] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9e106a7e455d72c8a0a17f77771bba1e86e6622d2b77dee17b4e482862fedb70" host="localhost" Mar 6 01:45:08.534119 containerd[1589]: 2026-03-06 01:45:08.457 [INFO][3698] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.9e106a7e455d72c8a0a17f77771bba1e86e6622d2b77dee17b4e482862fedb70 Mar 6 01:45:08.534119 containerd[1589]: 2026-03-06 01:45:08.463 [INFO][3698] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9e106a7e455d72c8a0a17f77771bba1e86e6622d2b77dee17b4e482862fedb70" host="localhost" Mar 6 01:45:08.534119 containerd[1589]: 2026-03-06 01:45:08.472 [INFO][3698] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.9e106a7e455d72c8a0a17f77771bba1e86e6622d2b77dee17b4e482862fedb70" host="localhost" Mar 6 01:45:08.534119 containerd[1589]: 2026-03-06 01:45:08.472 [INFO][3698] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.9e106a7e455d72c8a0a17f77771bba1e86e6622d2b77dee17b4e482862fedb70" host="localhost" Mar 6 01:45:08.534119 containerd[1589]: 2026-03-06 01:45:08.472 [INFO][3698] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 6 01:45:08.534119 containerd[1589]: 2026-03-06 01:45:08.472 [INFO][3698] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="9e106a7e455d72c8a0a17f77771bba1e86e6622d2b77dee17b4e482862fedb70" HandleID="k8s-pod-network.9e106a7e455d72c8a0a17f77771bba1e86e6622d2b77dee17b4e482862fedb70" Workload="localhost-k8s-calico--apiserver--6b5896ddfd--dq4x8-eth0" Mar 6 01:45:08.535205 containerd[1589]: 2026-03-06 01:45:08.484 [INFO][3614] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9e106a7e455d72c8a0a17f77771bba1e86e6622d2b77dee17b4e482862fedb70" Namespace="calico-system" Pod="calico-apiserver-6b5896ddfd-dq4x8" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b5896ddfd--dq4x8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6b5896ddfd--dq4x8-eth0", GenerateName:"calico-apiserver-6b5896ddfd-", Namespace:"calico-system", SelfLink:"", UID:"7d7dbfad-3005-4c2e-a3a4-8e06935c6c2f", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 44, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b5896ddfd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6b5896ddfd-dq4x8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali570dc42febe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:45:08.535205 containerd[1589]: 2026-03-06 01:45:08.484 [INFO][3614] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="9e106a7e455d72c8a0a17f77771bba1e86e6622d2b77dee17b4e482862fedb70" Namespace="calico-system" Pod="calico-apiserver-6b5896ddfd-dq4x8" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b5896ddfd--dq4x8-eth0" Mar 6 01:45:08.535205 containerd[1589]: 2026-03-06 01:45:08.484 [INFO][3614] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali570dc42febe ContainerID="9e106a7e455d72c8a0a17f77771bba1e86e6622d2b77dee17b4e482862fedb70" Namespace="calico-system" Pod="calico-apiserver-6b5896ddfd-dq4x8" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b5896ddfd--dq4x8-eth0" Mar 6 01:45:08.535205 containerd[1589]: 2026-03-06 01:45:08.490 [INFO][3614] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9e106a7e455d72c8a0a17f77771bba1e86e6622d2b77dee17b4e482862fedb70" Namespace="calico-system" Pod="calico-apiserver-6b5896ddfd-dq4x8" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b5896ddfd--dq4x8-eth0" Mar 6 01:45:08.535205 containerd[1589]: 2026-03-06 01:45:08.491 [INFO][3614] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="9e106a7e455d72c8a0a17f77771bba1e86e6622d2b77dee17b4e482862fedb70" Namespace="calico-system" Pod="calico-apiserver-6b5896ddfd-dq4x8" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b5896ddfd--dq4x8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6b5896ddfd--dq4x8-eth0", GenerateName:"calico-apiserver-6b5896ddfd-", Namespace:"calico-system", SelfLink:"", UID:"7d7dbfad-3005-4c2e-a3a4-8e06935c6c2f", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 44, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b5896ddfd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9e106a7e455d72c8a0a17f77771bba1e86e6622d2b77dee17b4e482862fedb70", Pod:"calico-apiserver-6b5896ddfd-dq4x8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali570dc42febe", MAC:"5e:38:91:60:1c:36", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:45:08.535205 containerd[1589]: 2026-03-06 01:45:08.513 [INFO][3614] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9e106a7e455d72c8a0a17f77771bba1e86e6622d2b77dee17b4e482862fedb70" Namespace="calico-system" Pod="calico-apiserver-6b5896ddfd-dq4x8" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b5896ddfd--dq4x8-eth0" Mar 6 01:45:08.544714 containerd[1589]: time="2026-03-06T01:45:08.543649095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6546896c4d-b56z6,Uid:d60390ac-fb3e-4a75-adc8-f0b708d60ef9,Namespace:calico-system,Attempt:0,} returns sandbox id \"93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d\"" Mar 6 01:45:08.546532 containerd[1589]: time="2026-03-06T01:45:08.545987904Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 6 01:45:08.563596 containerd[1589]: time="2026-03-06T01:45:08.563502373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c6d5dc557-wsvgg,Uid:89d93701-56c3-4cbf-a96c-5d197ed2b49a,Namespace:calico-system,Attempt:0,} returns sandbox id \"869707a182738677206c80076a775681a19df08a1685fbd8948c085da709accd\"" Mar 6 01:45:08.568264 containerd[1589]: time="2026-03-06T01:45:08.568105646Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:45:08.568264 containerd[1589]: time="2026-03-06T01:45:08.568158553Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:45:08.568264 containerd[1589]: time="2026-03-06T01:45:08.568170304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:45:08.568428 containerd[1589]: time="2026-03-06T01:45:08.568259909Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:45:08.587139 systemd-networkd[1250]: caliac8afc0c3a0: Link UP Mar 6 01:45:08.588732 systemd-networkd[1250]: caliac8afc0c3a0: Gained carrier Mar 6 01:45:08.608606 containerd[1589]: 2026-03-06 01:45:07.975 [ERROR][3581] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 6 01:45:08.608606 containerd[1589]: 2026-03-06 01:45:08.038 [INFO][3581] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--8tj28-eth0 coredns-674b8bbfcf- kube-system 3e0dc3ed-ceac-4026-8359-6c32f19c7d13 893 0 2026-03-06 01:44:44 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-8tj28 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliac8afc0c3a0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="304905779ed2b2e50cb6933262167b4e7dc1427cfbf86b9367374be1d14ce0f4" Namespace="kube-system" Pod="coredns-674b8bbfcf-8tj28" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8tj28-" Mar 6 01:45:08.608606 containerd[1589]: 2026-03-06 01:45:08.038 [INFO][3581] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="304905779ed2b2e50cb6933262167b4e7dc1427cfbf86b9367374be1d14ce0f4" Namespace="kube-system" Pod="coredns-674b8bbfcf-8tj28" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8tj28-eth0" Mar 6 01:45:08.608606 containerd[1589]: 2026-03-06 01:45:08.192 [INFO][3694] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="304905779ed2b2e50cb6933262167b4e7dc1427cfbf86b9367374be1d14ce0f4" HandleID="k8s-pod-network.304905779ed2b2e50cb6933262167b4e7dc1427cfbf86b9367374be1d14ce0f4" Workload="localhost-k8s-coredns--674b8bbfcf--8tj28-eth0" Mar 6 01:45:08.608606 containerd[1589]: 2026-03-06 01:45:08.209 [INFO][3694] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="304905779ed2b2e50cb6933262167b4e7dc1427cfbf86b9367374be1d14ce0f4" HandleID="k8s-pod-network.304905779ed2b2e50cb6933262167b4e7dc1427cfbf86b9367374be1d14ce0f4" Workload="localhost-k8s-coredns--674b8bbfcf--8tj28-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004ec60), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-8tj28", "timestamp":"2026-03-06 01:45:08.192124171 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000196f20)} Mar 6 01:45:08.608606 containerd[1589]: 2026-03-06 01:45:08.210 [INFO][3694] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:45:08.608606 containerd[1589]: 2026-03-06 01:45:08.472 [INFO][3694] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 6 01:45:08.608606 containerd[1589]: 2026-03-06 01:45:08.472 [INFO][3694] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 6 01:45:08.608606 containerd[1589]: 2026-03-06 01:45:08.514 [INFO][3694] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.304905779ed2b2e50cb6933262167b4e7dc1427cfbf86b9367374be1d14ce0f4" host="localhost" Mar 6 01:45:08.608606 containerd[1589]: 2026-03-06 01:45:08.540 [INFO][3694] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 6 01:45:08.608606 containerd[1589]: 2026-03-06 01:45:08.550 [INFO][3694] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 6 01:45:08.608606 containerd[1589]: 2026-03-06 01:45:08.552 [INFO][3694] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 6 01:45:08.608606 containerd[1589]: 2026-03-06 01:45:08.555 [INFO][3694] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 6 01:45:08.608606 containerd[1589]: 2026-03-06 01:45:08.556 [INFO][3694] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.304905779ed2b2e50cb6933262167b4e7dc1427cfbf86b9367374be1d14ce0f4" host="localhost" Mar 6 01:45:08.608606 containerd[1589]: 2026-03-06 01:45:08.558 [INFO][3694] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.304905779ed2b2e50cb6933262167b4e7dc1427cfbf86b9367374be1d14ce0f4 Mar 6 01:45:08.608606 containerd[1589]: 2026-03-06 01:45:08.562 [INFO][3694] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.304905779ed2b2e50cb6933262167b4e7dc1427cfbf86b9367374be1d14ce0f4" host="localhost" Mar 6 01:45:08.608606 containerd[1589]: 2026-03-06 01:45:08.572 [INFO][3694] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.304905779ed2b2e50cb6933262167b4e7dc1427cfbf86b9367374be1d14ce0f4" host="localhost" Mar 6 01:45:08.608606 containerd[1589]: 2026-03-06 01:45:08.572 [INFO][3694] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.304905779ed2b2e50cb6933262167b4e7dc1427cfbf86b9367374be1d14ce0f4" host="localhost" Mar 6 01:45:08.608606 containerd[1589]: 2026-03-06 01:45:08.572 [INFO][3694] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 6 01:45:08.608606 containerd[1589]: 2026-03-06 01:45:08.572 [INFO][3694] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="304905779ed2b2e50cb6933262167b4e7dc1427cfbf86b9367374be1d14ce0f4" HandleID="k8s-pod-network.304905779ed2b2e50cb6933262167b4e7dc1427cfbf86b9367374be1d14ce0f4" Workload="localhost-k8s-coredns--674b8bbfcf--8tj28-eth0" Mar 6 01:45:08.609701 containerd[1589]: 2026-03-06 01:45:08.576 [INFO][3581] cni-plugin/k8s.go 418: Populated endpoint ContainerID="304905779ed2b2e50cb6933262167b4e7dc1427cfbf86b9367374be1d14ce0f4" Namespace="kube-system" Pod="coredns-674b8bbfcf-8tj28" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8tj28-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--8tj28-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"3e0dc3ed-ceac-4026-8359-6c32f19c7d13", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 44, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-8tj28", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliac8afc0c3a0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:45:08.609701 containerd[1589]: 2026-03-06 01:45:08.577 [INFO][3581] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="304905779ed2b2e50cb6933262167b4e7dc1427cfbf86b9367374be1d14ce0f4" Namespace="kube-system" Pod="coredns-674b8bbfcf-8tj28" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8tj28-eth0" Mar 6 01:45:08.609701 containerd[1589]: 2026-03-06 01:45:08.577 [INFO][3581] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliac8afc0c3a0 ContainerID="304905779ed2b2e50cb6933262167b4e7dc1427cfbf86b9367374be1d14ce0f4" Namespace="kube-system" Pod="coredns-674b8bbfcf-8tj28" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8tj28-eth0" Mar 6 01:45:08.609701 containerd[1589]: 2026-03-06 01:45:08.591 [INFO][3581] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="304905779ed2b2e50cb6933262167b4e7dc1427cfbf86b9367374be1d14ce0f4" Namespace="kube-system" Pod="coredns-674b8bbfcf-8tj28" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8tj28-eth0" Mar 6 01:45:08.609701 
containerd[1589]: 2026-03-06 01:45:08.592 [INFO][3581] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="304905779ed2b2e50cb6933262167b4e7dc1427cfbf86b9367374be1d14ce0f4" Namespace="kube-system" Pod="coredns-674b8bbfcf-8tj28" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8tj28-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--8tj28-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"3e0dc3ed-ceac-4026-8359-6c32f19c7d13", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 44, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"304905779ed2b2e50cb6933262167b4e7dc1427cfbf86b9367374be1d14ce0f4", Pod:"coredns-674b8bbfcf-8tj28", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliac8afc0c3a0", MAC:"32:2d:10:4f:dc:36", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:45:08.609701 containerd[1589]: 2026-03-06 01:45:08.604 [INFO][3581] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="304905779ed2b2e50cb6933262167b4e7dc1427cfbf86b9367374be1d14ce0f4" Namespace="kube-system" Pod="coredns-674b8bbfcf-8tj28" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--8tj28-eth0" Mar 6 01:45:08.614009 systemd-resolved[1472]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 6 01:45:08.637541 containerd[1589]: time="2026-03-06T01:45:08.637229588Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:45:08.638889 containerd[1589]: time="2026-03-06T01:45:08.638793811Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:45:08.639146 containerd[1589]: time="2026-03-06T01:45:08.638894455Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:45:08.639304 containerd[1589]: time="2026-03-06T01:45:08.639247805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:45:08.662757 containerd[1589]: time="2026-03-06T01:45:08.662686407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b5896ddfd-dq4x8,Uid:7d7dbfad-3005-4c2e-a3a4-8e06935c6c2f,Namespace:calico-system,Attempt:0,} returns sandbox id \"9e106a7e455d72c8a0a17f77771bba1e86e6622d2b77dee17b4e482862fedb70\"" Mar 6 01:45:08.690943 systemd-networkd[1250]: cali7d613bb7238: Link UP Mar 6 01:45:08.690983 systemd-resolved[1472]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 6 01:45:08.691238 systemd-networkd[1250]: cali7d613bb7238: Gained carrier Mar 6 01:45:08.712401 containerd[1589]: 2026-03-06 01:45:08.155 [INFO][3699] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d34e36d3e0ed8d7416605f48de068b608d798573c4d3779dbe2dbc80660483c1" Mar 6 01:45:08.712401 containerd[1589]: 2026-03-06 01:45:08.159 [INFO][3699] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d34e36d3e0ed8d7416605f48de068b608d798573c4d3779dbe2dbc80660483c1" iface="eth0" netns="/var/run/netns/cni-30ad771d-782f-8960-7c07-b0300689c8cb" Mar 6 01:45:08.712401 containerd[1589]: 2026-03-06 01:45:08.160 [INFO][3699] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d34e36d3e0ed8d7416605f48de068b608d798573c4d3779dbe2dbc80660483c1" iface="eth0" netns="/var/run/netns/cni-30ad771d-782f-8960-7c07-b0300689c8cb" Mar 6 01:45:08.712401 containerd[1589]: 2026-03-06 01:45:08.173 [INFO][3699] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d34e36d3e0ed8d7416605f48de068b608d798573c4d3779dbe2dbc80660483c1" iface="eth0" netns="/var/run/netns/cni-30ad771d-782f-8960-7c07-b0300689c8cb" Mar 6 01:45:08.712401 containerd[1589]: 2026-03-06 01:45:08.173 [INFO][3699] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d34e36d3e0ed8d7416605f48de068b608d798573c4d3779dbe2dbc80660483c1" Mar 6 01:45:08.712401 containerd[1589]: 2026-03-06 01:45:08.173 [INFO][3699] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d34e36d3e0ed8d7416605f48de068b608d798573c4d3779dbe2dbc80660483c1" Mar 6 01:45:08.712401 containerd[1589]: 2026-03-06 01:45:08.230 [INFO][3740] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d34e36d3e0ed8d7416605f48de068b608d798573c4d3779dbe2dbc80660483c1" HandleID="k8s-pod-network.d34e36d3e0ed8d7416605f48de068b608d798573c4d3779dbe2dbc80660483c1" Workload="localhost-k8s-coredns--674b8bbfcf--t7pfc-eth0" Mar 6 01:45:08.712401 containerd[1589]: 2026-03-06 01:45:08.230 [INFO][3740] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:45:08.712401 containerd[1589]: 2026-03-06 01:45:08.678 [INFO][3740] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:45:08.712401 containerd[1589]: 2026-03-06 01:45:08.692 [WARNING][3740] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d34e36d3e0ed8d7416605f48de068b608d798573c4d3779dbe2dbc80660483c1" HandleID="k8s-pod-network.d34e36d3e0ed8d7416605f48de068b608d798573c4d3779dbe2dbc80660483c1" Workload="localhost-k8s-coredns--674b8bbfcf--t7pfc-eth0" Mar 6 01:45:08.712401 containerd[1589]: 2026-03-06 01:45:08.692 [INFO][3740] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d34e36d3e0ed8d7416605f48de068b608d798573c4d3779dbe2dbc80660483c1" HandleID="k8s-pod-network.d34e36d3e0ed8d7416605f48de068b608d798573c4d3779dbe2dbc80660483c1" Workload="localhost-k8s-coredns--674b8bbfcf--t7pfc-eth0" Mar 6 01:45:08.712401 containerd[1589]: 2026-03-06 01:45:08.696 [INFO][3740] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:45:08.712401 containerd[1589]: 2026-03-06 01:45:08.705 [INFO][3699] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d34e36d3e0ed8d7416605f48de068b608d798573c4d3779dbe2dbc80660483c1" Mar 6 01:45:08.729497 containerd[1589]: 2026-03-06 01:45:07.997 [ERROR][3632] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 6 01:45:08.729497 containerd[1589]: 2026-03-06 01:45:08.045 [INFO][3632] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6b5896ddfd--x6rlz-eth0 calico-apiserver-6b5896ddfd- calico-system 72733de0-5a44-4e12-b2b1-5c75c2685fea 891 0 2026-03-06 01:44:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6b5896ddfd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6b5896ddfd-x6rlz eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali7d613bb7238 [] [] }} ContainerID="a7025d1b75745a47da65782711563154cfac7a90f668ac4956dd2c60e514c173" Namespace="calico-system" Pod="calico-apiserver-6b5896ddfd-x6rlz" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b5896ddfd--x6rlz-" Mar 6 01:45:08.729497 containerd[1589]: 2026-03-06 01:45:08.045 [INFO][3632] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a7025d1b75745a47da65782711563154cfac7a90f668ac4956dd2c60e514c173" Namespace="calico-system" Pod="calico-apiserver-6b5896ddfd-x6rlz" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b5896ddfd--x6rlz-eth0" Mar 6 01:45:08.729497 containerd[1589]: 2026-03-06 01:45:08.204 [INFO][3701] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a7025d1b75745a47da65782711563154cfac7a90f668ac4956dd2c60e514c173" HandleID="k8s-pod-network.a7025d1b75745a47da65782711563154cfac7a90f668ac4956dd2c60e514c173" Workload="localhost-k8s-calico--apiserver--6b5896ddfd--x6rlz-eth0" Mar 6 01:45:08.729497 containerd[1589]: 2026-03-06 01:45:08.225 [INFO][3701] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="a7025d1b75745a47da65782711563154cfac7a90f668ac4956dd2c60e514c173" HandleID="k8s-pod-network.a7025d1b75745a47da65782711563154cfac7a90f668ac4956dd2c60e514c173" Workload="localhost-k8s-calico--apiserver--6b5896ddfd--x6rlz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ef030), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-6b5896ddfd-x6rlz", "timestamp":"2026-03-06 01:45:08.204338467 +0000 UTC"}, 
Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004d6dc0)} Mar 6 01:45:08.729497 containerd[1589]: 2026-03-06 01:45:08.225 [INFO][3701] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:45:08.729497 containerd[1589]: 2026-03-06 01:45:08.572 [INFO][3701] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:45:08.729497 containerd[1589]: 2026-03-06 01:45:08.573 [INFO][3701] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 6 01:45:08.729497 containerd[1589]: 2026-03-06 01:45:08.613 [INFO][3701] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.a7025d1b75745a47da65782711563154cfac7a90f668ac4956dd2c60e514c173" host="localhost" Mar 6 01:45:08.729497 containerd[1589]: 2026-03-06 01:45:08.641 [INFO][3701] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 6 01:45:08.729497 containerd[1589]: 2026-03-06 01:45:08.649 [INFO][3701] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 6 01:45:08.729497 containerd[1589]: 2026-03-06 01:45:08.651 [INFO][3701] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 6 01:45:08.729497 containerd[1589]: 2026-03-06 01:45:08.654 [INFO][3701] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 6 01:45:08.729497 containerd[1589]: 2026-03-06 01:45:08.655 [INFO][3701] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a7025d1b75745a47da65782711563154cfac7a90f668ac4956dd2c60e514c173" host="localhost" Mar 6 01:45:08.729497 containerd[1589]: 2026-03-06 01:45:08.657 [INFO][3701] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.a7025d1b75745a47da65782711563154cfac7a90f668ac4956dd2c60e514c173 Mar 6 01:45:08.729497 containerd[1589]: 2026-03-06 01:45:08.663 [INFO][3701] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a7025d1b75745a47da65782711563154cfac7a90f668ac4956dd2c60e514c173" host="localhost" Mar 6 01:45:08.729497 containerd[1589]: 2026-03-06 01:45:08.678 [INFO][3701] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.a7025d1b75745a47da65782711563154cfac7a90f668ac4956dd2c60e514c173" host="localhost" Mar 6 01:45:08.729497 containerd[1589]: 2026-03-06 01:45:08.678 [INFO][3701] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.a7025d1b75745a47da65782711563154cfac7a90f668ac4956dd2c60e514c173" host="localhost" Mar 6 01:45:08.729497 containerd[1589]: 2026-03-06 01:45:08.678 [INFO][3701] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
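
Reading the two IPAM traces side by side shows why the host-wide lock matters: handler [3694] acquires it at 01:45:08.472 and claims 192.168.88.131, while handler [3701], waiting since 01:45:08.225, only gets the lock at 01:45:08.572, after the first release, and therefore claims the next free address, 192.168.88.132, from the same block. The toy Go sketch below (a deliberately simplified free-counter model, not Calico's actual block/compare-and-swap implementation) illustrates how serializing claims behind one lock guarantees distinct, sequential assignments for concurrent CNI ADDs:

package main

import (
	"fmt"
	"sync"
)

// blockAllocator is a toy stand-in for per-host allocation from one /26 block:
// a single lock serializes claims, so two concurrent requests can never be
// handed the same address.
type blockAllocator struct {
	mu   sync.Mutex
	next int // offset of the next unclaimed address within 192.168.88.128/26
}

func (b *blockAllocator) claim() string {
	b.mu.Lock()         // "Acquired host-wide IPAM lock."
	defer b.mu.Unlock() // "Released host-wide IPAM lock."
	addr := fmt.Sprintf("192.168.88.%d/26", 128+b.next)
	b.next++
	return addr
}

func main() {
	alloc := &blockAllocator{next: 3} // .131 is the next free offset at this point in the log
	var wg sync.WaitGroup
	for i := 0; i < 2; i++ { // two concurrent CNI ADDs, as in the log
		wg.Add(1)
		go func() {
			defer wg.Done()
			fmt.Println(alloc.claim()) // prints 192.168.88.131/26 and 192.168.88.132/26, once each
		}()
	}
	wg.Wait()
}
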
Mar 6 01:45:08.729497 containerd[1589]: 2026-03-06 01:45:08.678 [INFO][3701] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="a7025d1b75745a47da65782711563154cfac7a90f668ac4956dd2c60e514c173" HandleID="k8s-pod-network.a7025d1b75745a47da65782711563154cfac7a90f668ac4956dd2c60e514c173" Workload="localhost-k8s-calico--apiserver--6b5896ddfd--x6rlz-eth0" Mar 6 01:45:08.731438 containerd[1589]: 2026-03-06 01:45:08.684 [INFO][3632] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a7025d1b75745a47da65782711563154cfac7a90f668ac4956dd2c60e514c173" Namespace="calico-system" Pod="calico-apiserver-6b5896ddfd-x6rlz" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b5896ddfd--x6rlz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6b5896ddfd--x6rlz-eth0", GenerateName:"calico-apiserver-6b5896ddfd-", Namespace:"calico-system", SelfLink:"", UID:"72733de0-5a44-4e12-b2b1-5c75c2685fea", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 44, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b5896ddfd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6b5896ddfd-x6rlz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali7d613bb7238", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:45:08.731438 containerd[1589]: 2026-03-06 01:45:08.685 [INFO][3632] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="a7025d1b75745a47da65782711563154cfac7a90f668ac4956dd2c60e514c173" Namespace="calico-system" Pod="calico-apiserver-6b5896ddfd-x6rlz" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b5896ddfd--x6rlz-eth0" Mar 6 01:45:08.731438 containerd[1589]: 2026-03-06 01:45:08.685 [INFO][3632] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7d613bb7238 ContainerID="a7025d1b75745a47da65782711563154cfac7a90f668ac4956dd2c60e514c173" Namespace="calico-system" Pod="calico-apiserver-6b5896ddfd-x6rlz" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b5896ddfd--x6rlz-eth0" Mar 6 01:45:08.731438 containerd[1589]: 2026-03-06 01:45:08.700 [INFO][3632] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a7025d1b75745a47da65782711563154cfac7a90f668ac4956dd2c60e514c173" Namespace="calico-system" Pod="calico-apiserver-6b5896ddfd-x6rlz" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b5896ddfd--x6rlz-eth0" Mar 6 01:45:08.731438 containerd[1589]: 2026-03-06 01:45:08.701 [INFO][3632] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="a7025d1b75745a47da65782711563154cfac7a90f668ac4956dd2c60e514c173" Namespace="calico-system" Pod="calico-apiserver-6b5896ddfd-x6rlz" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b5896ddfd--x6rlz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6b5896ddfd--x6rlz-eth0", GenerateName:"calico-apiserver-6b5896ddfd-", Namespace:"calico-system", SelfLink:"", UID:"72733de0-5a44-4e12-b2b1-5c75c2685fea", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 44, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b5896ddfd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a7025d1b75745a47da65782711563154cfac7a90f668ac4956dd2c60e514c173", Pod:"calico-apiserver-6b5896ddfd-x6rlz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali7d613bb7238", MAC:"5e:2c:24:22:cd:7e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:45:08.731438 containerd[1589]: 2026-03-06 01:45:08.718 [INFO][3632] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a7025d1b75745a47da65782711563154cfac7a90f668ac4956dd2c60e514c173" Namespace="calico-system" Pod="calico-apiserver-6b5896ddfd-x6rlz" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b5896ddfd--x6rlz-eth0" Mar 6 01:45:08.746519 containerd[1589]: time="2026-03-06T01:45:08.744817269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8tj28,Uid:3e0dc3ed-ceac-4026-8359-6c32f19c7d13,Namespace:kube-system,Attempt:0,} returns sandbox id \"304905779ed2b2e50cb6933262167b4e7dc1427cfbf86b9367374be1d14ce0f4\"" Mar 6 01:45:08.747873 containerd[1589]: time="2026-03-06T01:45:08.747736049Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-t7pfc,Uid:9d37ef38-21de-4e05-9c50-3273af0abb2b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d34e36d3e0ed8d7416605f48de068b608d798573c4d3779dbe2dbc80660483c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:45:08.748134 kubelet[2674]: E0306 01:45:08.748072 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:45:08.758777 kubelet[2674]: E0306 01:45:08.758585 2674 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d34e36d3e0ed8d7416605f48de068b608d798573c4d3779dbe2dbc80660483c1\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:45:08.758777 kubelet[2674]: E0306 01:45:08.758656 2674 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d34e36d3e0ed8d7416605f48de068b608d798573c4d3779dbe2dbc80660483c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-t7pfc" Mar 6 01:45:08.758777 kubelet[2674]: E0306 01:45:08.758690 2674 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d34e36d3e0ed8d7416605f48de068b608d798573c4d3779dbe2dbc80660483c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-t7pfc" Mar 6 01:45:08.759006 kubelet[2674]: E0306 01:45:08.758738 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-t7pfc_kube-system(9d37ef38-21de-4e05-9c50-3273af0abb2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-t7pfc_kube-system(9d37ef38-21de-4e05-9c50-3273af0abb2b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d34e36d3e0ed8d7416605f48de068b608d798573c4d3779dbe2dbc80660483c1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-t7pfc" podUID="9d37ef38-21de-4e05-9c50-3273af0abb2b" Mar 6 01:45:08.762958 containerd[1589]: time="2026-03-06T01:45:08.762913236Z" level=info msg="CreateContainer within sandbox \"304905779ed2b2e50cb6933262167b4e7dc1427cfbf86b9367374be1d14ce0f4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 6 01:45:08.764265 containerd[1589]: 2026-03-06 01:45:08.223 [INFO][3684] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9364b98565bf0017acd01461b1b6924ab89e2554f4d487dc2f141f617c9afb0e" Mar 6 01:45:08.764265 containerd[1589]: 2026-03-06 01:45:08.223 [INFO][3684] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9364b98565bf0017acd01461b1b6924ab89e2554f4d487dc2f141f617c9afb0e" iface="eth0" netns="/var/run/netns/cni-54ee95aa-8b27-35f6-5e1a-f472b8992d3e" Mar 6 01:45:08.764265 containerd[1589]: 2026-03-06 01:45:08.224 [INFO][3684] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9364b98565bf0017acd01461b1b6924ab89e2554f4d487dc2f141f617c9afb0e" iface="eth0" netns="/var/run/netns/cni-54ee95aa-8b27-35f6-5e1a-f472b8992d3e" Mar 6 01:45:08.764265 containerd[1589]: 2026-03-06 01:45:08.224 [INFO][3684] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="9364b98565bf0017acd01461b1b6924ab89e2554f4d487dc2f141f617c9afb0e" iface="eth0" netns="/var/run/netns/cni-54ee95aa-8b27-35f6-5e1a-f472b8992d3e" Mar 6 01:45:08.764265 containerd[1589]: 2026-03-06 01:45:08.224 [INFO][3684] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9364b98565bf0017acd01461b1b6924ab89e2554f4d487dc2f141f617c9afb0e" Mar 6 01:45:08.764265 containerd[1589]: 2026-03-06 01:45:08.224 [INFO][3684] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9364b98565bf0017acd01461b1b6924ab89e2554f4d487dc2f141f617c9afb0e" Mar 6 01:45:08.764265 containerd[1589]: 2026-03-06 01:45:08.275 [INFO][3757] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9364b98565bf0017acd01461b1b6924ab89e2554f4d487dc2f141f617c9afb0e" HandleID="k8s-pod-network.9364b98565bf0017acd01461b1b6924ab89e2554f4d487dc2f141f617c9afb0e" Workload="localhost-k8s-goldmane--5b85766d88--ms9pq-eth0" Mar 6 01:45:08.764265 containerd[1589]: 2026-03-06 01:45:08.276 [INFO][3757] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:45:08.764265 containerd[1589]: 2026-03-06 01:45:08.697 [INFO][3757] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:45:08.764265 containerd[1589]: 2026-03-06 01:45:08.735 [WARNING][3757] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="9364b98565bf0017acd01461b1b6924ab89e2554f4d487dc2f141f617c9afb0e" HandleID="k8s-pod-network.9364b98565bf0017acd01461b1b6924ab89e2554f4d487dc2f141f617c9afb0e" Workload="localhost-k8s-goldmane--5b85766d88--ms9pq-eth0" Mar 6 01:45:08.764265 containerd[1589]: 2026-03-06 01:45:08.736 [INFO][3757] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9364b98565bf0017acd01461b1b6924ab89e2554f4d487dc2f141f617c9afb0e" HandleID="k8s-pod-network.9364b98565bf0017acd01461b1b6924ab89e2554f4d487dc2f141f617c9afb0e" Workload="localhost-k8s-goldmane--5b85766d88--ms9pq-eth0" Mar 6 01:45:08.764265 containerd[1589]: 2026-03-06 01:45:08.739 [INFO][3757] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:45:08.764265 containerd[1589]: 2026-03-06 01:45:08.751 [INFO][3684] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="9364b98565bf0017acd01461b1b6924ab89e2554f4d487dc2f141f617c9afb0e" Mar 6 01:45:08.770308 containerd[1589]: time="2026-03-06T01:45:08.770261705Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-ms9pq,Uid:03806442-2813-4de9-9d55-5690b43be899,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9364b98565bf0017acd01461b1b6924ab89e2554f4d487dc2f141f617c9afb0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:45:08.771644 kubelet[2674]: E0306 01:45:08.770997 2674 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9364b98565bf0017acd01461b1b6924ab89e2554f4d487dc2f141f617c9afb0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 01:45:08.771644 kubelet[2674]: E0306 01:45:08.771074 2674 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9364b98565bf0017acd01461b1b6924ab89e2554f4d487dc2f141f617c9afb0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-ms9pq" Mar 6 01:45:08.771644 kubelet[2674]: E0306 01:45:08.771109 2674 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9364b98565bf0017acd01461b1b6924ab89e2554f4d487dc2f141f617c9afb0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-ms9pq" Mar 6 01:45:08.771824 kubelet[2674]: E0306 01:45:08.771164 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-ms9pq_calico-system(03806442-2813-4de9-9d55-5690b43be899)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-ms9pq_calico-system(03806442-2813-4de9-9d55-5690b43be899)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9364b98565bf0017acd01461b1b6924ab89e2554f4d487dc2f141f617c9afb0e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-ms9pq" podUID="03806442-2813-4de9-9d55-5690b43be899" Mar 6 01:45:08.787848 containerd[1589]: time="2026-03-06T01:45:08.787711272Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:45:08.787848 containerd[1589]: time="2026-03-06T01:45:08.787777945Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:45:08.787848 containerd[1589]: time="2026-03-06T01:45:08.787828657Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:45:08.790069 containerd[1589]: time="2026-03-06T01:45:08.787945312Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:45:08.803281 systemd-networkd[1250]: caliabee16c2d63: Link UP Mar 6 01:45:08.806526 systemd-networkd[1250]: caliabee16c2d63: Gained carrier Mar 6 01:45:08.825437 containerd[1589]: time="2026-03-06T01:45:08.825197395Z" level=info msg="CreateContainer within sandbox \"304905779ed2b2e50cb6933262167b4e7dc1427cfbf86b9367374be1d14ce0f4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f3071383b183871110e341d6afa857441137d1aee11bc2243e0d34ed8ac5e0e9\"" Mar 6 01:45:08.827356 containerd[1589]: time="2026-03-06T01:45:08.827295634Z" level=info msg="StartContainer for \"f3071383b183871110e341d6afa857441137d1aee11bc2243e0d34ed8ac5e0e9\"" Mar 6 01:45:08.829590 containerd[1589]: 2026-03-06 01:45:08.284 [ERROR][3765] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 6 01:45:08.829590 containerd[1589]: 2026-03-06 01:45:08.301 [INFO][3765] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--rb5xg-eth0 csi-node-driver- calico-system 71a31f04-c706-4601-874f-daa9f7b58ab6 765 0 2026-03-06 01:44:56 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-rb5xg eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliabee16c2d63 [] [] }} ContainerID="018d17924c11219bae1b3d8447dae3af7fed408777ab9b36bed992dfd49599fc" Namespace="calico-system" Pod="csi-node-driver-rb5xg" WorkloadEndpoint="localhost-k8s-csi--node--driver--rb5xg-" Mar 6 01:45:08.829590 containerd[1589]: 2026-03-06 01:45:08.302 [INFO][3765] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="018d17924c11219bae1b3d8447dae3af7fed408777ab9b36bed992dfd49599fc" Namespace="calico-system" Pod="csi-node-driver-rb5xg" WorkloadEndpoint="localhost-k8s-csi--node--driver--rb5xg-eth0" Mar 6 01:45:08.829590 containerd[1589]: 2026-03-06 01:45:08.353 [INFO][3781] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="018d17924c11219bae1b3d8447dae3af7fed408777ab9b36bed992dfd49599fc" HandleID="k8s-pod-network.018d17924c11219bae1b3d8447dae3af7fed408777ab9b36bed992dfd49599fc" Workload="localhost-k8s-csi--node--driver--rb5xg-eth0" Mar 6 01:45:08.829590 containerd[1589]: 2026-03-06 01:45:08.364 [INFO][3781] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="018d17924c11219bae1b3d8447dae3af7fed408777ab9b36bed992dfd49599fc" HandleID="k8s-pod-network.018d17924c11219bae1b3d8447dae3af7fed408777ab9b36bed992dfd49599fc" Workload="localhost-k8s-csi--node--driver--rb5xg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000366150), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-rb5xg", "timestamp":"2026-03-06 01:45:08.353121642 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000fe840)} Mar 6 01:45:08.829590 containerd[1589]: 2026-03-06 01:45:08.364 [INFO][3781] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:45:08.829590 containerd[1589]: 2026-03-06 01:45:08.739 [INFO][3781] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:45:08.829590 containerd[1589]: 2026-03-06 01:45:08.739 [INFO][3781] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 6 01:45:08.829590 containerd[1589]: 2026-03-06 01:45:08.743 [INFO][3781] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.018d17924c11219bae1b3d8447dae3af7fed408777ab9b36bed992dfd49599fc" host="localhost" Mar 6 01:45:08.829590 containerd[1589]: 2026-03-06 01:45:08.751 [INFO][3781] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 6 01:45:08.829590 containerd[1589]: 2026-03-06 01:45:08.758 [INFO][3781] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 6 01:45:08.829590 containerd[1589]: 2026-03-06 01:45:08.761 [INFO][3781] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 6 01:45:08.829590 containerd[1589]: 2026-03-06 01:45:08.767 [INFO][3781] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 6 01:45:08.829590 containerd[1589]: 2026-03-06 01:45:08.767 [INFO][3781] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.018d17924c11219bae1b3d8447dae3af7fed408777ab9b36bed992dfd49599fc" host="localhost" Mar 6 01:45:08.829590 containerd[1589]: 2026-03-06 01:45:08.770 [INFO][3781] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.018d17924c11219bae1b3d8447dae3af7fed408777ab9b36bed992dfd49599fc Mar 6 01:45:08.829590 containerd[1589]: 2026-03-06 01:45:08.780 [INFO][3781] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.018d17924c11219bae1b3d8447dae3af7fed408777ab9b36bed992dfd49599fc" host="localhost" Mar 6 01:45:08.829590 containerd[1589]: 2026-03-06 01:45:08.790 [INFO][3781] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.018d17924c11219bae1b3d8447dae3af7fed408777ab9b36bed992dfd49599fc" host="localhost" Mar 6 01:45:08.829590 containerd[1589]: 2026-03-06 01:45:08.790 [INFO][3781] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.018d17924c11219bae1b3d8447dae3af7fed408777ab9b36bed992dfd49599fc" host="localhost" Mar 6 01:45:08.829590 containerd[1589]: 2026-03-06 01:45:08.790 [INFO][3781] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
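
The two RunPodSandbox failures recorded above (for coredns-674b8bbfcf-t7pfc and goldmane-5b85766d88-ms9pq) trip over the same precondition: the Calico CNI plugin stats /var/lib/calico/nodename and aborts the ADD when the file is missing, pointing out that the calico/node container is either not running yet or has not mounted /var/lib/calico/. The path and error text below are copied from the log; the surrounding program is only an illustrative sketch of that readiness check, not the plugin's own code:

package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

func main() {
	// File the plugin stats before doing any network setup (path from the log).
	const nodenameFile = "/var/lib/calico/nodename"

	if _, err := os.Stat(nodenameFile); errors.Is(err, fs.ErrNotExist) {
		// Same guidance the failure message in the log gives.
		fmt.Printf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\n", nodenameFile)
		return
	}
	fmt.Println("nodename file present; the CNI ADD precondition is satisfied")
}

The kubelet reports both pods with CreatePodSandboxError and retries them about a second later (the RunPodSandbox entries at 01:45:09.488 further down), which is where the coredns-674b8bbfcf-t7pfc ADD that follows picks up.
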
Mar 6 01:45:08.829590 containerd[1589]: 2026-03-06 01:45:08.790 [INFO][3781] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="018d17924c11219bae1b3d8447dae3af7fed408777ab9b36bed992dfd49599fc" HandleID="k8s-pod-network.018d17924c11219bae1b3d8447dae3af7fed408777ab9b36bed992dfd49599fc" Workload="localhost-k8s-csi--node--driver--rb5xg-eth0" Mar 6 01:45:08.830163 containerd[1589]: 2026-03-06 01:45:08.795 [INFO][3765] cni-plugin/k8s.go 418: Populated endpoint ContainerID="018d17924c11219bae1b3d8447dae3af7fed408777ab9b36bed992dfd49599fc" Namespace="calico-system" Pod="csi-node-driver-rb5xg" WorkloadEndpoint="localhost-k8s-csi--node--driver--rb5xg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rb5xg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"71a31f04-c706-4601-874f-daa9f7b58ab6", ResourceVersion:"765", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 44, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-rb5xg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliabee16c2d63", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:45:08.830163 containerd[1589]: 2026-03-06 01:45:08.796 [INFO][3765] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="018d17924c11219bae1b3d8447dae3af7fed408777ab9b36bed992dfd49599fc" Namespace="calico-system" Pod="csi-node-driver-rb5xg" WorkloadEndpoint="localhost-k8s-csi--node--driver--rb5xg-eth0" Mar 6 01:45:08.830163 containerd[1589]: 2026-03-06 01:45:08.796 [INFO][3765] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliabee16c2d63 ContainerID="018d17924c11219bae1b3d8447dae3af7fed408777ab9b36bed992dfd49599fc" Namespace="calico-system" Pod="csi-node-driver-rb5xg" WorkloadEndpoint="localhost-k8s-csi--node--driver--rb5xg-eth0" Mar 6 01:45:08.830163 containerd[1589]: 2026-03-06 01:45:08.807 [INFO][3765] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="018d17924c11219bae1b3d8447dae3af7fed408777ab9b36bed992dfd49599fc" Namespace="calico-system" Pod="csi-node-driver-rb5xg" WorkloadEndpoint="localhost-k8s-csi--node--driver--rb5xg-eth0" Mar 6 01:45:08.830163 containerd[1589]: 2026-03-06 01:45:08.811 [INFO][3765] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="018d17924c11219bae1b3d8447dae3af7fed408777ab9b36bed992dfd49599fc" Namespace="calico-system" Pod="csi-node-driver-rb5xg" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--rb5xg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rb5xg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"71a31f04-c706-4601-874f-daa9f7b58ab6", ResourceVersion:"765", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 44, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"018d17924c11219bae1b3d8447dae3af7fed408777ab9b36bed992dfd49599fc", Pod:"csi-node-driver-rb5xg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliabee16c2d63", MAC:"12:b1:4e:9d:a7:fe", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:45:08.830163 containerd[1589]: 2026-03-06 01:45:08.823 [INFO][3765] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="018d17924c11219bae1b3d8447dae3af7fed408777ab9b36bed992dfd49599fc" Namespace="calico-system" Pod="csi-node-driver-rb5xg" WorkloadEndpoint="localhost-k8s-csi--node--driver--rb5xg-eth0" Mar 6 01:45:08.847201 systemd-resolved[1472]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 6 01:45:08.859780 kubelet[2674]: I0306 01:45:08.859735 2674 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 6 01:45:08.861307 kubelet[2674]: E0306 01:45:08.860614 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:45:08.875819 containerd[1589]: time="2026-03-06T01:45:08.875731760Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:45:08.876703 containerd[1589]: time="2026-03-06T01:45:08.876520448Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:45:08.877024 containerd[1589]: time="2026-03-06T01:45:08.876835277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:45:08.877024 containerd[1589]: time="2026-03-06T01:45:08.876969394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:45:08.923392 containerd[1589]: time="2026-03-06T01:45:08.923350938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b5896ddfd-x6rlz,Uid:72733de0-5a44-4e12-b2b1-5c75c2685fea,Namespace:calico-system,Attempt:0,} returns sandbox id \"a7025d1b75745a47da65782711563154cfac7a90f668ac4956dd2c60e514c173\"" Mar 6 01:45:08.938864 systemd-resolved[1472]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 6 01:45:08.967048 containerd[1589]: time="2026-03-06T01:45:08.966980161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rb5xg,Uid:71a31f04-c706-4601-874f-daa9f7b58ab6,Namespace:calico-system,Attempt:0,} returns sandbox id \"018d17924c11219bae1b3d8447dae3af7fed408777ab9b36bed992dfd49599fc\"" Mar 6 01:45:08.973391 containerd[1589]: time="2026-03-06T01:45:08.973308067Z" level=info msg="StartContainer for \"f3071383b183871110e341d6afa857441137d1aee11bc2243e0d34ed8ac5e0e9\" returns successfully" Mar 6 01:45:09.205203 containerd[1589]: time="2026-03-06T01:45:09.205062965Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:45:09.206298 containerd[1589]: time="2026-03-06T01:45:09.206230732Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Mar 6 01:45:09.207525 containerd[1589]: time="2026-03-06T01:45:09.207486742Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:45:09.210356 containerd[1589]: time="2026-03-06T01:45:09.210276862Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:45:09.211120 containerd[1589]: time="2026-03-06T01:45:09.211042030Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 665.025504ms" Mar 6 01:45:09.211120 containerd[1589]: time="2026-03-06T01:45:09.211084939Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Mar 6 01:45:09.212154 containerd[1589]: time="2026-03-06T01:45:09.212069983Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 6 01:45:09.215476 containerd[1589]: time="2026-03-06T01:45:09.215364641Z" level=info msg="CreateContainer within sandbox \"93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 6 01:45:09.237244 containerd[1589]: time="2026-03-06T01:45:09.237130852Z" level=info msg="CreateContainer within sandbox \"93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"7d6d66d34f83360c36d97d33f78c15fdcc9b0ae18e45913dca80a5a4fe71721e\"" Mar 6 01:45:09.237871 containerd[1589]: time="2026-03-06T01:45:09.237817139Z" level=info 
msg="StartContainer for \"7d6d66d34f83360c36d97d33f78c15fdcc9b0ae18e45913dca80a5a4fe71721e\"" Mar 6 01:45:09.365783 containerd[1589]: time="2026-03-06T01:45:09.365748796Z" level=info msg="StartContainer for \"7d6d66d34f83360c36d97d33f78c15fdcc9b0ae18e45913dca80a5a4fe71721e\" returns successfully" Mar 6 01:45:09.486096 kubelet[2674]: E0306 01:45:09.485964 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:45:09.488352 kubelet[2674]: I0306 01:45:09.487036 2674 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 6 01:45:09.488352 kubelet[2674]: E0306 01:45:09.487217 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:45:09.488352 kubelet[2674]: E0306 01:45:09.487623 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:45:09.488614 containerd[1589]: time="2026-03-06T01:45:09.488029310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-ms9pq,Uid:03806442-2813-4de9-9d55-5690b43be899,Namespace:calico-system,Attempt:0,}" Mar 6 01:45:09.488614 containerd[1589]: time="2026-03-06T01:45:09.488080567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-t7pfc,Uid:9d37ef38-21de-4e05-9c50-3273af0abb2b,Namespace:kube-system,Attempt:0,}" Mar 6 01:45:09.517551 kubelet[2674]: I0306 01:45:09.515003 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-8tj28" podStartSLOduration=25.514988312 podStartE2EDuration="25.514988312s" podCreationTimestamp="2026-03-06 01:44:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 01:45:09.510624617 +0000 UTC m=+28.577352754" watchObservedRunningTime="2026-03-06 01:45:09.514988312 +0000 UTC m=+28.581716449" Mar 6 01:45:09.516959 systemd[1]: run-netns-cni\x2d54ee95aa\x2d8b27\x2d35f6\x2d5e1a\x2df472b8992d3e.mount: Deactivated successfully. Mar 6 01:45:09.518080 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9364b98565bf0017acd01461b1b6924ab89e2554f4d487dc2f141f617c9afb0e-shm.mount: Deactivated successfully. Mar 6 01:45:09.518511 systemd[1]: run-netns-cni\x2d30ad771d\x2d782f\x2d8960\x2d7c07\x2db0300689c8cb.mount: Deactivated successfully. Mar 6 01:45:09.518897 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d34e36d3e0ed8d7416605f48de068b608d798573c4d3779dbe2dbc80660483c1-shm.mount: Deactivated successfully. 
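
The kubelet's pod_startup_latency_tracker entry above reports podStartSLOduration=25.514988312s for coredns-674b8bbfcf-8tj28, which is simply the gap between podCreationTimestamp 2026-03-06 01:44:44 +0000 UTC and watchObservedRunningTime 2026-03-06 01:45:09.514988312 +0000 UTC (both image-pull timestamps are the zero value here, so no pull time appears to be deducted). A small Go sketch reproducing that arithmetic from the two timestamps as they appear in the log:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matches Go's default time.Time formatting used in the kubelet entry.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	created, err := time.Parse(layout, "2026-03-06 01:44:44 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2026-03-06 01:45:09.514988312 +0000 UTC")
	if err != nil {
		panic(err)
	}

	// Matches podStartSLOduration=25.514988312 reported above.
	fmt.Println(running.Sub(created).Seconds())
}
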
Mar 6 01:45:09.747154 systemd-networkd[1250]: calia5a21eb9a4d: Link UP Mar 6 01:45:09.747579 systemd-networkd[1250]: calia5a21eb9a4d: Gained carrier Mar 6 01:45:09.773196 containerd[1589]: 2026-03-06 01:45:09.619 [ERROR][4178] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 6 01:45:09.773196 containerd[1589]: 2026-03-06 01:45:09.634 [INFO][4178] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--t7pfc-eth0 coredns-674b8bbfcf- kube-system 9d37ef38-21de-4e05-9c50-3273af0abb2b 916 0 2026-03-06 01:44:44 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-t7pfc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia5a21eb9a4d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="fc0664546fd219f82ce9c976367c9f38eaa5523f4e25067373238556d95d14c7" Namespace="kube-system" Pod="coredns-674b8bbfcf-t7pfc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--t7pfc-" Mar 6 01:45:09.773196 containerd[1589]: 2026-03-06 01:45:09.634 [INFO][4178] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fc0664546fd219f82ce9c976367c9f38eaa5523f4e25067373238556d95d14c7" Namespace="kube-system" Pod="coredns-674b8bbfcf-t7pfc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--t7pfc-eth0" Mar 6 01:45:09.773196 containerd[1589]: 2026-03-06 01:45:09.685 [INFO][4208] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fc0664546fd219f82ce9c976367c9f38eaa5523f4e25067373238556d95d14c7" HandleID="k8s-pod-network.fc0664546fd219f82ce9c976367c9f38eaa5523f4e25067373238556d95d14c7" Workload="localhost-k8s-coredns--674b8bbfcf--t7pfc-eth0" Mar 6 01:45:09.773196 containerd[1589]: 2026-03-06 01:45:09.699 [INFO][4208] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="fc0664546fd219f82ce9c976367c9f38eaa5523f4e25067373238556d95d14c7" HandleID="k8s-pod-network.fc0664546fd219f82ce9c976367c9f38eaa5523f4e25067373238556d95d14c7" Workload="localhost-k8s-coredns--674b8bbfcf--t7pfc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004f05f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-t7pfc", "timestamp":"2026-03-06 01:45:09.685695146 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00069a2c0)} Mar 6 01:45:09.773196 containerd[1589]: 2026-03-06 01:45:09.699 [INFO][4208] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:45:09.773196 containerd[1589]: 2026-03-06 01:45:09.699 [INFO][4208] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 6 01:45:09.773196 containerd[1589]: 2026-03-06 01:45:09.699 [INFO][4208] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 6 01:45:09.773196 containerd[1589]: 2026-03-06 01:45:09.705 [INFO][4208] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.fc0664546fd219f82ce9c976367c9f38eaa5523f4e25067373238556d95d14c7" host="localhost" Mar 6 01:45:09.773196 containerd[1589]: 2026-03-06 01:45:09.710 [INFO][4208] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 6 01:45:09.773196 containerd[1589]: 2026-03-06 01:45:09.715 [INFO][4208] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 6 01:45:09.773196 containerd[1589]: 2026-03-06 01:45:09.717 [INFO][4208] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 6 01:45:09.773196 containerd[1589]: 2026-03-06 01:45:09.723 [INFO][4208] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 6 01:45:09.773196 containerd[1589]: 2026-03-06 01:45:09.723 [INFO][4208] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fc0664546fd219f82ce9c976367c9f38eaa5523f4e25067373238556d95d14c7" host="localhost" Mar 6 01:45:09.773196 containerd[1589]: 2026-03-06 01:45:09.725 [INFO][4208] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.fc0664546fd219f82ce9c976367c9f38eaa5523f4e25067373238556d95d14c7 Mar 6 01:45:09.773196 containerd[1589]: 2026-03-06 01:45:09.735 [INFO][4208] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fc0664546fd219f82ce9c976367c9f38eaa5523f4e25067373238556d95d14c7" host="localhost" Mar 6 01:45:09.773196 containerd[1589]: 2026-03-06 01:45:09.741 [INFO][4208] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.fc0664546fd219f82ce9c976367c9f38eaa5523f4e25067373238556d95d14c7" host="localhost" Mar 6 01:45:09.773196 containerd[1589]: 2026-03-06 01:45:09.741 [INFO][4208] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.fc0664546fd219f82ce9c976367c9f38eaa5523f4e25067373238556d95d14c7" host="localhost" Mar 6 01:45:09.773196 containerd[1589]: 2026-03-06 01:45:09.741 [INFO][4208] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
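
The WorkloadEndpoint dump that follows for coredns-674b8bbfcf-t7pfc, like the earlier one for coredns-674b8bbfcf-8tj28, prints the endpoint's container ports as Go hex literals. Decoding them simply recovers the port numbers already listed by name in the plugin's own summary ({dns UDP 53}, {dns-tcp TCP 53}, {metrics TCP 9153}):

package main

import "fmt"

func main() {
	// Port values exactly as printed in the WorkloadEndpoint dumps (Port:0x35, Port:0x23c1).
	fmt.Println(0x35)   // 53   -> "dns" (UDP) and "dns-tcp" (TCP)
	fmt.Println(0x23c1) // 9153 -> "metrics" (TCP)
}
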
Mar 6 01:45:09.773196 containerd[1589]: 2026-03-06 01:45:09.741 [INFO][4208] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="fc0664546fd219f82ce9c976367c9f38eaa5523f4e25067373238556d95d14c7" HandleID="k8s-pod-network.fc0664546fd219f82ce9c976367c9f38eaa5523f4e25067373238556d95d14c7" Workload="localhost-k8s-coredns--674b8bbfcf--t7pfc-eth0" Mar 6 01:45:09.774237 containerd[1589]: 2026-03-06 01:45:09.743 [INFO][4178] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fc0664546fd219f82ce9c976367c9f38eaa5523f4e25067373238556d95d14c7" Namespace="kube-system" Pod="coredns-674b8bbfcf-t7pfc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--t7pfc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--t7pfc-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9d37ef38-21de-4e05-9c50-3273af0abb2b", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 44, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-t7pfc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia5a21eb9a4d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:45:09.774237 containerd[1589]: 2026-03-06 01:45:09.744 [INFO][4178] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="fc0664546fd219f82ce9c976367c9f38eaa5523f4e25067373238556d95d14c7" Namespace="kube-system" Pod="coredns-674b8bbfcf-t7pfc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--t7pfc-eth0" Mar 6 01:45:09.774237 containerd[1589]: 2026-03-06 01:45:09.744 [INFO][4178] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia5a21eb9a4d ContainerID="fc0664546fd219f82ce9c976367c9f38eaa5523f4e25067373238556d95d14c7" Namespace="kube-system" Pod="coredns-674b8bbfcf-t7pfc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--t7pfc-eth0" Mar 6 01:45:09.774237 containerd[1589]: 2026-03-06 01:45:09.747 [INFO][4178] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fc0664546fd219f82ce9c976367c9f38eaa5523f4e25067373238556d95d14c7" Namespace="kube-system" Pod="coredns-674b8bbfcf-t7pfc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--t7pfc-eth0" Mar 6 01:45:09.774237 
containerd[1589]: 2026-03-06 01:45:09.748 [INFO][4178] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fc0664546fd219f82ce9c976367c9f38eaa5523f4e25067373238556d95d14c7" Namespace="kube-system" Pod="coredns-674b8bbfcf-t7pfc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--t7pfc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--t7pfc-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9d37ef38-21de-4e05-9c50-3273af0abb2b", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 44, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fc0664546fd219f82ce9c976367c9f38eaa5523f4e25067373238556d95d14c7", Pod:"coredns-674b8bbfcf-t7pfc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia5a21eb9a4d", MAC:"2e:3a:fa:44:50:93", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:45:09.774237 containerd[1589]: 2026-03-06 01:45:09.768 [INFO][4178] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fc0664546fd219f82ce9c976367c9f38eaa5523f4e25067373238556d95d14c7" Namespace="kube-system" Pod="coredns-674b8bbfcf-t7pfc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--t7pfc-eth0" Mar 6 01:45:09.800807 containerd[1589]: time="2026-03-06T01:45:09.799496133Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:45:09.800807 containerd[1589]: time="2026-03-06T01:45:09.800366764Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:45:09.800807 containerd[1589]: time="2026-03-06T01:45:09.800413590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:45:09.802367 containerd[1589]: time="2026-03-06T01:45:09.800760939Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:45:09.843818 systemd-resolved[1472]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 6 01:45:09.863605 systemd-networkd[1250]: caliab211546836: Link UP Mar 6 01:45:09.865994 systemd-networkd[1250]: caliab211546836: Gained carrier Mar 6 01:45:09.884620 containerd[1589]: time="2026-03-06T01:45:09.884326040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-t7pfc,Uid:9d37ef38-21de-4e05-9c50-3273af0abb2b,Namespace:kube-system,Attempt:0,} returns sandbox id \"fc0664546fd219f82ce9c976367c9f38eaa5523f4e25067373238556d95d14c7\"" Mar 6 01:45:09.886213 kubelet[2674]: E0306 01:45:09.886183 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:45:09.896307 containerd[1589]: 2026-03-06 01:45:09.635 [ERROR][4189] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 6 01:45:09.896307 containerd[1589]: 2026-03-06 01:45:09.649 [INFO][4189] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--5b85766d88--ms9pq-eth0 goldmane-5b85766d88- calico-system 03806442-2813-4de9-9d55-5690b43be899 917 0 2026-03-06 01:44:55 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-5b85766d88-ms9pq eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] caliab211546836 [] [] }} ContainerID="6859169ccd0b656113ba49a72a364fba3b07831c20027fdd91025f2c74017c5b" Namespace="calico-system" Pod="goldmane-5b85766d88-ms9pq" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--ms9pq-" Mar 6 01:45:09.896307 containerd[1589]: 2026-03-06 01:45:09.649 [INFO][4189] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6859169ccd0b656113ba49a72a364fba3b07831c20027fdd91025f2c74017c5b" Namespace="calico-system" Pod="goldmane-5b85766d88-ms9pq" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--ms9pq-eth0" Mar 6 01:45:09.896307 containerd[1589]: 2026-03-06 01:45:09.699 [INFO][4214] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6859169ccd0b656113ba49a72a364fba3b07831c20027fdd91025f2c74017c5b" HandleID="k8s-pod-network.6859169ccd0b656113ba49a72a364fba3b07831c20027fdd91025f2c74017c5b" Workload="localhost-k8s-goldmane--5b85766d88--ms9pq-eth0" Mar 6 01:45:09.896307 containerd[1589]: 2026-03-06 01:45:09.706 [INFO][4214] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="6859169ccd0b656113ba49a72a364fba3b07831c20027fdd91025f2c74017c5b" HandleID="k8s-pod-network.6859169ccd0b656113ba49a72a364fba3b07831c20027fdd91025f2c74017c5b" Workload="localhost-k8s-goldmane--5b85766d88--ms9pq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fc810), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-5b85766d88-ms9pq", "timestamp":"2026-03-06 01:45:09.699425349 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00027a000)} Mar 6 01:45:09.896307 containerd[1589]: 2026-03-06 01:45:09.706 [INFO][4214] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:45:09.896307 containerd[1589]: 2026-03-06 01:45:09.741 [INFO][4214] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:45:09.896307 containerd[1589]: 2026-03-06 01:45:09.741 [INFO][4214] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 6 01:45:09.896307 containerd[1589]: 2026-03-06 01:45:09.806 [INFO][4214] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.6859169ccd0b656113ba49a72a364fba3b07831c20027fdd91025f2c74017c5b" host="localhost" Mar 6 01:45:09.896307 containerd[1589]: 2026-03-06 01:45:09.814 [INFO][4214] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 6 01:45:09.896307 containerd[1589]: 2026-03-06 01:45:09.820 [INFO][4214] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 6 01:45:09.896307 containerd[1589]: 2026-03-06 01:45:09.824 [INFO][4214] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 6 01:45:09.896307 containerd[1589]: 2026-03-06 01:45:09.828 [INFO][4214] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 6 01:45:09.896307 containerd[1589]: 2026-03-06 01:45:09.828 [INFO][4214] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6859169ccd0b656113ba49a72a364fba3b07831c20027fdd91025f2c74017c5b" host="localhost" Mar 6 01:45:09.896307 containerd[1589]: 2026-03-06 01:45:09.832 [INFO][4214] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.6859169ccd0b656113ba49a72a364fba3b07831c20027fdd91025f2c74017c5b Mar 6 01:45:09.896307 containerd[1589]: 2026-03-06 01:45:09.839 [INFO][4214] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6859169ccd0b656113ba49a72a364fba3b07831c20027fdd91025f2c74017c5b" host="localhost" Mar 6 01:45:09.896307 containerd[1589]: 2026-03-06 01:45:09.852 [INFO][4214] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.6859169ccd0b656113ba49a72a364fba3b07831c20027fdd91025f2c74017c5b" host="localhost" Mar 6 01:45:09.896307 containerd[1589]: 2026-03-06 01:45:09.852 [INFO][4214] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.6859169ccd0b656113ba49a72a364fba3b07831c20027fdd91025f2c74017c5b" host="localhost" Mar 6 01:45:09.896307 containerd[1589]: 2026-03-06 01:45:09.852 [INFO][4214] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
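The CNI ADD traces above all walk the same IPAM sequence: acquire the host-wide IPAM lock, confirm the node's affinity for the 192.168.88.128/26 block, claim the next free address under a per-container handle, write the block back, then release the lock. Below is a minimal Go sketch of that allocation pattern; it uses only the standard library and is not Calico's implementation (the block CIDR and handle string are copied from the log purely for illustration).

package main

import (
	"errors"
	"fmt"
	"net/netip"
	"sync"
)

// blockAllocator mimics the sequence in the ipam log lines above:
// lock -> load block -> claim next free IP -> record handle -> unlock.
type blockAllocator struct {
	mu       sync.Mutex            // stands in for the host-wide IPAM lock
	block    netip.Prefix          // the affine block, e.g. 192.168.88.128/26
	byHandle map[string]netip.Addr // handle -> claimed address
	next     netip.Addr            // next candidate address
}

func newBlockAllocator(cidr string) (*blockAllocator, error) {
	p, err := netip.ParsePrefix(cidr)
	if err != nil {
		return nil, err
	}
	return &blockAllocator{block: p, byHandle: map[string]netip.Addr{}, next: p.Addr()}, nil
}

// Assign claims the next free address in the block for the given handle.
func (a *blockAllocator) Assign(handle string) (netip.Addr, error) {
	a.mu.Lock()
	defer a.mu.Unlock()
	for addr := a.next; a.block.Contains(addr); addr = addr.Next() {
		if a.claimed(addr) {
			continue
		}
		a.byHandle[handle] = addr
		a.next = addr.Next()
		return addr, nil
	}
	return netip.Addr{}, errors.New("block exhausted")
}

func (a *blockAllocator) claimed(addr netip.Addr) bool {
	for _, v := range a.byHandle {
		if v == addr {
			return true
		}
	}
	return false
}

func main() {
	alloc, _ := newBlockAllocator("192.168.88.128/26")
	// Handle string copied from the coredns trace above. In this toy model the
	// first claim returns 192.168.88.128; the real node had already handed out
	// .128-.134, which is why the log shows .135 being assigned.
	ip, _ := alloc.Assign("k8s-pod-network.fc0664546fd219f82ce9c976367c9f38eaa5523f4e25067373238556d95d14c7")
	fmt.Println("assigned", ip)
}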
Mar 6 01:45:09.896307 containerd[1589]: 2026-03-06 01:45:09.852 [INFO][4214] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="6859169ccd0b656113ba49a72a364fba3b07831c20027fdd91025f2c74017c5b" HandleID="k8s-pod-network.6859169ccd0b656113ba49a72a364fba3b07831c20027fdd91025f2c74017c5b" Workload="localhost-k8s-goldmane--5b85766d88--ms9pq-eth0" Mar 6 01:45:09.897037 containerd[1589]: 2026-03-06 01:45:09.858 [INFO][4189] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6859169ccd0b656113ba49a72a364fba3b07831c20027fdd91025f2c74017c5b" Namespace="calico-system" Pod="goldmane-5b85766d88-ms9pq" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--ms9pq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--ms9pq-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"03806442-2813-4de9-9d55-5690b43be899", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 44, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-5b85766d88-ms9pq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliab211546836", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:45:09.897037 containerd[1589]: 2026-03-06 01:45:09.858 [INFO][4189] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="6859169ccd0b656113ba49a72a364fba3b07831c20027fdd91025f2c74017c5b" Namespace="calico-system" Pod="goldmane-5b85766d88-ms9pq" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--ms9pq-eth0" Mar 6 01:45:09.897037 containerd[1589]: 2026-03-06 01:45:09.858 [INFO][4189] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliab211546836 ContainerID="6859169ccd0b656113ba49a72a364fba3b07831c20027fdd91025f2c74017c5b" Namespace="calico-system" Pod="goldmane-5b85766d88-ms9pq" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--ms9pq-eth0" Mar 6 01:45:09.897037 containerd[1589]: 2026-03-06 01:45:09.867 [INFO][4189] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6859169ccd0b656113ba49a72a364fba3b07831c20027fdd91025f2c74017c5b" Namespace="calico-system" Pod="goldmane-5b85766d88-ms9pq" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--ms9pq-eth0" Mar 6 01:45:09.897037 containerd[1589]: 2026-03-06 01:45:09.868 [INFO][4189] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6859169ccd0b656113ba49a72a364fba3b07831c20027fdd91025f2c74017c5b" Namespace="calico-system" Pod="goldmane-5b85766d88-ms9pq" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--ms9pq-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--ms9pq-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"03806442-2813-4de9-9d55-5690b43be899", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 44, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6859169ccd0b656113ba49a72a364fba3b07831c20027fdd91025f2c74017c5b", Pod:"goldmane-5b85766d88-ms9pq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliab211546836", MAC:"42:e4:3d:79:dd:ee", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:45:09.897037 containerd[1589]: 2026-03-06 01:45:09.885 [INFO][4189] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6859169ccd0b656113ba49a72a364fba3b07831c20027fdd91025f2c74017c5b" Namespace="calico-system" Pod="goldmane-5b85766d88-ms9pq" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--ms9pq-eth0" Mar 6 01:45:09.898642 containerd[1589]: time="2026-03-06T01:45:09.898395920Z" level=info msg="CreateContainer within sandbox \"fc0664546fd219f82ce9c976367c9f38eaa5523f4e25067373238556d95d14c7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 6 01:45:09.925839 containerd[1589]: time="2026-03-06T01:45:09.925570587Z" level=info msg="CreateContainer within sandbox \"fc0664546fd219f82ce9c976367c9f38eaa5523f4e25067373238556d95d14c7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"85504e25854255f16fe838dbe193c8dbc8c5bbb7dcb675b8932e8b7e6795cf5c\"" Mar 6 01:45:09.928392 containerd[1589]: time="2026-03-06T01:45:09.927294547Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:45:09.928392 containerd[1589]: time="2026-03-06T01:45:09.927363895Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:45:09.928392 containerd[1589]: time="2026-03-06T01:45:09.927403137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:45:09.928392 containerd[1589]: time="2026-03-06T01:45:09.927688242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:45:09.930404 containerd[1589]: time="2026-03-06T01:45:09.928872028Z" level=info msg="StartContainer for \"85504e25854255f16fe838dbe193c8dbc8c5bbb7dcb675b8932e8b7e6795cf5c\"" Mar 6 01:45:09.972819 systemd-resolved[1472]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 6 01:45:10.032544 containerd[1589]: time="2026-03-06T01:45:10.032249601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-ms9pq,Uid:03806442-2813-4de9-9d55-5690b43be899,Namespace:calico-system,Attempt:0,} returns sandbox id \"6859169ccd0b656113ba49a72a364fba3b07831c20027fdd91025f2c74017c5b\"" Mar 6 01:45:10.091784 containerd[1589]: time="2026-03-06T01:45:10.091690376Z" level=info msg="StartContainer for \"85504e25854255f16fe838dbe193c8dbc8c5bbb7dcb675b8932e8b7e6795cf5c\" returns successfully" Mar 6 01:45:10.184822 systemd-networkd[1250]: cali7d613bb7238: Gained IPv6LL Mar 6 01:45:10.376962 systemd-networkd[1250]: calia2aed348f5f: Gained IPv6LL Mar 6 01:45:10.378172 systemd-networkd[1250]: cali570dc42febe: Gained IPv6LL Mar 6 01:45:10.441342 systemd-networkd[1250]: cali7cfeb59f89a: Gained IPv6LL Mar 6 01:45:10.496848 kubelet[2674]: E0306 01:45:10.495991 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:45:10.498625 kubelet[2674]: E0306 01:45:10.498522 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:45:10.505341 systemd-networkd[1250]: caliac8afc0c3a0: Gained IPv6LL Mar 6 01:45:10.524952 kubelet[2674]: I0306 01:45:10.523032 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-t7pfc" podStartSLOduration=26.523010286999998 podStartE2EDuration="26.523010287s" podCreationTimestamp="2026-03-06 01:44:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 01:45:10.51896868 +0000 UTC m=+29.585696837" watchObservedRunningTime="2026-03-06 01:45:10.523010287 +0000 UTC m=+29.589738444" Mar 6 01:45:10.865518 kernel: calico-node[4408]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 6 01:45:10.889639 systemd-networkd[1250]: caliabee16c2d63: Gained IPv6LL Mar 6 01:45:11.208687 systemd-networkd[1250]: calia5a21eb9a4d: Gained IPv6LL Mar 6 01:45:11.336744 systemd-resolved[1472]: Under memory pressure, flushing caches. Mar 6 01:45:11.340808 systemd-journald[1171]: Under memory pressure, flushing caches. Mar 6 01:45:11.336791 systemd-resolved[1472]: Flushed all caches. 
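The pod_startup_latency_tracker line above reports podStartE2EDuration for coredns-674b8bbfcf-t7pfc as the gap between podCreationTimestamp (01:44:44) and the time the kubelet watched the pod running (01:45:10.523010287); with both pulling timestamps at their zero value, podStartSLOduration comes out essentially the same. A small Go check of that arithmetic, using the timestamps exactly as printed in the log (the layout string is Go's default time.Time format, which these fields appear to use, minus the monotonic-clock suffix):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Go's default time.Time string format, matching the kubelet fields above
	// (the " m=+…" monotonic suffix printed in the log is dropped here).
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2026-03-06 01:44:44 +0000 UTC") // podCreationTimestamp
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2026-03-06 01:45:10.523010287 +0000 UTC") // watchObservedRunningTime
	if err != nil {
		panic(err)
	}
	fmt.Println(running.Sub(created)) // 26.523010287s, the podStartE2EDuration in the log
}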
Mar 6 01:45:11.446879 containerd[1589]: time="2026-03-06T01:45:11.446277851Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:45:11.447703 containerd[1589]: time="2026-03-06T01:45:11.447431045Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Mar 6 01:45:11.450088 containerd[1589]: time="2026-03-06T01:45:11.449834082Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:45:11.464659 containerd[1589]: time="2026-03-06T01:45:11.464114580Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:45:11.468364 containerd[1589]: time="2026-03-06T01:45:11.467086685Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 2.254791298s" Mar 6 01:45:11.468364 containerd[1589]: time="2026-03-06T01:45:11.467136838Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Mar 6 01:45:11.471136 containerd[1589]: time="2026-03-06T01:45:11.469548074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 6 01:45:11.493082 containerd[1589]: time="2026-03-06T01:45:11.492986452Z" level=info msg="CreateContainer within sandbox \"869707a182738677206c80076a775681a19df08a1685fbd8948c085da709accd\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 6 01:45:11.497956 kubelet[2674]: E0306 01:45:11.497877 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:45:11.517041 containerd[1589]: time="2026-03-06T01:45:11.516926897Z" level=info msg="CreateContainer within sandbox \"869707a182738677206c80076a775681a19df08a1685fbd8948c085da709accd\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"3ebd59fc6bfa5524bf19d5d7982ee26dabd491b6f53a514a6f8661db3985879f\"" Mar 6 01:45:11.521115 containerd[1589]: time="2026-03-06T01:45:11.519149620Z" level=info msg="StartContainer for \"3ebd59fc6bfa5524bf19d5d7982ee26dabd491b6f53a514a6f8661db3985879f\"" Mar 6 01:45:11.656762 systemd-networkd[1250]: caliab211546836: Gained IPv6LL Mar 6 01:45:11.685732 systemd-networkd[1250]: vxlan.calico: Link UP Mar 6 01:45:11.685743 systemd-networkd[1250]: vxlan.calico: Gained carrier Mar 6 01:45:11.723234 containerd[1589]: time="2026-03-06T01:45:11.722389486Z" level=info msg="StartContainer for \"3ebd59fc6bfa5524bf19d5d7982ee26dabd491b6f53a514a6f8661db3985879f\" returns successfully" Mar 6 01:45:12.526109 kubelet[2674]: I0306 01:45:12.525137 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-c6d5dc557-wsvgg" podStartSLOduration=13.623066565 
podStartE2EDuration="16.525122291s" podCreationTimestamp="2026-03-06 01:44:56 +0000 UTC" firstStartedPulling="2026-03-06 01:45:08.566202257 +0000 UTC m=+27.632930394" lastFinishedPulling="2026-03-06 01:45:11.468257973 +0000 UTC m=+30.534986120" observedRunningTime="2026-03-06 01:45:12.524996719 +0000 UTC m=+31.591724857" watchObservedRunningTime="2026-03-06 01:45:12.525122291 +0000 UTC m=+31.591850428" Mar 6 01:45:13.084139 containerd[1589]: time="2026-03-06T01:45:13.084050455Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:45:13.085148 containerd[1589]: time="2026-03-06T01:45:13.085103018Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Mar 6 01:45:13.086145 containerd[1589]: time="2026-03-06T01:45:13.086108069Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:45:13.088870 containerd[1589]: time="2026-03-06T01:45:13.088781796Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:45:13.089912 containerd[1589]: time="2026-03-06T01:45:13.089851332Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 1.620271419s" Mar 6 01:45:13.089912 containerd[1589]: time="2026-03-06T01:45:13.089901835Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 6 01:45:13.092001 containerd[1589]: time="2026-03-06T01:45:13.091942749Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 6 01:45:13.095385 containerd[1589]: time="2026-03-06T01:45:13.095331354Z" level=info msg="CreateContainer within sandbox \"9e106a7e455d72c8a0a17f77771bba1e86e6622d2b77dee17b4e482862fedb70\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 6 01:45:13.120273 containerd[1589]: time="2026-03-06T01:45:13.120183887Z" level=info msg="CreateContainer within sandbox \"9e106a7e455d72c8a0a17f77771bba1e86e6622d2b77dee17b4e482862fedb70\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f2b0c9e47e088b1933aab03966fa2dd84243db03635ce569231d3671c3032659\"" Mar 6 01:45:13.121034 containerd[1589]: time="2026-03-06T01:45:13.120892685Z" level=info msg="StartContainer for \"f2b0c9e47e088b1933aab03966fa2dd84243db03635ce569231d3671c3032659\"" Mar 6 01:45:13.202039 containerd[1589]: time="2026-03-06T01:45:13.201976713Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:45:13.203019 containerd[1589]: time="2026-03-06T01:45:13.202973813Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 6 01:45:13.205179 containerd[1589]: time="2026-03-06T01:45:13.205150626Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 113.161221ms" Mar 6 01:45:13.205245 containerd[1589]: time="2026-03-06T01:45:13.205180601Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 6 01:45:13.208305 containerd[1589]: time="2026-03-06T01:45:13.208266810Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 6 01:45:13.214239 containerd[1589]: time="2026-03-06T01:45:13.212972761Z" level=info msg="CreateContainer within sandbox \"a7025d1b75745a47da65782711563154cfac7a90f668ac4956dd2c60e514c173\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 6 01:45:13.214239 containerd[1589]: time="2026-03-06T01:45:13.214112021Z" level=info msg="StartContainer for \"f2b0c9e47e088b1933aab03966fa2dd84243db03635ce569231d3671c3032659\" returns successfully" Mar 6 01:45:13.239726 containerd[1589]: time="2026-03-06T01:45:13.239584635Z" level=info msg="CreateContainer within sandbox \"a7025d1b75745a47da65782711563154cfac7a90f668ac4956dd2c60e514c173\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4200d12622cbf46f78440f538027164e79b77bcc14013ec682d995e4e4566940\"" Mar 6 01:45:13.240259 containerd[1589]: time="2026-03-06T01:45:13.240239593Z" level=info msg="StartContainer for \"4200d12622cbf46f78440f538027164e79b77bcc14013ec682d995e4e4566940\"" Mar 6 01:45:13.347378 containerd[1589]: time="2026-03-06T01:45:13.347180959Z" level=info msg="StartContainer for \"4200d12622cbf46f78440f538027164e79b77bcc14013ec682d995e4e4566940\" returns successfully" Mar 6 01:45:13.385049 systemd-networkd[1250]: vxlan.calico: Gained IPv6LL Mar 6 01:45:13.522172 kubelet[2674]: I0306 01:45:13.522108 2674 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 6 01:45:13.537496 kubelet[2674]: I0306 01:45:13.537371 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-6b5896ddfd-x6rlz" podStartSLOduration=14.256245668 podStartE2EDuration="18.537359311s" podCreationTimestamp="2026-03-06 01:44:55 +0000 UTC" firstStartedPulling="2026-03-06 01:45:08.925294569 +0000 UTC m=+27.992022705" lastFinishedPulling="2026-03-06 01:45:13.20640821 +0000 UTC m=+32.273136348" observedRunningTime="2026-03-06 01:45:13.535010987 +0000 UTC m=+32.601739124" watchObservedRunningTime="2026-03-06 01:45:13.537359311 +0000 UTC m=+32.604087448" Mar 6 01:45:13.882155 containerd[1589]: time="2026-03-06T01:45:13.882100426Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:45:13.883628 containerd[1589]: time="2026-03-06T01:45:13.883472097Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Mar 6 01:45:13.885136 containerd[1589]: time="2026-03-06T01:45:13.884874916Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:45:13.888523 containerd[1589]: time="2026-03-06T01:45:13.888497832Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:45:13.889604 containerd[1589]: time="2026-03-06T01:45:13.889516070Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 681.206742ms" Mar 6 01:45:13.889604 containerd[1589]: time="2026-03-06T01:45:13.889557256Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Mar 6 01:45:13.892891 containerd[1589]: time="2026-03-06T01:45:13.892856604Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 6 01:45:13.897491 containerd[1589]: time="2026-03-06T01:45:13.897409928Z" level=info msg="CreateContainer within sandbox \"018d17924c11219bae1b3d8447dae3af7fed408777ab9b36bed992dfd49599fc\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 6 01:45:13.926140 containerd[1589]: time="2026-03-06T01:45:13.926074088Z" level=info msg="CreateContainer within sandbox \"018d17924c11219bae1b3d8447dae3af7fed408777ab9b36bed992dfd49599fc\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"aed63446b80cd1e3d47db38d6b115005e75b967a45558f8b13398d677fe7c05e\"" Mar 6 01:45:13.926879 containerd[1589]: time="2026-03-06T01:45:13.926826862Z" level=info msg="StartContainer for \"aed63446b80cd1e3d47db38d6b115005e75b967a45558f8b13398d677fe7c05e\"" Mar 6 01:45:14.029069 containerd[1589]: time="2026-03-06T01:45:14.028889353Z" level=info msg="StartContainer for \"aed63446b80cd1e3d47db38d6b115005e75b967a45558f8b13398d677fe7c05e\" returns successfully" Mar 6 01:45:14.115900 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1437385385.mount: Deactivated successfully. Mar 6 01:45:14.553332 kubelet[2674]: I0306 01:45:14.553174 2674 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 6 01:45:14.556833 kubelet[2674]: I0306 01:45:14.556692 2674 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 6 01:45:14.911842 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2265500563.mount: Deactivated successfully. 
Mar 6 01:45:14.938323 containerd[1589]: time="2026-03-06T01:45:14.938160287Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:45:14.939600 containerd[1589]: time="2026-03-06T01:45:14.939484262Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Mar 6 01:45:14.941159 containerd[1589]: time="2026-03-06T01:45:14.940936048Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:45:14.965287 containerd[1589]: time="2026-03-06T01:45:14.945112137Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:45:14.965432 containerd[1589]: time="2026-03-06T01:45:14.945850716Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 1.052955231s" Mar 6 01:45:14.965524 containerd[1589]: time="2026-03-06T01:45:14.965472702Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Mar 6 01:45:14.968480 containerd[1589]: time="2026-03-06T01:45:14.968095042Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 6 01:45:14.973205 containerd[1589]: time="2026-03-06T01:45:14.973035435Z" level=info msg="CreateContainer within sandbox \"93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 6 01:45:14.993201 containerd[1589]: time="2026-03-06T01:45:14.993167554Z" level=info msg="CreateContainer within sandbox \"93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"c159cc1c9a52bd9638407e45901c1e7c28a37d0d00dc107eef07c0da0d5033a1\"" Mar 6 01:45:14.994382 containerd[1589]: time="2026-03-06T01:45:14.994296690Z" level=info msg="StartContainer for \"c159cc1c9a52bd9638407e45901c1e7c28a37d0d00dc107eef07c0da0d5033a1\"" Mar 6 01:45:15.122105 containerd[1589]: time="2026-03-06T01:45:15.122035001Z" level=info msg="StartContainer for \"c159cc1c9a52bd9638407e45901c1e7c28a37d0d00dc107eef07c0da0d5033a1\" returns successfully" Mar 6 01:45:15.600701 containerd[1589]: time="2026-03-06T01:45:15.599209040Z" level=info msg="StopContainer for \"7d6d66d34f83360c36d97d33f78c15fdcc9b0ae18e45913dca80a5a4fe71721e\" with timeout 30 (s)" Mar 6 01:45:15.600701 containerd[1589]: time="2026-03-06T01:45:15.599209445Z" level=info msg="StopContainer for \"c159cc1c9a52bd9638407e45901c1e7c28a37d0d00dc107eef07c0da0d5033a1\" with timeout 30 (s)" Mar 6 01:45:15.603891 kubelet[2674]: I0306 01:45:15.603809 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6546896c4d-b56z6" podStartSLOduration=11.181404065 podStartE2EDuration="17.603786008s" podCreationTimestamp="2026-03-06 01:44:58 +0000 UTC" 
firstStartedPulling="2026-03-06 01:45:08.545586436 +0000 UTC m=+27.612314574" lastFinishedPulling="2026-03-06 01:45:14.96796838 +0000 UTC m=+34.034696517" observedRunningTime="2026-03-06 01:45:15.600830787 +0000 UTC m=+34.667558944" watchObservedRunningTime="2026-03-06 01:45:15.603786008 +0000 UTC m=+34.670514275" Mar 6 01:45:15.606705 kubelet[2674]: I0306 01:45:15.606169 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-6b5896ddfd-dq4x8" podStartSLOduration=16.183335607 podStartE2EDuration="20.606158522s" podCreationTimestamp="2026-03-06 01:44:55 +0000 UTC" firstStartedPulling="2026-03-06 01:45:08.667994564 +0000 UTC m=+27.734722701" lastFinishedPulling="2026-03-06 01:45:13.090817479 +0000 UTC m=+32.157545616" observedRunningTime="2026-03-06 01:45:13.553130976 +0000 UTC m=+32.619859113" watchObservedRunningTime="2026-03-06 01:45:15.606158522 +0000 UTC m=+34.672886659" Mar 6 01:45:15.612678 containerd[1589]: time="2026-03-06T01:45:15.611047818Z" level=info msg="Stop container \"7d6d66d34f83360c36d97d33f78c15fdcc9b0ae18e45913dca80a5a4fe71721e\" with signal terminated" Mar 6 01:45:15.612678 containerd[1589]: time="2026-03-06T01:45:15.611264740Z" level=info msg="Stop container \"c159cc1c9a52bd9638407e45901c1e7c28a37d0d00dc107eef07c0da0d5033a1\" with signal terminated" Mar 6 01:45:15.802288 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c159cc1c9a52bd9638407e45901c1e7c28a37d0d00dc107eef07c0da0d5033a1-rootfs.mount: Deactivated successfully. Mar 6 01:45:15.802611 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7d6d66d34f83360c36d97d33f78c15fdcc9b0ae18e45913dca80a5a4fe71721e-rootfs.mount: Deactivated successfully. Mar 6 01:45:15.818800 containerd[1589]: time="2026-03-06T01:45:15.805536902Z" level=info msg="shim disconnected" id=7d6d66d34f83360c36d97d33f78c15fdcc9b0ae18e45913dca80a5a4fe71721e namespace=k8s.io Mar 6 01:45:15.818800 containerd[1589]: time="2026-03-06T01:45:15.818783782Z" level=warning msg="cleaning up after shim disconnected" id=7d6d66d34f83360c36d97d33f78c15fdcc9b0ae18e45913dca80a5a4fe71721e namespace=k8s.io Mar 6 01:45:15.818800 containerd[1589]: time="2026-03-06T01:45:15.818805242Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 6 01:45:15.836210 containerd[1589]: time="2026-03-06T01:45:15.836119838Z" level=info msg="shim disconnected" id=c159cc1c9a52bd9638407e45901c1e7c28a37d0d00dc107eef07c0da0d5033a1 namespace=k8s.io Mar 6 01:45:15.836210 containerd[1589]: time="2026-03-06T01:45:15.836183256Z" level=warning msg="cleaning up after shim disconnected" id=c159cc1c9a52bd9638407e45901c1e7c28a37d0d00dc107eef07c0da0d5033a1 namespace=k8s.io Mar 6 01:45:15.836210 containerd[1589]: time="2026-03-06T01:45:15.836194347Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 6 01:45:15.894033 containerd[1589]: time="2026-03-06T01:45:15.893815843Z" level=info msg="StopContainer for \"c159cc1c9a52bd9638407e45901c1e7c28a37d0d00dc107eef07c0da0d5033a1\" returns successfully" Mar 6 01:45:15.900766 containerd[1589]: time="2026-03-06T01:45:15.900673454Z" level=info msg="StopContainer for \"7d6d66d34f83360c36d97d33f78c15fdcc9b0ae18e45913dca80a5a4fe71721e\" returns successfully" Mar 6 01:45:15.907369 containerd[1589]: time="2026-03-06T01:45:15.907068563Z" level=info msg="StopPodSandbox for \"93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d\"" Mar 6 01:45:15.907369 containerd[1589]: time="2026-03-06T01:45:15.907124848Z" level=info msg="Container to stop 
\"7d6d66d34f83360c36d97d33f78c15fdcc9b0ae18e45913dca80a5a4fe71721e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 6 01:45:15.907369 containerd[1589]: time="2026-03-06T01:45:15.907137761Z" level=info msg="Container to stop \"c159cc1c9a52bd9638407e45901c1e7c28a37d0d00dc107eef07c0da0d5033a1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 6 01:45:15.911208 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d-shm.mount: Deactivated successfully. Mar 6 01:45:15.955739 containerd[1589]: time="2026-03-06T01:45:15.955564210Z" level=info msg="shim disconnected" id=93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d namespace=k8s.io Mar 6 01:45:15.955739 containerd[1589]: time="2026-03-06T01:45:15.955669815Z" level=warning msg="cleaning up after shim disconnected" id=93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d namespace=k8s.io Mar 6 01:45:15.955739 containerd[1589]: time="2026-03-06T01:45:15.955681476Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 6 01:45:15.978525 containerd[1589]: time="2026-03-06T01:45:15.977658724Z" level=warning msg="cleanup warnings time=\"2026-03-06T01:45:15Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 6 01:45:16.103600 systemd-networkd[1250]: calia2aed348f5f: Link DOWN Mar 6 01:45:16.104243 systemd-networkd[1250]: calia2aed348f5f: Lost carrier Mar 6 01:45:16.112759 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d-rootfs.mount: Deactivated successfully. Mar 6 01:45:16.239554 containerd[1589]: 2026-03-06 01:45:16.095 [INFO][4936] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d" Mar 6 01:45:16.239554 containerd[1589]: 2026-03-06 01:45:16.097 [INFO][4936] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d" iface="eth0" netns="/var/run/netns/cni-d3e1ebc9-b7e2-296e-48c4-a656b31e2454" Mar 6 01:45:16.239554 containerd[1589]: 2026-03-06 01:45:16.097 [INFO][4936] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d" iface="eth0" netns="/var/run/netns/cni-d3e1ebc9-b7e2-296e-48c4-a656b31e2454" Mar 6 01:45:16.239554 containerd[1589]: 2026-03-06 01:45:16.118 [INFO][4936] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d" after=21.094518ms iface="eth0" netns="/var/run/netns/cni-d3e1ebc9-b7e2-296e-48c4-a656b31e2454" Mar 6 01:45:16.239554 containerd[1589]: 2026-03-06 01:45:16.118 [INFO][4936] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d" Mar 6 01:45:16.239554 containerd[1589]: 2026-03-06 01:45:16.118 [INFO][4936] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d" Mar 6 01:45:16.239554 containerd[1589]: 2026-03-06 01:45:16.168 [INFO][4953] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d" HandleID="k8s-pod-network.93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d" Workload="localhost-k8s-whisker--6546896c4d--b56z6-eth0" Mar 6 01:45:16.239554 containerd[1589]: 2026-03-06 01:45:16.169 [INFO][4953] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:45:16.239554 containerd[1589]: 2026-03-06 01:45:16.169 [INFO][4953] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:45:16.239554 containerd[1589]: 2026-03-06 01:45:16.229 [INFO][4953] ipam/ipam_plugin.go 516: Released address using handleID ContainerID="93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d" HandleID="k8s-pod-network.93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d" Workload="localhost-k8s-whisker--6546896c4d--b56z6-eth0" Mar 6 01:45:16.239554 containerd[1589]: 2026-03-06 01:45:16.229 [INFO][4953] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d" HandleID="k8s-pod-network.93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d" Workload="localhost-k8s-whisker--6546896c4d--b56z6-eth0" Mar 6 01:45:16.239554 containerd[1589]: 2026-03-06 01:45:16.231 [INFO][4953] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:45:16.239554 containerd[1589]: 2026-03-06 01:45:16.235 [INFO][4936] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d" Mar 6 01:45:16.240716 containerd[1589]: time="2026-03-06T01:45:16.239962643Z" level=info msg="TearDown network for sandbox \"93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d\" successfully" Mar 6 01:45:16.240716 containerd[1589]: time="2026-03-06T01:45:16.239990065Z" level=info msg="StopPodSandbox for \"93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d\" returns successfully" Mar 6 01:45:16.248177 systemd[1]: run-netns-cni\x2dd3e1ebc9\x2db7e2\x2d296e\x2d48c4\x2da656b31e2454.mount: Deactivated successfully. 
Mar 6 01:45:16.383296 kubelet[2674]: I0306 01:45:16.383257 2674 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d60390ac-fb3e-4a75-adc8-f0b708d60ef9-whisker-ca-bundle\") pod \"d60390ac-fb3e-4a75-adc8-f0b708d60ef9\" (UID: \"d60390ac-fb3e-4a75-adc8-f0b708d60ef9\") " Mar 6 01:45:16.383674 kubelet[2674]: I0306 01:45:16.383602 2674 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpg5l\" (UniqueName: \"kubernetes.io/projected/d60390ac-fb3e-4a75-adc8-f0b708d60ef9-kube-api-access-bpg5l\") pod \"d60390ac-fb3e-4a75-adc8-f0b708d60ef9\" (UID: \"d60390ac-fb3e-4a75-adc8-f0b708d60ef9\") " Mar 6 01:45:16.383674 kubelet[2674]: I0306 01:45:16.383644 2674 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d60390ac-fb3e-4a75-adc8-f0b708d60ef9-whisker-backend-key-pair\") pod \"d60390ac-fb3e-4a75-adc8-f0b708d60ef9\" (UID: \"d60390ac-fb3e-4a75-adc8-f0b708d60ef9\") " Mar 6 01:45:16.383674 kubelet[2674]: I0306 01:45:16.383665 2674 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/d60390ac-fb3e-4a75-adc8-f0b708d60ef9-nginx-config\") pod \"d60390ac-fb3e-4a75-adc8-f0b708d60ef9\" (UID: \"d60390ac-fb3e-4a75-adc8-f0b708d60ef9\") " Mar 6 01:45:16.384893 kubelet[2674]: I0306 01:45:16.384225 2674 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d60390ac-fb3e-4a75-adc8-f0b708d60ef9-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "d60390ac-fb3e-4a75-adc8-f0b708d60ef9" (UID: "d60390ac-fb3e-4a75-adc8-f0b708d60ef9"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 6 01:45:16.385134 kubelet[2674]: I0306 01:45:16.385104 2674 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d60390ac-fb3e-4a75-adc8-f0b708d60ef9-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "d60390ac-fb3e-4a75-adc8-f0b708d60ef9" (UID: "d60390ac-fb3e-4a75-adc8-f0b708d60ef9"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 6 01:45:16.393669 kubelet[2674]: I0306 01:45:16.393614 2674 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d60390ac-fb3e-4a75-adc8-f0b708d60ef9-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "d60390ac-fb3e-4a75-adc8-f0b708d60ef9" (UID: "d60390ac-fb3e-4a75-adc8-f0b708d60ef9"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 6 01:45:16.393993 kubelet[2674]: I0306 01:45:16.393791 2674 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d60390ac-fb3e-4a75-adc8-f0b708d60ef9-kube-api-access-bpg5l" (OuterVolumeSpecName: "kube-api-access-bpg5l") pod "d60390ac-fb3e-4a75-adc8-f0b708d60ef9" (UID: "d60390ac-fb3e-4a75-adc8-f0b708d60ef9"). InnerVolumeSpecName "kube-api-access-bpg5l". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 6 01:45:16.395243 systemd[1]: var-lib-kubelet-pods-d60390ac\x2dfb3e\x2d4a75\x2dadc8\x2df0b708d60ef9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbpg5l.mount: Deactivated successfully. 
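The UnmountVolume lines above and the systemd mount units around them refer to the same directories: each pod volume lives under /var/lib/kubelet/pods/<pod UID>/volumes/kubernetes.io~<plugin>/<volume name>, which is what the escaped unit names (\x7e for '~', \x2d for '-') encode. A short Go helper that rebuilds those paths from the values in the log:

package main

import (
	"fmt"
	"path/filepath"
)

// podVolumeDir builds the on-disk directory the kubelet is unmounting above:
// /var/lib/kubelet/pods/<pod-UID>/volumes/kubernetes.io~<plugin>/<volume-name>
func podVolumeDir(podUID, plugin, volume string) string {
	return filepath.Join("/var/lib/kubelet/pods", podUID, "volumes", "kubernetes.io~"+plugin, volume)
}

func main() {
	uid := "d60390ac-fb3e-4a75-adc8-f0b708d60ef9" // UID of the whisker pod being torn down
	fmt.Println(podVolumeDir(uid, "projected", "kube-api-access-bpg5l"))
	fmt.Println(podVolumeDir(uid, "secret", "whisker-backend-key-pair"))
	fmt.Println(podVolumeDir(uid, "configmap", "whisker-ca-bundle"))
}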
Mar 6 01:45:16.395512 systemd[1]: var-lib-kubelet-pods-d60390ac\x2dfb3e\x2d4a75\x2dadc8\x2df0b708d60ef9-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 6 01:45:16.484588 kubelet[2674]: I0306 01:45:16.484421 2674 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/d60390ac-fb3e-4a75-adc8-f0b708d60ef9-nginx-config\") on node \"localhost\" DevicePath \"\"" Mar 6 01:45:16.484588 kubelet[2674]: I0306 01:45:16.484485 2674 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d60390ac-fb3e-4a75-adc8-f0b708d60ef9-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Mar 6 01:45:16.484588 kubelet[2674]: I0306 01:45:16.484502 2674 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bpg5l\" (UniqueName: \"kubernetes.io/projected/d60390ac-fb3e-4a75-adc8-f0b708d60ef9-kube-api-access-bpg5l\") on node \"localhost\" DevicePath \"\"" Mar 6 01:45:16.484588 kubelet[2674]: I0306 01:45:16.484511 2674 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d60390ac-fb3e-4a75-adc8-f0b708d60ef9-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Mar 6 01:45:16.573528 kubelet[2674]: I0306 01:45:16.573238 2674 scope.go:117] "RemoveContainer" containerID="c159cc1c9a52bd9638407e45901c1e7c28a37d0d00dc107eef07c0da0d5033a1" Mar 6 01:45:16.576936 containerd[1589]: time="2026-03-06T01:45:16.576600772Z" level=info msg="RemoveContainer for \"c159cc1c9a52bd9638407e45901c1e7c28a37d0d00dc107eef07c0da0d5033a1\"" Mar 6 01:45:16.584984 containerd[1589]: time="2026-03-06T01:45:16.584600891Z" level=info msg="RemoveContainer for \"c159cc1c9a52bd9638407e45901c1e7c28a37d0d00dc107eef07c0da0d5033a1\" returns successfully" Mar 6 01:45:16.586425 kubelet[2674]: I0306 01:45:16.586340 2674 scope.go:117] "RemoveContainer" containerID="7d6d66d34f83360c36d97d33f78c15fdcc9b0ae18e45913dca80a5a4fe71721e" Mar 6 01:45:16.588415 containerd[1589]: time="2026-03-06T01:45:16.588065161Z" level=info msg="RemoveContainer for \"7d6d66d34f83360c36d97d33f78c15fdcc9b0ae18e45913dca80a5a4fe71721e\"" Mar 6 01:45:16.594205 containerd[1589]: time="2026-03-06T01:45:16.594128168Z" level=info msg="RemoveContainer for \"7d6d66d34f83360c36d97d33f78c15fdcc9b0ae18e45913dca80a5a4fe71721e\" returns successfully" Mar 6 01:45:16.594749 kubelet[2674]: I0306 01:45:16.594640 2674 scope.go:117] "RemoveContainer" containerID="c159cc1c9a52bd9638407e45901c1e7c28a37d0d00dc107eef07c0da0d5033a1" Mar 6 01:45:16.627838 containerd[1589]: time="2026-03-06T01:45:16.602921236Z" level=error msg="ContainerStatus for \"c159cc1c9a52bd9638407e45901c1e7c28a37d0d00dc107eef07c0da0d5033a1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c159cc1c9a52bd9638407e45901c1e7c28a37d0d00dc107eef07c0da0d5033a1\": not found" Mar 6 01:45:16.635241 kubelet[2674]: E0306 01:45:16.635166 2674 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c159cc1c9a52bd9638407e45901c1e7c28a37d0d00dc107eef07c0da0d5033a1\": not found" containerID="c159cc1c9a52bd9638407e45901c1e7c28a37d0d00dc107eef07c0da0d5033a1" Mar 6 01:45:16.635982 kubelet[2674]: I0306 01:45:16.635363 2674 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"c159cc1c9a52bd9638407e45901c1e7c28a37d0d00dc107eef07c0da0d5033a1"} err="failed to get container status \"c159cc1c9a52bd9638407e45901c1e7c28a37d0d00dc107eef07c0da0d5033a1\": rpc error: code = NotFound desc = an error occurred when try to find container \"c159cc1c9a52bd9638407e45901c1e7c28a37d0d00dc107eef07c0da0d5033a1\": not found" Mar 6 01:45:16.635982 kubelet[2674]: I0306 01:45:16.635554 2674 scope.go:117] "RemoveContainer" containerID="7d6d66d34f83360c36d97d33f78c15fdcc9b0ae18e45913dca80a5a4fe71721e" Mar 6 01:45:16.637211 containerd[1589]: time="2026-03-06T01:45:16.637045369Z" level=error msg="ContainerStatus for \"7d6d66d34f83360c36d97d33f78c15fdcc9b0ae18e45913dca80a5a4fe71721e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7d6d66d34f83360c36d97d33f78c15fdcc9b0ae18e45913dca80a5a4fe71721e\": not found" Mar 6 01:45:16.637780 kubelet[2674]: E0306 01:45:16.637722 2674 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7d6d66d34f83360c36d97d33f78c15fdcc9b0ae18e45913dca80a5a4fe71721e\": not found" containerID="7d6d66d34f83360c36d97d33f78c15fdcc9b0ae18e45913dca80a5a4fe71721e" Mar 6 01:45:16.638043 kubelet[2674]: I0306 01:45:16.637761 2674 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7d6d66d34f83360c36d97d33f78c15fdcc9b0ae18e45913dca80a5a4fe71721e"} err="failed to get container status \"7d6d66d34f83360c36d97d33f78c15fdcc9b0ae18e45913dca80a5a4fe71721e\": rpc error: code = NotFound desc = an error occurred when try to find container \"7d6d66d34f83360c36d97d33f78c15fdcc9b0ae18e45913dca80a5a4fe71721e\": not found" Mar 6 01:45:16.638043 kubelet[2674]: I0306 01:45:16.637852 2674 scope.go:117] "RemoveContainer" containerID="c159cc1c9a52bd9638407e45901c1e7c28a37d0d00dc107eef07c0da0d5033a1" Mar 6 01:45:16.639233 containerd[1589]: time="2026-03-06T01:45:16.639009882Z" level=error msg="ContainerStatus for \"c159cc1c9a52bd9638407e45901c1e7c28a37d0d00dc107eef07c0da0d5033a1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c159cc1c9a52bd9638407e45901c1e7c28a37d0d00dc107eef07c0da0d5033a1\": not found" Mar 6 01:45:16.640297 kubelet[2674]: I0306 01:45:16.640200 2674 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c159cc1c9a52bd9638407e45901c1e7c28a37d0d00dc107eef07c0da0d5033a1"} err="failed to get container status \"c159cc1c9a52bd9638407e45901c1e7c28a37d0d00dc107eef07c0da0d5033a1\": rpc error: code = NotFound desc = an error occurred when try to find container \"c159cc1c9a52bd9638407e45901c1e7c28a37d0d00dc107eef07c0da0d5033a1\": not found" Mar 6 01:45:16.640297 kubelet[2674]: I0306 01:45:16.640246 2674 scope.go:117] "RemoveContainer" containerID="7d6d66d34f83360c36d97d33f78c15fdcc9b0ae18e45913dca80a5a4fe71721e" Mar 6 01:45:16.641650 kubelet[2674]: I0306 01:45:16.640802 2674 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7d6d66d34f83360c36d97d33f78c15fdcc9b0ae18e45913dca80a5a4fe71721e"} err="failed to get container status \"7d6d66d34f83360c36d97d33f78c15fdcc9b0ae18e45913dca80a5a4fe71721e\": rpc error: code = NotFound desc = an error occurred when try to find container \"7d6d66d34f83360c36d97d33f78c15fdcc9b0ae18e45913dca80a5a4fe71721e\": not found" Mar 6 01:45:16.641773 containerd[1589]: time="2026-03-06T01:45:16.640526232Z" 
level=error msg="ContainerStatus for \"7d6d66d34f83360c36d97d33f78c15fdcc9b0ae18e45913dca80a5a4fe71721e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7d6d66d34f83360c36d97d33f78c15fdcc9b0ae18e45913dca80a5a4fe71721e\": not found" Mar 6 01:45:16.788137 kubelet[2674]: I0306 01:45:16.788078 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f76118a9-ec87-4e2c-9825-a0f1f6355170-whisker-backend-key-pair\") pod \"whisker-7d696fd6c7-mht4r\" (UID: \"f76118a9-ec87-4e2c-9825-a0f1f6355170\") " pod="calico-system/whisker-7d696fd6c7-mht4r" Mar 6 01:45:16.788137 kubelet[2674]: I0306 01:45:16.788121 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g92vf\" (UniqueName: \"kubernetes.io/projected/f76118a9-ec87-4e2c-9825-a0f1f6355170-kube-api-access-g92vf\") pod \"whisker-7d696fd6c7-mht4r\" (UID: \"f76118a9-ec87-4e2c-9825-a0f1f6355170\") " pod="calico-system/whisker-7d696fd6c7-mht4r" Mar 6 01:45:16.788137 kubelet[2674]: I0306 01:45:16.788142 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f76118a9-ec87-4e2c-9825-a0f1f6355170-whisker-ca-bundle\") pod \"whisker-7d696fd6c7-mht4r\" (UID: \"f76118a9-ec87-4e2c-9825-a0f1f6355170\") " pod="calico-system/whisker-7d696fd6c7-mht4r" Mar 6 01:45:16.788507 kubelet[2674]: I0306 01:45:16.788158 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/f76118a9-ec87-4e2c-9825-a0f1f6355170-nginx-config\") pod \"whisker-7d696fd6c7-mht4r\" (UID: \"f76118a9-ec87-4e2c-9825-a0f1f6355170\") " pod="calico-system/whisker-7d696fd6c7-mht4r" Mar 6 01:45:16.801786 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3314854580.mount: Deactivated successfully. 
Mar 6 01:45:17.012520 containerd[1589]: time="2026-03-06T01:45:17.012306008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d696fd6c7-mht4r,Uid:f76118a9-ec87-4e2c-9825-a0f1f6355170,Namespace:calico-system,Attempt:0,}" Mar 6 01:45:17.206139 kubelet[2674]: I0306 01:45:17.205570 2674 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d60390ac-fb3e-4a75-adc8-f0b708d60ef9" path="/var/lib/kubelet/pods/d60390ac-fb3e-4a75-adc8-f0b708d60ef9/volumes" Mar 6 01:45:17.234621 systemd-networkd[1250]: cali0019ddf7ca9: Link UP Mar 6 01:45:17.238626 systemd-networkd[1250]: cali0019ddf7ca9: Gained carrier Mar 6 01:45:17.255902 containerd[1589]: 2026-03-06 01:45:17.088 [INFO][4981] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--7d696fd6c7--mht4r-eth0 whisker-7d696fd6c7- calico-system f76118a9-ec87-4e2c-9825-a0f1f6355170 1065 0 2026-03-06 01:45:16 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7d696fd6c7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-7d696fd6c7-mht4r eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali0019ddf7ca9 [] [] }} ContainerID="341ef5c253d060cb39fab8554a70b58f04818b1422fc32e8037bfc2fc00375b6" Namespace="calico-system" Pod="whisker-7d696fd6c7-mht4r" WorkloadEndpoint="localhost-k8s-whisker--7d696fd6c7--mht4r-" Mar 6 01:45:17.255902 containerd[1589]: 2026-03-06 01:45:17.088 [INFO][4981] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="341ef5c253d060cb39fab8554a70b58f04818b1422fc32e8037bfc2fc00375b6" Namespace="calico-system" Pod="whisker-7d696fd6c7-mht4r" WorkloadEndpoint="localhost-k8s-whisker--7d696fd6c7--mht4r-eth0" Mar 6 01:45:17.255902 containerd[1589]: 2026-03-06 01:45:17.154 [INFO][4996] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="341ef5c253d060cb39fab8554a70b58f04818b1422fc32e8037bfc2fc00375b6" HandleID="k8s-pod-network.341ef5c253d060cb39fab8554a70b58f04818b1422fc32e8037bfc2fc00375b6" Workload="localhost-k8s-whisker--7d696fd6c7--mht4r-eth0" Mar 6 01:45:17.255902 containerd[1589]: 2026-03-06 01:45:17.165 [INFO][4996] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="341ef5c253d060cb39fab8554a70b58f04818b1422fc32e8037bfc2fc00375b6" HandleID="k8s-pod-network.341ef5c253d060cb39fab8554a70b58f04818b1422fc32e8037bfc2fc00375b6" Workload="localhost-k8s-whisker--7d696fd6c7--mht4r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fbb0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-7d696fd6c7-mht4r", "timestamp":"2026-03-06 01:45:17.154219239 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002cab00)} Mar 6 01:45:17.255902 containerd[1589]: 2026-03-06 01:45:17.166 [INFO][4996] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:45:17.255902 containerd[1589]: 2026-03-06 01:45:17.166 [INFO][4996] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 6 01:45:17.255902 containerd[1589]: 2026-03-06 01:45:17.166 [INFO][4996] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 6 01:45:17.255902 containerd[1589]: 2026-03-06 01:45:17.169 [INFO][4996] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.341ef5c253d060cb39fab8554a70b58f04818b1422fc32e8037bfc2fc00375b6" host="localhost" Mar 6 01:45:17.255902 containerd[1589]: 2026-03-06 01:45:17.175 [INFO][4996] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 6 01:45:17.255902 containerd[1589]: 2026-03-06 01:45:17.182 [INFO][4996] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 6 01:45:17.255902 containerd[1589]: 2026-03-06 01:45:17.185 [INFO][4996] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 6 01:45:17.255902 containerd[1589]: 2026-03-06 01:45:17.194 [INFO][4996] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 6 01:45:17.255902 containerd[1589]: 2026-03-06 01:45:17.194 [INFO][4996] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.341ef5c253d060cb39fab8554a70b58f04818b1422fc32e8037bfc2fc00375b6" host="localhost" Mar 6 01:45:17.255902 containerd[1589]: 2026-03-06 01:45:17.199 [INFO][4996] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.341ef5c253d060cb39fab8554a70b58f04818b1422fc32e8037bfc2fc00375b6 Mar 6 01:45:17.255902 containerd[1589]: 2026-03-06 01:45:17.207 [INFO][4996] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.341ef5c253d060cb39fab8554a70b58f04818b1422fc32e8037bfc2fc00375b6" host="localhost" Mar 6 01:45:17.255902 containerd[1589]: 2026-03-06 01:45:17.219 [INFO][4996] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 handle="k8s-pod-network.341ef5c253d060cb39fab8554a70b58f04818b1422fc32e8037bfc2fc00375b6" host="localhost" Mar 6 01:45:17.255902 containerd[1589]: 2026-03-06 01:45:17.219 [INFO][4996] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.341ef5c253d060cb39fab8554a70b58f04818b1422fc32e8037bfc2fc00375b6" host="localhost" Mar 6 01:45:17.255902 containerd[1589]: 2026-03-06 01:45:17.219 [INFO][4996] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 6 01:45:17.255902 containerd[1589]: 2026-03-06 01:45:17.219 [INFO][4996] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="341ef5c253d060cb39fab8554a70b58f04818b1422fc32e8037bfc2fc00375b6" HandleID="k8s-pod-network.341ef5c253d060cb39fab8554a70b58f04818b1422fc32e8037bfc2fc00375b6" Workload="localhost-k8s-whisker--7d696fd6c7--mht4r-eth0" Mar 6 01:45:17.256895 containerd[1589]: 2026-03-06 01:45:17.224 [INFO][4981] cni-plugin/k8s.go 418: Populated endpoint ContainerID="341ef5c253d060cb39fab8554a70b58f04818b1422fc32e8037bfc2fc00375b6" Namespace="calico-system" Pod="whisker-7d696fd6c7-mht4r" WorkloadEndpoint="localhost-k8s-whisker--7d696fd6c7--mht4r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7d696fd6c7--mht4r-eth0", GenerateName:"whisker-7d696fd6c7-", Namespace:"calico-system", SelfLink:"", UID:"f76118a9-ec87-4e2c-9825-a0f1f6355170", ResourceVersion:"1065", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 45, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7d696fd6c7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-7d696fd6c7-mht4r", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0019ddf7ca9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:45:17.256895 containerd[1589]: 2026-03-06 01:45:17.225 [INFO][4981] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="341ef5c253d060cb39fab8554a70b58f04818b1422fc32e8037bfc2fc00375b6" Namespace="calico-system" Pod="whisker-7d696fd6c7-mht4r" WorkloadEndpoint="localhost-k8s-whisker--7d696fd6c7--mht4r-eth0" Mar 6 01:45:17.256895 containerd[1589]: 2026-03-06 01:45:17.225 [INFO][4981] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0019ddf7ca9 ContainerID="341ef5c253d060cb39fab8554a70b58f04818b1422fc32e8037bfc2fc00375b6" Namespace="calico-system" Pod="whisker-7d696fd6c7-mht4r" WorkloadEndpoint="localhost-k8s-whisker--7d696fd6c7--mht4r-eth0" Mar 6 01:45:17.256895 containerd[1589]: 2026-03-06 01:45:17.235 [INFO][4981] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="341ef5c253d060cb39fab8554a70b58f04818b1422fc32e8037bfc2fc00375b6" Namespace="calico-system" Pod="whisker-7d696fd6c7-mht4r" WorkloadEndpoint="localhost-k8s-whisker--7d696fd6c7--mht4r-eth0" Mar 6 01:45:17.256895 containerd[1589]: 2026-03-06 01:45:17.239 [INFO][4981] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="341ef5c253d060cb39fab8554a70b58f04818b1422fc32e8037bfc2fc00375b6" Namespace="calico-system" Pod="whisker-7d696fd6c7-mht4r" WorkloadEndpoint="localhost-k8s-whisker--7d696fd6c7--mht4r-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7d696fd6c7--mht4r-eth0", GenerateName:"whisker-7d696fd6c7-", Namespace:"calico-system", SelfLink:"", UID:"f76118a9-ec87-4e2c-9825-a0f1f6355170", ResourceVersion:"1065", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 1, 45, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7d696fd6c7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"341ef5c253d060cb39fab8554a70b58f04818b1422fc32e8037bfc2fc00375b6", Pod:"whisker-7d696fd6c7-mht4r", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0019ddf7ca9", MAC:"ae:82:72:f3:cc:32", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 01:45:17.256895 containerd[1589]: 2026-03-06 01:45:17.251 [INFO][4981] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="341ef5c253d060cb39fab8554a70b58f04818b1422fc32e8037bfc2fc00375b6" Namespace="calico-system" Pod="whisker-7d696fd6c7-mht4r" WorkloadEndpoint="localhost-k8s-whisker--7d696fd6c7--mht4r-eth0" Mar 6 01:45:17.322992 containerd[1589]: time="2026-03-06T01:45:17.322654576Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 6 01:45:17.322992 containerd[1589]: time="2026-03-06T01:45:17.322720828Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 6 01:45:17.323394 containerd[1589]: time="2026-03-06T01:45:17.323259082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:45:17.323612 containerd[1589]: time="2026-03-06T01:45:17.323516016Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 6 01:45:17.354912 systemd-resolved[1472]: Under memory pressure, flushing caches. Mar 6 01:45:17.359521 systemd-journald[1171]: Under memory pressure, flushing caches. Mar 6 01:45:17.354941 systemd-resolved[1472]: Flushed all caches. 
Mar 6 01:45:17.387964 containerd[1589]: time="2026-03-06T01:45:17.387814521Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:45:17.389992 systemd-resolved[1472]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 6 01:45:17.391183 containerd[1589]: time="2026-03-06T01:45:17.391069213Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Mar 6 01:45:17.393154 containerd[1589]: time="2026-03-06T01:45:17.392582433Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:45:17.398505 containerd[1589]: time="2026-03-06T01:45:17.398418161Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:45:17.400186 containerd[1589]: time="2026-03-06T01:45:17.400029683Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 2.431849244s" Mar 6 01:45:17.400186 containerd[1589]: time="2026-03-06T01:45:17.400112256Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Mar 6 01:45:17.403033 containerd[1589]: time="2026-03-06T01:45:17.402976886Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 6 01:45:17.407868 containerd[1589]: time="2026-03-06T01:45:17.407796947Z" level=info msg="CreateContainer within sandbox \"6859169ccd0b656113ba49a72a364fba3b07831c20027fdd91025f2c74017c5b\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 6 01:45:17.434285 containerd[1589]: time="2026-03-06T01:45:17.434074428Z" level=info msg="CreateContainer within sandbox \"6859169ccd0b656113ba49a72a364fba3b07831c20027fdd91025f2c74017c5b\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"bbca54c2ade9e089403ca6bf8fd4a877aff414d79a6db7611412b26ab492aac3\"" Mar 6 01:45:17.435615 containerd[1589]: time="2026-03-06T01:45:17.435559352Z" level=info msg="StartContainer for \"bbca54c2ade9e089403ca6bf8fd4a877aff414d79a6db7611412b26ab492aac3\"" Mar 6 01:45:17.437313 containerd[1589]: time="2026-03-06T01:45:17.437236328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d696fd6c7-mht4r,Uid:f76118a9-ec87-4e2c-9825-a0f1f6355170,Namespace:calico-system,Attempt:0,} returns sandbox id \"341ef5c253d060cb39fab8554a70b58f04818b1422fc32e8037bfc2fc00375b6\"" Mar 6 01:45:17.443213 containerd[1589]: time="2026-03-06T01:45:17.443024047Z" level=info msg="CreateContainer within sandbox \"341ef5c253d060cb39fab8554a70b58f04818b1422fc32e8037bfc2fc00375b6\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 6 01:45:17.491549 containerd[1589]: time="2026-03-06T01:45:17.491393041Z" level=info msg="CreateContainer within sandbox \"341ef5c253d060cb39fab8554a70b58f04818b1422fc32e8037bfc2fc00375b6\" for &ContainerMetadata{Name:whisker,Attempt:0,} 
returns container id \"82dde11e50cdd5da8dbec48b02a78294ca2e023eac931130c8e7b7774f9c5adb\"" Mar 6 01:45:17.492146 containerd[1589]: time="2026-03-06T01:45:17.492091934Z" level=info msg="StartContainer for \"82dde11e50cdd5da8dbec48b02a78294ca2e023eac931130c8e7b7774f9c5adb\"" Mar 6 01:45:17.551024 containerd[1589]: time="2026-03-06T01:45:17.550907125Z" level=info msg="StartContainer for \"bbca54c2ade9e089403ca6bf8fd4a877aff414d79a6db7611412b26ab492aac3\" returns successfully" Mar 6 01:45:17.607551 kubelet[2674]: I0306 01:45:17.607044 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-ms9pq" podStartSLOduration=15.239447075 podStartE2EDuration="22.60702795s" podCreationTimestamp="2026-03-06 01:44:55 +0000 UTC" firstStartedPulling="2026-03-06 01:45:10.03529123 +0000 UTC m=+29.102019367" lastFinishedPulling="2026-03-06 01:45:17.402872085 +0000 UTC m=+36.469600242" observedRunningTime="2026-03-06 01:45:17.605664848 +0000 UTC m=+36.672392985" watchObservedRunningTime="2026-03-06 01:45:17.60702795 +0000 UTC m=+36.673756087" Mar 6 01:45:17.630739 containerd[1589]: time="2026-03-06T01:45:17.630640360Z" level=info msg="StartContainer for \"82dde11e50cdd5da8dbec48b02a78294ca2e023eac931130c8e7b7774f9c5adb\" returns successfully" Mar 6 01:45:17.637717 containerd[1589]: time="2026-03-06T01:45:17.637625395Z" level=info msg="CreateContainer within sandbox \"341ef5c253d060cb39fab8554a70b58f04818b1422fc32e8037bfc2fc00375b6\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 6 01:45:17.669251 containerd[1589]: time="2026-03-06T01:45:17.669127038Z" level=info msg="CreateContainer within sandbox \"341ef5c253d060cb39fab8554a70b58f04818b1422fc32e8037bfc2fc00375b6\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"ca874aad50cbcf0b4a4f7a77c0ad693b8fe5cdc7f6088d0ba7aea4c62af4a14a\"" Mar 6 01:45:17.670019 containerd[1589]: time="2026-03-06T01:45:17.669912430Z" level=info msg="StartContainer for \"ca874aad50cbcf0b4a4f7a77c0ad693b8fe5cdc7f6088d0ba7aea4c62af4a14a\"" Mar 6 01:45:17.808323 containerd[1589]: time="2026-03-06T01:45:17.808215858Z" level=info msg="StartContainer for \"ca874aad50cbcf0b4a4f7a77c0ad693b8fe5cdc7f6088d0ba7aea4c62af4a14a\" returns successfully" Mar 6 01:45:18.167489 containerd[1589]: time="2026-03-06T01:45:18.167368244Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:45:18.168521 containerd[1589]: time="2026-03-06T01:45:18.168416394Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Mar 6 01:45:18.171483 containerd[1589]: time="2026-03-06T01:45:18.170126488Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:45:18.174398 containerd[1589]: time="2026-03-06T01:45:18.174328399Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 771.30665ms" Mar 6 01:45:18.174496 containerd[1589]: time="2026-03-06T01:45:18.174400333Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Mar 6 01:45:18.175044 containerd[1589]: time="2026-03-06T01:45:18.174961669Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 01:45:18.179308 containerd[1589]: time="2026-03-06T01:45:18.179257402Z" level=info msg="CreateContainer within sandbox \"018d17924c11219bae1b3d8447dae3af7fed408777ab9b36bed992dfd49599fc\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 6 01:45:18.196044 containerd[1589]: time="2026-03-06T01:45:18.195964355Z" level=info msg="CreateContainer within sandbox \"018d17924c11219bae1b3d8447dae3af7fed408777ab9b36bed992dfd49599fc\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"156c22b464499eac482e8bd4df852f385ead9e4db4e36f2740005bc491ed1ec6\"" Mar 6 01:45:18.197182 containerd[1589]: time="2026-03-06T01:45:18.197153278Z" level=info msg="StartContainer for \"156c22b464499eac482e8bd4df852f385ead9e4db4e36f2740005bc491ed1ec6\"" Mar 6 01:45:18.284079 containerd[1589]: time="2026-03-06T01:45:18.284007082Z" level=info msg="StartContainer for \"156c22b464499eac482e8bd4df852f385ead9e4db4e36f2740005bc491ed1ec6\" returns successfully" Mar 6 01:45:18.401244 kubelet[2674]: I0306 01:45:18.401135 2674 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 6 01:45:18.402199 kubelet[2674]: I0306 01:45:18.402144 2674 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 6 01:45:18.597026 kubelet[2674]: I0306 01:45:18.596980 2674 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 6 01:45:18.607035 kubelet[2674]: I0306 01:45:18.606937 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-rb5xg" podStartSLOduration=13.400560975 podStartE2EDuration="22.606924458s" podCreationTimestamp="2026-03-06 01:44:56 +0000 UTC" firstStartedPulling="2026-03-06 01:45:08.968800173 +0000 UTC m=+28.035528310" lastFinishedPulling="2026-03-06 01:45:18.175163656 +0000 UTC m=+37.241891793" observedRunningTime="2026-03-06 01:45:18.605564405 +0000 UTC m=+37.672292542" watchObservedRunningTime="2026-03-06 01:45:18.606924458 +0000 UTC m=+37.673652595" Mar 6 01:45:18.616242 kubelet[2674]: I0306 01:45:18.616172 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7d696fd6c7-mht4r" podStartSLOduration=2.616159658 podStartE2EDuration="2.616159658s" podCreationTimestamp="2026-03-06 01:45:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 01:45:18.6147947 +0000 UTC m=+37.681522838" watchObservedRunningTime="2026-03-06 01:45:18.616159658 +0000 UTC m=+37.682887795" Mar 6 01:45:19.209119 systemd-networkd[1250]: cali0019ddf7ca9: Gained IPv6LL Mar 6 01:45:23.956872 kubelet[2674]: I0306 01:45:23.956144 2674 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 6 01:45:24.109508 systemd[1]: 
run-containerd-runc-k8s.io-864338c59eb7c26fb89a4f8ec0bd00df9b491172e530bf646fb5532603d18f45-runc.eHjmJG.mount: Deactivated successfully. Mar 6 01:45:34.303798 kubelet[2674]: I0306 01:45:34.303565 2674 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 6 01:45:36.519743 systemd[1]: Started sshd@7-10.0.0.144:22-10.0.0.1:34888.service - OpenSSH per-connection server daemon (10.0.0.1:34888). Mar 6 01:45:36.596589 sshd[5404]: Accepted publickey for core from 10.0.0.1 port 34888 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M Mar 6 01:45:36.599538 sshd[5404]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:45:36.607428 systemd-logind[1567]: New session 8 of user core. Mar 6 01:45:36.614908 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 6 01:45:36.955985 kubelet[2674]: I0306 01:45:36.955931 2674 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 6 01:45:37.080335 sshd[5404]: pam_unix(sshd:session): session closed for user core Mar 6 01:45:37.085912 systemd[1]: sshd@7-10.0.0.144:22-10.0.0.1:34888.service: Deactivated successfully. Mar 6 01:45:37.091541 systemd-logind[1567]: Session 8 logged out. Waiting for processes to exit. Mar 6 01:45:37.092019 systemd[1]: session-8.scope: Deactivated successfully. Mar 6 01:45:37.094140 systemd-logind[1567]: Removed session 8. Mar 6 01:45:37.399593 kubelet[2674]: I0306 01:45:37.399408 2674 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 6 01:45:37.427120 systemd[1]: run-containerd-runc-k8s.io-bbca54c2ade9e089403ca6bf8fd4a877aff414d79a6db7611412b26ab492aac3-runc.rSqTin.mount: Deactivated successfully. Mar 6 01:45:41.096956 containerd[1589]: time="2026-03-06T01:45:41.096860163Z" level=info msg="StopPodSandbox for \"93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d\"" Mar 6 01:45:41.232975 containerd[1589]: 2026-03-06 01:45:41.160 [WARNING][5487] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d" WorkloadEndpoint="localhost-k8s-whisker--6546896c4d--b56z6-eth0" Mar 6 01:45:41.232975 containerd[1589]: 2026-03-06 01:45:41.160 [INFO][5487] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d" Mar 6 01:45:41.232975 containerd[1589]: 2026-03-06 01:45:41.160 [INFO][5487] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d" iface="eth0" netns="" Mar 6 01:45:41.232975 containerd[1589]: 2026-03-06 01:45:41.160 [INFO][5487] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d" Mar 6 01:45:41.232975 containerd[1589]: 2026-03-06 01:45:41.160 [INFO][5487] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d" Mar 6 01:45:41.232975 containerd[1589]: 2026-03-06 01:45:41.217 [INFO][5496] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d" HandleID="k8s-pod-network.93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d" Workload="localhost-k8s-whisker--6546896c4d--b56z6-eth0" Mar 6 01:45:41.232975 containerd[1589]: 2026-03-06 01:45:41.218 [INFO][5496] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:45:41.232975 containerd[1589]: 2026-03-06 01:45:41.218 [INFO][5496] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:45:41.232975 containerd[1589]: 2026-03-06 01:45:41.225 [WARNING][5496] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d" HandleID="k8s-pod-network.93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d" Workload="localhost-k8s-whisker--6546896c4d--b56z6-eth0" Mar 6 01:45:41.232975 containerd[1589]: 2026-03-06 01:45:41.225 [INFO][5496] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d" HandleID="k8s-pod-network.93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d" Workload="localhost-k8s-whisker--6546896c4d--b56z6-eth0" Mar 6 01:45:41.232975 containerd[1589]: 2026-03-06 01:45:41.227 [INFO][5496] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:45:41.232975 containerd[1589]: 2026-03-06 01:45:41.229 [INFO][5487] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d" Mar 6 01:45:41.233943 containerd[1589]: time="2026-03-06T01:45:41.232974071Z" level=info msg="TearDown network for sandbox \"93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d\" successfully" Mar 6 01:45:41.233943 containerd[1589]: time="2026-03-06T01:45:41.232997906Z" level=info msg="StopPodSandbox for \"93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d\" returns successfully" Mar 6 01:45:41.233943 containerd[1589]: time="2026-03-06T01:45:41.233615342Z" level=info msg="RemovePodSandbox for \"93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d\"" Mar 6 01:45:41.233943 containerd[1589]: time="2026-03-06T01:45:41.233641830Z" level=info msg="Forcibly stopping sandbox \"93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d\"" Mar 6 01:45:41.338531 containerd[1589]: 2026-03-06 01:45:41.287 [WARNING][5515] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d" WorkloadEndpoint="localhost-k8s-whisker--6546896c4d--b56z6-eth0" Mar 6 01:45:41.338531 containerd[1589]: 2026-03-06 01:45:41.287 [INFO][5515] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d" Mar 6 01:45:41.338531 containerd[1589]: 2026-03-06 01:45:41.287 [INFO][5515] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d" iface="eth0" netns="" Mar 6 01:45:41.338531 containerd[1589]: 2026-03-06 01:45:41.287 [INFO][5515] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d" Mar 6 01:45:41.338531 containerd[1589]: 2026-03-06 01:45:41.287 [INFO][5515] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d" Mar 6 01:45:41.338531 containerd[1589]: 2026-03-06 01:45:41.325 [INFO][5524] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d" HandleID="k8s-pod-network.93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d" Workload="localhost-k8s-whisker--6546896c4d--b56z6-eth0" Mar 6 01:45:41.338531 containerd[1589]: 2026-03-06 01:45:41.325 [INFO][5524] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 01:45:41.338531 containerd[1589]: 2026-03-06 01:45:41.325 [INFO][5524] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 01:45:41.338531 containerd[1589]: 2026-03-06 01:45:41.331 [WARNING][5524] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d" HandleID="k8s-pod-network.93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d" Workload="localhost-k8s-whisker--6546896c4d--b56z6-eth0" Mar 6 01:45:41.338531 containerd[1589]: 2026-03-06 01:45:41.331 [INFO][5524] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d" HandleID="k8s-pod-network.93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d" Workload="localhost-k8s-whisker--6546896c4d--b56z6-eth0" Mar 6 01:45:41.338531 containerd[1589]: 2026-03-06 01:45:41.333 [INFO][5524] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 01:45:41.338531 containerd[1589]: 2026-03-06 01:45:41.335 [INFO][5515] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d" Mar 6 01:45:41.338931 containerd[1589]: time="2026-03-06T01:45:41.338552545Z" level=info msg="TearDown network for sandbox \"93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d\" successfully" Mar 6 01:45:41.354905 containerd[1589]: time="2026-03-06T01:45:41.353840903Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 6 01:45:41.354905 containerd[1589]: time="2026-03-06T01:45:41.354054801Z" level=info msg="RemovePodSandbox \"93824f3eba6c598d5169c040929c69897eab3645eee2b38d15c47356a967e23d\" returns successfully" Mar 6 01:45:42.091788 systemd[1]: Started sshd@8-10.0.0.144:22-10.0.0.1:38486.service - OpenSSH per-connection server daemon (10.0.0.1:38486). Mar 6 01:45:42.160520 sshd[5531]: Accepted publickey for core from 10.0.0.1 port 38486 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M Mar 6 01:45:42.163134 sshd[5531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:45:42.168980 systemd-logind[1567]: New session 9 of user core. Mar 6 01:45:42.183963 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 6 01:45:42.378644 sshd[5531]: pam_unix(sshd:session): session closed for user core Mar 6 01:45:42.383947 systemd[1]: sshd@8-10.0.0.144:22-10.0.0.1:38486.service: Deactivated successfully. Mar 6 01:45:42.387258 systemd-logind[1567]: Session 9 logged out. Waiting for processes to exit. Mar 6 01:45:42.387354 systemd[1]: session-9.scope: Deactivated successfully. Mar 6 01:45:42.389348 systemd-logind[1567]: Removed session 9. Mar 6 01:45:46.634779 kubelet[2674]: I0306 01:45:46.634698 2674 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 6 01:45:47.389821 systemd[1]: Started sshd@9-10.0.0.144:22-10.0.0.1:38490.service - OpenSSH per-connection server daemon (10.0.0.1:38490). Mar 6 01:45:47.421811 sshd[5551]: Accepted publickey for core from 10.0.0.1 port 38490 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M Mar 6 01:45:47.423523 sshd[5551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:45:47.429209 systemd-logind[1567]: New session 10 of user core. Mar 6 01:45:47.441887 systemd[1]: Started session-10.scope - Session 10 of User core. 
Mar 6 01:45:47.579830 sshd[5551]: pam_unix(sshd:session): session closed for user core Mar 6 01:45:47.585396 systemd[1]: sshd@9-10.0.0.144:22-10.0.0.1:38490.service: Deactivated successfully. Mar 6 01:45:47.588727 systemd-logind[1567]: Session 10 logged out. Waiting for processes to exit. Mar 6 01:45:47.588765 systemd[1]: session-10.scope: Deactivated successfully. Mar 6 01:45:47.593244 systemd-logind[1567]: Removed session 10. Mar 6 01:45:52.589698 systemd[1]: Started sshd@10-10.0.0.144:22-10.0.0.1:56470.service - OpenSSH per-connection server daemon (10.0.0.1:56470). Mar 6 01:45:52.637909 sshd[5578]: Accepted publickey for core from 10.0.0.1 port 56470 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M Mar 6 01:45:52.639864 sshd[5578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:45:52.645165 systemd-logind[1567]: New session 11 of user core. Mar 6 01:45:52.658676 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 6 01:45:52.802500 sshd[5578]: pam_unix(sshd:session): session closed for user core Mar 6 01:45:52.807738 systemd[1]: sshd@10-10.0.0.144:22-10.0.0.1:56470.service: Deactivated successfully. Mar 6 01:45:52.811216 systemd-logind[1567]: Session 11 logged out. Waiting for processes to exit. Mar 6 01:45:52.811256 systemd[1]: session-11.scope: Deactivated successfully. Mar 6 01:45:52.813075 systemd-logind[1567]: Removed session 11. Mar 6 01:45:54.099820 systemd[1]: run-containerd-runc-k8s.io-864338c59eb7c26fb89a4f8ec0bd00df9b491172e530bf646fb5532603d18f45-runc.Nean6y.mount: Deactivated successfully. Mar 6 01:45:57.812771 systemd[1]: Started sshd@11-10.0.0.144:22-10.0.0.1:56484.service - OpenSSH per-connection server daemon (10.0.0.1:56484). Mar 6 01:45:57.862513 sshd[5624]: Accepted publickey for core from 10.0.0.1 port 56484 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M Mar 6 01:45:57.864758 sshd[5624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:45:57.870652 systemd-logind[1567]: New session 12 of user core. Mar 6 01:45:57.876113 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 6 01:45:58.069931 sshd[5624]: pam_unix(sshd:session): session closed for user core Mar 6 01:45:58.074669 systemd[1]: sshd@11-10.0.0.144:22-10.0.0.1:56484.service: Deactivated successfully. Mar 6 01:45:58.078933 systemd-logind[1567]: Session 12 logged out. Waiting for processes to exit. Mar 6 01:45:58.080018 systemd[1]: session-12.scope: Deactivated successfully. Mar 6 01:45:58.081959 systemd-logind[1567]: Removed session 12. Mar 6 01:46:03.079884 systemd[1]: Started sshd@12-10.0.0.144:22-10.0.0.1:40750.service - OpenSSH per-connection server daemon (10.0.0.1:40750). Mar 6 01:46:03.125521 sshd[5677]: Accepted publickey for core from 10.0.0.1 port 40750 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M Mar 6 01:46:03.128146 sshd[5677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:46:03.133643 systemd-logind[1567]: New session 13 of user core. Mar 6 01:46:03.143817 systemd[1]: Started session-13.scope - Session 13 of User core. 
Mar 6 01:46:03.206622 kubelet[2674]: E0306 01:46:03.206497 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:46:03.307127 sshd[5677]: pam_unix(sshd:session): session closed for user core Mar 6 01:46:03.317060 systemd[1]: Started sshd@13-10.0.0.144:22-10.0.0.1:40754.service - OpenSSH per-connection server daemon (10.0.0.1:40754). Mar 6 01:46:03.319415 systemd[1]: sshd@12-10.0.0.144:22-10.0.0.1:40750.service: Deactivated successfully. Mar 6 01:46:03.324342 systemd[1]: session-13.scope: Deactivated successfully. Mar 6 01:46:03.327333 systemd-logind[1567]: Session 13 logged out. Waiting for processes to exit. Mar 6 01:46:03.329714 systemd-logind[1567]: Removed session 13. Mar 6 01:46:03.356956 sshd[5691]: Accepted publickey for core from 10.0.0.1 port 40754 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M Mar 6 01:46:03.360116 sshd[5691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:46:03.368116 systemd-logind[1567]: New session 14 of user core. Mar 6 01:46:03.382072 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 6 01:46:03.610002 sshd[5691]: pam_unix(sshd:session): session closed for user core Mar 6 01:46:03.619113 systemd[1]: Started sshd@14-10.0.0.144:22-10.0.0.1:40764.service - OpenSSH per-connection server daemon (10.0.0.1:40764). Mar 6 01:46:03.619775 systemd[1]: sshd@13-10.0.0.144:22-10.0.0.1:40754.service: Deactivated successfully. Mar 6 01:46:03.634070 systemd[1]: session-14.scope: Deactivated successfully. Mar 6 01:46:03.635672 systemd-logind[1567]: Session 14 logged out. Waiting for processes to exit. Mar 6 01:46:03.639389 systemd-logind[1567]: Removed session 14. Mar 6 01:46:03.708565 sshd[5705]: Accepted publickey for core from 10.0.0.1 port 40764 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M Mar 6 01:46:03.711153 sshd[5705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:46:03.718772 systemd-logind[1567]: New session 15 of user core. Mar 6 01:46:03.729000 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 6 01:46:03.883581 sshd[5705]: pam_unix(sshd:session): session closed for user core Mar 6 01:46:03.888747 systemd[1]: sshd@14-10.0.0.144:22-10.0.0.1:40764.service: Deactivated successfully. Mar 6 01:46:03.892014 systemd-logind[1567]: Session 15 logged out. Waiting for processes to exit. Mar 6 01:46:03.892116 systemd[1]: session-15.scope: Deactivated successfully. Mar 6 01:46:03.894178 systemd-logind[1567]: Removed session 15. Mar 6 01:46:05.199263 kubelet[2674]: E0306 01:46:05.199182 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:46:07.514928 systemd[1]: run-containerd-runc-k8s.io-bbca54c2ade9e089403ca6bf8fd4a877aff414d79a6db7611412b26ab492aac3-runc.qpYREx.mount: Deactivated successfully. Mar 6 01:46:08.893822 systemd[1]: Started sshd@15-10.0.0.144:22-10.0.0.1:40770.service - OpenSSH per-connection server daemon (10.0.0.1:40770). Mar 6 01:46:08.934313 sshd[5765]: Accepted publickey for core from 10.0.0.1 port 40770 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M Mar 6 01:46:08.936853 sshd[5765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:46:08.942129 systemd-logind[1567]: New session 16 of user core. 
Mar 6 01:46:08.949224 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 6 01:46:09.090794 sshd[5765]: pam_unix(sshd:session): session closed for user core Mar 6 01:46:09.113930 systemd[1]: Started sshd@16-10.0.0.144:22-10.0.0.1:40774.service - OpenSSH per-connection server daemon (10.0.0.1:40774). Mar 6 01:46:09.114843 systemd[1]: sshd@15-10.0.0.144:22-10.0.0.1:40770.service: Deactivated successfully. Mar 6 01:46:09.118020 systemd[1]: session-16.scope: Deactivated successfully. Mar 6 01:46:09.120331 systemd-logind[1567]: Session 16 logged out. Waiting for processes to exit. Mar 6 01:46:09.121828 systemd-logind[1567]: Removed session 16. Mar 6 01:46:09.157761 sshd[5779]: Accepted publickey for core from 10.0.0.1 port 40774 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M Mar 6 01:46:09.159837 sshd[5779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:46:09.165989 systemd-logind[1567]: New session 17 of user core. Mar 6 01:46:09.176113 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 6 01:46:09.511615 sshd[5779]: pam_unix(sshd:session): session closed for user core Mar 6 01:46:09.521712 systemd[1]: Started sshd@17-10.0.0.144:22-10.0.0.1:40782.service - OpenSSH per-connection server daemon (10.0.0.1:40782). Mar 6 01:46:09.522311 systemd[1]: sshd@16-10.0.0.144:22-10.0.0.1:40774.service: Deactivated successfully. Mar 6 01:46:09.526149 systemd-logind[1567]: Session 17 logged out. Waiting for processes to exit. Mar 6 01:46:09.526335 systemd[1]: session-17.scope: Deactivated successfully. Mar 6 01:46:09.527975 systemd-logind[1567]: Removed session 17. Mar 6 01:46:09.573766 sshd[5792]: Accepted publickey for core from 10.0.0.1 port 40782 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M Mar 6 01:46:09.576291 sshd[5792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:46:09.581532 systemd-logind[1567]: New session 18 of user core. Mar 6 01:46:09.591755 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 6 01:46:10.266991 sshd[5792]: pam_unix(sshd:session): session closed for user core Mar 6 01:46:10.279912 systemd[1]: Started sshd@18-10.0.0.144:22-10.0.0.1:40796.service - OpenSSH per-connection server daemon (10.0.0.1:40796). Mar 6 01:46:10.283863 systemd[1]: sshd@17-10.0.0.144:22-10.0.0.1:40782.service: Deactivated successfully. Mar 6 01:46:10.299103 systemd[1]: session-18.scope: Deactivated successfully. Mar 6 01:46:10.304417 systemd-logind[1567]: Session 18 logged out. Waiting for processes to exit. Mar 6 01:46:10.310609 systemd-logind[1567]: Removed session 18. Mar 6 01:46:10.349915 sshd[5818]: Accepted publickey for core from 10.0.0.1 port 40796 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M Mar 6 01:46:10.352128 sshd[5818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:46:10.358514 systemd-logind[1567]: New session 19 of user core. Mar 6 01:46:10.365033 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 6 01:46:10.711952 sshd[5818]: pam_unix(sshd:session): session closed for user core Mar 6 01:46:10.726284 systemd[1]: Started sshd@19-10.0.0.144:22-10.0.0.1:40806.service - OpenSSH per-connection server daemon (10.0.0.1:40806). Mar 6 01:46:10.727358 systemd[1]: sshd@18-10.0.0.144:22-10.0.0.1:40796.service: Deactivated successfully. Mar 6 01:46:10.731344 systemd[1]: session-19.scope: Deactivated successfully. Mar 6 01:46:10.732950 systemd-logind[1567]: Session 19 logged out. 
Waiting for processes to exit. Mar 6 01:46:10.738146 systemd-logind[1567]: Removed session 19. Mar 6 01:46:10.799921 sshd[5836]: Accepted publickey for core from 10.0.0.1 port 40806 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M Mar 6 01:46:10.800706 sshd[5836]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:46:10.806405 systemd-logind[1567]: New session 20 of user core. Mar 6 01:46:10.813802 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 6 01:46:10.940340 sshd[5836]: pam_unix(sshd:session): session closed for user core Mar 6 01:46:10.945268 systemd[1]: sshd@19-10.0.0.144:22-10.0.0.1:40806.service: Deactivated successfully. Mar 6 01:46:10.948718 systemd[1]: session-20.scope: Deactivated successfully. Mar 6 01:46:10.950201 systemd-logind[1567]: Session 20 logged out. Waiting for processes to exit. Mar 6 01:46:10.951807 systemd-logind[1567]: Removed session 20. Mar 6 01:46:15.199628 kubelet[2674]: E0306 01:46:15.199555 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:46:15.952993 systemd[1]: Started sshd@20-10.0.0.144:22-10.0.0.1:53764.service - OpenSSH per-connection server daemon (10.0.0.1:53764). Mar 6 01:46:15.993867 sshd[5855]: Accepted publickey for core from 10.0.0.1 port 53764 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M Mar 6 01:46:15.996486 sshd[5855]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:46:16.003321 systemd-logind[1567]: New session 21 of user core. Mar 6 01:46:16.012068 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 6 01:46:16.156950 sshd[5855]: pam_unix(sshd:session): session closed for user core Mar 6 01:46:16.161380 systemd[1]: sshd@20-10.0.0.144:22-10.0.0.1:53764.service: Deactivated successfully. Mar 6 01:46:16.166070 systemd-logind[1567]: Session 21 logged out. Waiting for processes to exit. Mar 6 01:46:16.166567 systemd[1]: session-21.scope: Deactivated successfully. Mar 6 01:46:16.167920 systemd-logind[1567]: Removed session 21. Mar 6 01:46:17.199693 kubelet[2674]: E0306 01:46:17.199581 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:46:20.200351 kubelet[2674]: E0306 01:46:20.199850 2674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 6 01:46:21.166819 systemd[1]: Started sshd@21-10.0.0.144:22-10.0.0.1:53772.service - OpenSSH per-connection server daemon (10.0.0.1:53772). Mar 6 01:46:21.235243 sshd[5872]: Accepted publickey for core from 10.0.0.1 port 53772 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M Mar 6 01:46:21.237838 sshd[5872]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:46:21.243746 systemd-logind[1567]: New session 22 of user core. Mar 6 01:46:21.252159 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 6 01:46:21.425299 sshd[5872]: pam_unix(sshd:session): session closed for user core Mar 6 01:46:21.430296 systemd[1]: sshd@21-10.0.0.144:22-10.0.0.1:53772.service: Deactivated successfully. Mar 6 01:46:21.433564 systemd[1]: session-22.scope: Deactivated successfully. Mar 6 01:46:21.433745 systemd-logind[1567]: Session 22 logged out. 
Waiting for processes to exit. Mar 6 01:46:21.435729 systemd-logind[1567]: Removed session 22. Mar 6 01:46:24.103215 systemd[1]: run-containerd-runc-k8s.io-864338c59eb7c26fb89a4f8ec0bd00df9b491172e530bf646fb5532603d18f45-runc.za5Gu2.mount: Deactivated successfully. Mar 6 01:46:26.446919 systemd[1]: Started sshd@22-10.0.0.144:22-10.0.0.1:33506.service - OpenSSH per-connection server daemon (10.0.0.1:33506). Mar 6 01:46:26.495981 sshd[5909]: Accepted publickey for core from 10.0.0.1 port 33506 ssh2: RSA SHA256:po+n4m2L0Y6JnDj1VTc5p26N9zFlj54R7gCeXzXqR3M Mar 6 01:46:26.498636 sshd[5909]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 01:46:26.505391 systemd-logind[1567]: New session 23 of user core. Mar 6 01:46:26.512975 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 6 01:46:26.690185 sshd[5909]: pam_unix(sshd:session): session closed for user core Mar 6 01:46:26.696811 systemd[1]: sshd@22-10.0.0.144:22-10.0.0.1:33506.service: Deactivated successfully. Mar 6 01:46:26.700821 systemd[1]: session-23.scope: Deactivated successfully. Mar 6 01:46:26.703950 systemd-logind[1567]: Session 23 logged out. Waiting for processes to exit. Mar 6 01:46:26.706694 systemd-logind[1567]: Removed session 23.