Mar 4 01:10:17.806185 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Mar 3 22:42:33 -00 2026
Mar 4 01:10:17.806271 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cfbb17c272ffeca64391861cc763ec4868ca597850b31cbd6ed67c590a72edc7
Mar 4 01:10:17.806292 kernel: BIOS-provided physical RAM map:
Mar 4 01:10:17.806301 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 4 01:10:17.806310 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 4 01:10:17.806318 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 4 01:10:17.806328 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Mar 4 01:10:17.806338 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Mar 4 01:10:17.806347 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 4 01:10:17.806359 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 4 01:10:17.806369 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 4 01:10:17.806378 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 4 01:10:17.806387 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 4 01:10:17.806396 kernel: NX (Execute Disable) protection: active
Mar 4 01:10:17.806407 kernel: APIC: Static calls initialized
Mar 4 01:10:17.806421 kernel: SMBIOS 2.8 present.
Mar 4 01:10:17.806431 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Mar 4 01:10:17.806441 kernel: Hypervisor detected: KVM
Mar 4 01:10:17.806512 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 4 01:10:17.806522 kernel: kvm-clock: using sched offset of 5791072077 cycles
Mar 4 01:10:17.806533 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 4 01:10:17.806543 kernel: tsc: Detected 2445.426 MHz processor
Mar 4 01:10:17.806553 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 4 01:10:17.806564 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 4 01:10:17.806580 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Mar 4 01:10:17.806591 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 4 01:10:17.806601 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 4 01:10:17.806612 kernel: Using GB pages for direct mapping
Mar 4 01:10:17.806622 kernel: ACPI: Early table checksum verification disabled
Mar 4 01:10:17.806631 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Mar 4 01:10:17.806642 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 4 01:10:17.806652 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 4 01:10:17.806662 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 4 01:10:17.806675 kernel: ACPI: FACS 0x000000009CFE0000 000040
Mar 4 01:10:17.806685 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 4 01:10:17.806695 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 4 01:10:17.806705 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 4 01:10:17.806715 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 4 01:10:17.806725 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Mar 4 01:10:17.806735 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Mar 4 01:10:17.806751 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Mar 4 01:10:17.806765 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Mar 4 01:10:17.806775 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Mar 4 01:10:17.806785 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Mar 4 01:10:17.806795 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Mar 4 01:10:17.806806 kernel: No NUMA configuration found
Mar 4 01:10:17.806816 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Mar 4 01:10:17.806832 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Mar 4 01:10:17.806843 kernel: Zone ranges:
Mar 4 01:10:17.806854 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 4 01:10:17.806865 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Mar 4 01:10:17.806875 kernel: Normal empty
Mar 4 01:10:17.806884 kernel: Movable zone start for each node
Mar 4 01:10:17.806894 kernel: Early memory node ranges
Mar 4 01:10:17.806904 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 4 01:10:17.806914 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Mar 4 01:10:17.806925 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Mar 4 01:10:17.806940 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 4 01:10:17.806950 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 4 01:10:17.806960 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Mar 4 01:10:17.806971 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 4 01:10:17.806982 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 4 01:10:17.806993 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 4 01:10:17.807003 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 4 01:10:17.807014 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 4 01:10:17.807024 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 4 01:10:17.807039 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 4 01:10:17.807049 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 4 01:10:17.807060 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 4 01:10:17.807070 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 4 01:10:17.807081 kernel: TSC deadline timer available
Mar 4 01:10:17.807092 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Mar 4 01:10:17.807102 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 4 01:10:17.807112 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 4 01:10:17.807122 kernel: kvm-guest: setup PV sched yield
Mar 4 01:10:17.807137 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 4 01:10:17.807147 kernel: Booting paravirtualized kernel on KVM
Mar 4 01:10:17.807158 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 4 01:10:17.807168 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 4 01:10:17.807179 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Mar 4 01:10:17.807190 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Mar 4 01:10:17.807200 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 4 01:10:17.807211 kernel: kvm-guest: PV spinlocks enabled
Mar 4 01:10:17.807268 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 4 01:10:17.807292 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cfbb17c272ffeca64391861cc763ec4868ca597850b31cbd6ed67c590a72edc7
Mar 4 01:10:17.807303 kernel: random: crng init done
Mar 4 01:10:17.807313 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 4 01:10:17.807325 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 4 01:10:17.807335 kernel: Fallback order for Node 0: 0
Mar 4 01:10:17.807345 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Mar 4 01:10:17.807355 kernel: Policy zone: DMA32
Mar 4 01:10:17.807366 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 4 01:10:17.807381 kernel: Memory: 2434608K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 136884K reserved, 0K cma-reserved)
Mar 4 01:10:17.807391 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 4 01:10:17.807401 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 4 01:10:17.807412 kernel: ftrace: allocated 149 pages with 4 groups
Mar 4 01:10:17.807423 kernel: Dynamic Preempt: voluntary
Mar 4 01:10:17.807434 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 4 01:10:17.807502 kernel: rcu: RCU event tracing is enabled.
Mar 4 01:10:17.807517 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 4 01:10:17.807528 kernel: Trampoline variant of Tasks RCU enabled.
Mar 4 01:10:17.807544 kernel: Rude variant of Tasks RCU enabled.
Mar 4 01:10:17.807555 kernel: Tracing variant of Tasks RCU enabled.
Mar 4 01:10:17.807566 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 4 01:10:17.807576 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 4 01:10:17.807587 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 4 01:10:17.807598 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 4 01:10:17.807609 kernel: Console: colour VGA+ 80x25
Mar 4 01:10:17.807619 kernel: printk: console [ttyS0] enabled
Mar 4 01:10:17.807630 kernel: ACPI: Core revision 20230628
Mar 4 01:10:17.807647 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 4 01:10:17.807658 kernel: APIC: Switch to symmetric I/O mode setup
Mar 4 01:10:17.807669 kernel: x2apic enabled
Mar 4 01:10:17.807680 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 4 01:10:17.807691 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 4 01:10:17.807701 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 4 01:10:17.807712 kernel: kvm-guest: setup PV IPIs
Mar 4 01:10:17.807723 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 4 01:10:17.807752 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 4 01:10:17.807765 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Mar 4 01:10:17.807776 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 4 01:10:17.807788 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 4 01:10:17.807803 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 4 01:10:17.807816 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 4 01:10:17.807828 kernel: Spectre V2 : Mitigation: Retpolines
Mar 4 01:10:17.807839 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 4 01:10:17.807850 kernel: Speculative Store Bypass: Vulnerable
Mar 4 01:10:17.807866 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 4 01:10:17.807878 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 4 01:10:17.807890 kernel: active return thunk: srso_alias_return_thunk
Mar 4 01:10:17.807902 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 4 01:10:17.807913 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 4 01:10:17.807925 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 4 01:10:17.807937 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 4 01:10:17.807948 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 4 01:10:17.807965 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 4 01:10:17.807977 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 4 01:10:17.807989 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 4 01:10:17.808001 kernel: Freeing SMP alternatives memory: 32K
Mar 4 01:10:17.808013 kernel: pid_max: default: 32768 minimum: 301
Mar 4 01:10:17.808026 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 4 01:10:17.808038 kernel: landlock: Up and running.
Mar 4 01:10:17.808050 kernel: SELinux: Initializing.
Mar 4 01:10:17.808063 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 4 01:10:17.808080 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 4 01:10:17.808092 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 4 01:10:17.808103 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 4 01:10:17.808114 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 4 01:10:17.808126 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 4 01:10:17.808138 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Mar 4 01:10:17.808149 kernel: signal: max sigframe size: 1776
Mar 4 01:10:17.808161 kernel: rcu: Hierarchical SRCU implementation.
Mar 4 01:10:17.808174 kernel: rcu: Max phase no-delay instances is 400.
Mar 4 01:10:17.808190 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 4 01:10:17.808202 kernel: smp: Bringing up secondary CPUs ...
Mar 4 01:10:17.808214 kernel: smpboot: x86: Booting SMP configuration:
Mar 4 01:10:17.813196 kernel: .... node #0, CPUs: #1 #2 #3
Mar 4 01:10:17.813214 kernel: smp: Brought up 1 node, 4 CPUs
Mar 4 01:10:17.813272 kernel: smpboot: Max logical packages: 1
Mar 4 01:10:17.813285 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Mar 4 01:10:17.813297 kernel: devtmpfs: initialized
Mar 4 01:10:17.813309 kernel: x86/mm: Memory block size: 128MB
Mar 4 01:10:17.813336 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 4 01:10:17.813348 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 4 01:10:17.813360 kernel: pinctrl core: initialized pinctrl subsystem
Mar 4 01:10:17.813373 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 4 01:10:17.813385 kernel: audit: initializing netlink subsys (disabled)
Mar 4 01:10:17.813398 kernel: audit: type=2000 audit(1772586614.049:1): state=initialized audit_enabled=0 res=1
Mar 4 01:10:17.813410 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 4 01:10:17.813422 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 4 01:10:17.813435 kernel: cpuidle: using governor menu
Mar 4 01:10:17.813522 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 4 01:10:17.813536 kernel: dca service started, version 1.12.1
Mar 4 01:10:17.813549 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 4 01:10:17.813562 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 4 01:10:17.813574 kernel: PCI: Using configuration type 1 for base access
Mar 4 01:10:17.813586 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 4 01:10:17.813599 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 4 01:10:17.813610 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 4 01:10:17.813623 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 4 01:10:17.813642 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 4 01:10:17.813654 kernel: ACPI: Added _OSI(Module Device)
Mar 4 01:10:17.813666 kernel: ACPI: Added _OSI(Processor Device)
Mar 4 01:10:17.813679 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 4 01:10:17.813691 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 4 01:10:17.813703 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 4 01:10:17.813715 kernel: ACPI: Interpreter enabled
Mar 4 01:10:17.813726 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 4 01:10:17.813738 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 4 01:10:17.813758 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 4 01:10:17.813771 kernel: PCI: Using E820 reservations for host bridge windows
Mar 4 01:10:17.813783 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 4 01:10:17.813796 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 4 01:10:17.814150 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 4 01:10:17.814414 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 4 01:10:17.814697 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 4 01:10:17.814726 kernel: PCI host bridge to bus 0000:00
Mar 4 01:10:17.814928 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 4 01:10:17.815105 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 4 01:10:17.815326 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 4 01:10:17.815582 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Mar 4 01:10:17.815756 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 4 01:10:17.815933 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Mar 4 01:10:17.816124 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 4 01:10:17.819666 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 4 01:10:17.819881 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Mar 4 01:10:17.820066 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Mar 4 01:10:17.820310 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Mar 4 01:10:17.820587 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Mar 4 01:10:17.820783 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 4 01:10:17.821005 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Mar 4 01:10:17.821200 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Mar 4 01:10:17.821438 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Mar 4 01:10:17.821698 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Mar 4 01:10:17.821906 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Mar 4 01:10:17.822111 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Mar 4 01:10:17.822372 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Mar 4 01:10:17.822640 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Mar 4 01:10:17.822847 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 4 01:10:17.823033 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Mar 4 01:10:17.824106 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Mar 4 01:10:17.824366 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Mar 4 01:10:17.824656 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Mar 4 01:10:17.824871 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 4 01:10:17.825060 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 4 01:10:17.825299 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0x180 took 11718 usecs
Mar 4 01:10:17.825573 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 4 01:10:17.825766 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Mar 4 01:10:17.825961 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Mar 4 01:10:17.826181 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 4 01:10:17.826435 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Mar 4 01:10:17.826515 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 4 01:10:17.826529 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 4 01:10:17.826541 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 4 01:10:17.826552 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 4 01:10:17.826563 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 4 01:10:17.826575 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 4 01:10:17.826587 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 4 01:10:17.826604 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 4 01:10:17.826616 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 4 01:10:17.826627 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 4 01:10:17.826639 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 4 01:10:17.826649 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 4 01:10:17.826661 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 4 01:10:17.826672 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 4 01:10:17.826684 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 4 01:10:17.826695 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 4 01:10:17.826711 kernel: iommu: Default domain type: Translated
Mar 4 01:10:17.826722 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 4 01:10:17.826734 kernel: PCI: Using ACPI for IRQ routing
Mar 4 01:10:17.826745 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 4 01:10:17.826756 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 4 01:10:17.826767 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Mar 4 01:10:17.826964 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 4 01:10:17.827161 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 4 01:10:17.829387 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 4 01:10:17.829418 kernel: vgaarb: loaded
Mar 4 01:10:17.829431 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 4 01:10:17.829507 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 4 01:10:17.829526 kernel: clocksource: Switched to clocksource kvm-clock
Mar 4 01:10:17.829539 kernel: VFS: Disk quotas dquot_6.6.0
Mar 4 01:10:17.829552 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 4 01:10:17.829564 kernel: pnp: PnP ACPI init
Mar 4 01:10:17.829791 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 4 01:10:17.829821 kernel: pnp: PnP ACPI: found 6 devices
Mar 4 01:10:17.829835 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 4 01:10:17.829848 kernel: NET: Registered PF_INET protocol family
Mar 4 01:10:17.829861 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 4 01:10:17.829873 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 4 01:10:17.829886 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 4 01:10:17.829898 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 4 01:10:17.829911 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 4 01:10:17.829923 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 4 01:10:17.829942 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 4 01:10:17.829954 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 4 01:10:17.829967 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 4 01:10:17.829979 kernel: NET: Registered PF_XDP protocol family
Mar 4 01:10:17.830176 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 4 01:10:17.830412 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 4 01:10:17.830651 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 4 01:10:17.830823 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Mar 4 01:10:17.830995 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 4 01:10:17.831159 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Mar 4 01:10:17.831177 kernel: PCI: CLS 0 bytes, default 64
Mar 4 01:10:17.831190 kernel: Initialise system trusted keyrings
Mar 4 01:10:17.831202 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 4 01:10:17.831212 kernel: Key type asymmetric registered
Mar 4 01:10:17.831270 kernel: Asymmetric key parser 'x509' registered
Mar 4 01:10:17.831282 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 4 01:10:17.831294 kernel: io scheduler mq-deadline registered
Mar 4 01:10:17.831310 kernel: io scheduler kyber registered
Mar 4 01:10:17.831320 kernel: io scheduler bfq registered
Mar 4 01:10:17.831332 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 4 01:10:17.831344 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 4 01:10:17.831355 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 4 01:10:17.831367 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 4 01:10:17.831378 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 4 01:10:17.831390 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 4 01:10:17.831401 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 4 01:10:17.831416 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 4 01:10:17.831427 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 4 01:10:17.831684 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 4 01:10:17.831706 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 4 01:10:17.831880 kernel: rtc_cmos 00:04: registered as rtc0
Mar 4 01:10:17.832048 kernel: rtc_cmos 00:04: setting system clock to 2026-03-04T01:10:16 UTC (1772586616)
Mar 4 01:10:17.834798 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 4 01:10:17.834825 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 4 01:10:17.834843 kernel: NET: Registered PF_INET6 protocol family
Mar 4 01:10:17.834855 kernel: Segment Routing with IPv6
Mar 4 01:10:17.834865 kernel: In-situ OAM (IOAM) with IPv6
Mar 4 01:10:17.834875 kernel: NET: Registered PF_PACKET protocol family
Mar 4 01:10:17.834886 kernel: Key type dns_resolver registered
Mar 4 01:10:17.834896 kernel: IPI shorthand broadcast: enabled
Mar 4 01:10:17.834906 kernel: sched_clock: Marking stable (1942020785, 1099469897)->(3940676867, -899186185)
Mar 4 01:10:17.834917 kernel: registered taskstats version 1
Mar 4 01:10:17.834927 kernel: Loading compiled-in X.509 certificates
Mar 4 01:10:17.834941 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: be1dcbe3e3dee66976c19d61f4b179b405e1c498'
Mar 4 01:10:17.834951 kernel: Key type .fscrypt registered
Mar 4 01:10:17.834961 kernel: Key type fscrypt-provisioning registered
Mar 4 01:10:17.834971 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 4 01:10:17.834982 kernel: ima: Allocated hash algorithm: sha1
Mar 4 01:10:17.834992 kernel: ima: No architecture policies found
Mar 4 01:10:17.835002 kernel: clk: Disabling unused clocks
Mar 4 01:10:17.835013 kernel: Freeing unused kernel image (initmem) memory: 42892K
Mar 4 01:10:17.836174 kernel: Write protecting the kernel read-only data: 36864k
Mar 4 01:10:17.836207 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 4 01:10:17.836265 kernel: Run /init as init process
Mar 4 01:10:17.836279 kernel: with arguments:
Mar 4 01:10:17.836291 kernel: /init
Mar 4 01:10:17.836301 kernel: with environment:
Mar 4 01:10:17.836311 kernel: HOME=/
Mar 4 01:10:17.836321 kernel: TERM=linux
Mar 4 01:10:17.836334 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 4 01:10:17.836355 systemd[1]: Detected virtualization kvm.
Mar 4 01:10:17.836367 systemd[1]: Detected architecture x86-64.
Mar 4 01:10:17.836377 systemd[1]: Running in initrd.
Mar 4 01:10:17.836388 systemd[1]: No hostname configured, using default hostname.
Mar 4 01:10:17.836399 systemd[1]: Hostname set to .
Mar 4 01:10:17.836410 systemd[1]: Initializing machine ID from VM UUID.
Mar 4 01:10:17.836421 systemd[1]: Queued start job for default target initrd.target.
Mar 4 01:10:17.836432 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 4 01:10:17.836499 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 4 01:10:17.836512 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 4 01:10:17.836524 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 4 01:10:17.836535 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 4 01:10:17.836546 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 4 01:10:17.836558 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 4 01:10:17.836575 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 4 01:10:17.836586 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 4 01:10:17.836597 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 4 01:10:17.836608 systemd[1]: Reached target paths.target - Path Units.
Mar 4 01:10:17.836619 systemd[1]: Reached target slices.target - Slice Units.
Mar 4 01:10:17.836647 systemd[1]: Reached target swap.target - Swaps.
Mar 4 01:10:17.836662 systemd[1]: Reached target timers.target - Timer Units.
Mar 4 01:10:17.836676 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 4 01:10:17.836688 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 4 01:10:17.836699 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 4 01:10:17.836711 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 4 01:10:17.836722 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 4 01:10:17.836734 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 4 01:10:17.836745 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 4 01:10:17.836757 systemd[1]: Reached target sockets.target - Socket Units.
Mar 4 01:10:17.836771 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 4 01:10:17.836783 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 4 01:10:17.836794 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 4 01:10:17.836805 systemd[1]: Starting systemd-fsck-usr.service...
Mar 4 01:10:17.836819 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 4 01:10:17.836831 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 4 01:10:17.836842 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 4 01:10:17.837422 systemd-journald[193]: Collecting audit messages is disabled.
Mar 4 01:10:17.837511 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 4 01:10:17.837523 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 4 01:10:17.837535 systemd-journald[193]: Journal started
Mar 4 01:10:17.837563 systemd-journald[193]: Runtime Journal (/run/log/journal/35213bf394864ffb937f4a15af5454c3) is 6.0M, max 48.4M, 42.3M free.
Mar 4 01:10:17.866082 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 4 01:10:17.868377 systemd[1]: Finished systemd-fsck-usr.service.
Mar 4 01:10:17.889140 systemd-modules-load[194]: Inserted module 'overlay'
Mar 4 01:10:18.118155 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 4 01:10:18.118189 kernel: Bridge firewalling registered
Mar 4 01:10:17.892159 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 4 01:10:17.901730 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 4 01:10:17.973201 systemd-modules-load[194]: Inserted module 'br_netfilter'
Mar 4 01:10:18.116149 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 4 01:10:18.116829 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 4 01:10:18.171431 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 4 01:10:18.181663 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 4 01:10:18.187507 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 4 01:10:18.202425 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 4 01:10:18.226295 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 4 01:10:18.246420 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 4 01:10:18.269111 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 4 01:10:18.282568 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 4 01:10:18.298569 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 4 01:10:18.304646 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 4 01:10:18.365042 dracut-cmdline[231]: dracut-dracut-053
Mar 4 01:10:18.375426 dracut-cmdline[231]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cfbb17c272ffeca64391861cc763ec4868ca597850b31cbd6ed67c590a72edc7
Mar 4 01:10:18.391720 systemd-resolved[232]: Positive Trust Anchors:
Mar 4 01:10:18.391735 systemd-resolved[232]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 4 01:10:18.391783 systemd-resolved[232]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 4 01:10:18.396606 systemd-resolved[232]: Defaulting to hostname 'linux'.
Mar 4 01:10:18.398900 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 4 01:10:18.403516 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 4 01:10:18.665800 kernel: SCSI subsystem initialized
Mar 4 01:10:18.690701 kernel: Loading iSCSI transport class v2.0-870.
Mar 4 01:10:18.741649 kernel: iscsi: registered transport (tcp)
Mar 4 01:10:18.802938 kernel: iscsi: registered transport (qla4xxx)
Mar 4 01:10:18.803148 kernel: QLogic iSCSI HBA Driver
Mar 4 01:10:18.987725 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 4 01:10:19.014726 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 4 01:10:19.101797 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 4 01:10:19.101971 kernel: device-mapper: uevent: version 1.0.3
Mar 4 01:10:19.103821 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 4 01:10:19.220713 kernel: raid6: avx2x4 gen() 17979 MB/s
Mar 4 01:10:19.241434 kernel: raid6: avx2x2 gen() 17020 MB/s
Mar 4 01:10:19.265526 kernel: raid6: avx2x1 gen() 8315 MB/s
Mar 4 01:10:19.265615 kernel: raid6: using algorithm avx2x4 gen() 17979 MB/s
Mar 4 01:10:19.288215 kernel: raid6: .... xor() 4681 MB/s, rmw enabled
Mar 4 01:10:19.288350 kernel: raid6: using avx2x2 recovery algorithm
Mar 4 01:10:19.322630 kernel: xor: automatically using best checksumming function avx
Mar 4 01:10:19.742932 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 4 01:10:19.775783 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 4 01:10:19.804783 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 4 01:10:19.834912 systemd-udevd[415]: Using default interface naming scheme 'v255'.
Mar 4 01:10:19.855974 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 4 01:10:19.880719 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 4 01:10:19.913838 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation
Mar 4 01:10:19.987215 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 4 01:10:20.011039 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 4 01:10:20.113569 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 4 01:10:20.135079 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 4 01:10:20.166675 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 4 01:10:20.175510 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 4 01:10:20.180334 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 4 01:10:20.188977 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 4 01:10:20.215341 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 4 01:10:20.236607 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Mar 4 01:10:20.241558 kernel: cryptd: max_cpu_qlen set to 1000
Mar 4 01:10:20.251069 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 4 01:10:20.296968 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 4 01:10:20.298727 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 4 01:10:20.298752 kernel: GPT:9289727 != 19775487
Mar 4 01:10:20.298769 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 4 01:10:20.298784 kernel: GPT:9289727 != 19775487
Mar 4 01:10:20.298837 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 4 01:10:20.298857 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 4 01:10:20.251339 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 4 01:10:20.297079 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 4 01:10:20.301233 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 4 01:10:20.301603 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 4 01:10:20.323072 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 4 01:10:20.410141 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (470)
Mar 4 01:10:20.410321 kernel: BTRFS: device fsid 251c1416-ef37-47f1-be3f-832af5870605 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (462)
Mar 4 01:10:20.475018 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 4 01:10:20.487722 kernel: libata version 3.00 loaded.
Mar 4 01:10:20.483036 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 4 01:10:20.544659 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 4 01:10:20.550975 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 4 01:10:20.597095 kernel: AES CTR mode by8 optimization enabled
Mar 4 01:10:20.597173 kernel: ahci 0000:00:1f.2: version 3.0
Mar 4 01:10:20.597578 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 4 01:10:20.603847 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 4 01:10:20.925710 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 4 01:10:20.926239 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 4 01:10:20.926627 kernel: scsi host0: ahci
Mar 4 01:10:20.926900 kernel: scsi host1: ahci
Mar 4 01:10:20.927138 kernel: scsi host2: ahci
Mar 4 01:10:20.927439 kernel: scsi host3: ahci
Mar 4 01:10:20.927767 kernel: scsi host4: ahci
Mar 4 01:10:20.928013 kernel: scsi host5: ahci
Mar 4 01:10:20.928325 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Mar 4 01:10:20.928348 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Mar 4 01:10:20.928366 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Mar 4 01:10:20.928385 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Mar 4 01:10:20.928401 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Mar 4 01:10:20.928416 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Mar 4 01:10:20.941015 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 4 01:10:20.951553 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 4 01:10:20.951641 kernel: ata3.00: applying bridge limits
Mar 4 01:10:20.951713 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 4 01:10:20.966006 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 4 01:10:20.973549 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 4 01:10:20.991228 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 4 01:10:20.991330 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 4 01:10:20.997086 kernel: ata3.00: configured for UDMA/100
Mar 4 01:10:21.001742 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 4 01:10:21.007585 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 4 01:10:21.017597 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 4 01:10:21.019527 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 4 01:10:21.042858 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 4 01:10:21.079772 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 4 01:10:21.086786 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 4 01:10:21.111493 disk-uuid[567]: Primary Header is updated.
Mar 4 01:10:21.111493 disk-uuid[567]: Secondary Entries is updated.
Mar 4 01:10:21.111493 disk-uuid[567]: Secondary Header is updated.
Mar 4 01:10:21.124872 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 4 01:10:21.134642 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 4 01:10:21.135016 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 4 01:10:21.135040 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 4 01:10:21.134940 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 4 01:10:21.156940 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 4 01:10:21.172685 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Mar 4 01:10:22.191981 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 4 01:10:22.195729 disk-uuid[568]: The operation has completed successfully.
Mar 4 01:10:22.284112 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 4 01:10:22.284350 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 4 01:10:22.338572 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 4 01:10:22.353755 sh[594]: Success
Mar 4 01:10:22.389578 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Mar 4 01:10:22.446441 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 4 01:10:22.481120 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 4 01:10:22.487579 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 4 01:10:22.541370 kernel: BTRFS info (device dm-0): first mount of filesystem 251c1416-ef37-47f1-be3f-832af5870605
Mar 4 01:10:22.541507 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 4 01:10:22.541532 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 4 01:10:22.549804 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 4 01:10:22.549877 kernel: BTRFS info (device dm-0): using free space tree
Mar 4 01:10:22.579371 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 4 01:10:22.581116 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 4 01:10:22.593774 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 4 01:10:22.598730 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 4 01:10:22.619189 kernel: BTRFS info (device vda6): first mount of filesystem 71a972ce-abd4-4705-b1cd-2b663b77d747
Mar 4 01:10:22.619366 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 4 01:10:22.619383 kernel: BTRFS info (device vda6): using free space tree
Mar 4 01:10:22.626507 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 4 01:10:22.643576 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 4 01:10:22.667842 kernel: BTRFS info (device vda6): last unmount of filesystem 71a972ce-abd4-4705-b1cd-2b663b77d747
Mar 4 01:10:22.674417 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 4 01:10:22.686760 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 4 01:10:22.778305 ignition[693]: Ignition 2.19.0
Mar 4 01:10:22.778324 ignition[693]: Stage: fetch-offline
Mar 4 01:10:22.778383 ignition[693]: no configs at "/usr/lib/ignition/base.d"
Mar 4 01:10:22.778398 ignition[693]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 4 01:10:22.778621 ignition[693]: parsed url from cmdline: ""
Mar 4 01:10:22.778629 ignition[693]: no config URL provided
Mar 4 01:10:22.778640 ignition[693]: reading system config file "/usr/lib/ignition/user.ign"
Mar 4 01:10:22.778656 ignition[693]: no config at "/usr/lib/ignition/user.ign"
Mar 4 01:10:22.778699 ignition[693]: op(1): [started] loading QEMU firmware config module
Mar 4 01:10:22.778707 ignition[693]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 4 01:10:22.796751 ignition[693]: op(1): [finished] loading QEMU firmware config module
Mar 4 01:10:22.838078 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 4 01:10:22.867009 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 4 01:10:22.905090 systemd-networkd[783]: lo: Link UP
Mar 4 01:10:22.905120 systemd-networkd[783]: lo: Gained carrier
Mar 4 01:10:22.907433 systemd-networkd[783]: Enumeration completed
Mar 4 01:10:22.908580 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 4 01:10:22.910588 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 4 01:10:22.910594 systemd-networkd[783]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 4 01:10:22.912437 systemd-networkd[783]: eth0: Link UP
Mar 4 01:10:22.912488 systemd-networkd[783]: eth0: Gained carrier
Mar 4 01:10:22.912501 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 4 01:10:22.919940 systemd[1]: Reached target network.target - Network.
Mar 4 01:10:22.975735 systemd-networkd[783]: eth0: DHCPv4 address 10.0.0.98/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 4 01:10:23.092931 systemd-resolved[232]: Detected conflict on linux IN A 10.0.0.98
Mar 4 01:10:23.092973 systemd-resolved[232]: Hostname conflict, changing published hostname from 'linux' to 'linux6'.
Mar 4 01:10:23.175879 ignition[693]: parsing config with SHA512: 3dc3827e8aff8a30cf11bfa12cccd2ceecc4a4b62b52b3e80cc3c68e4dc444db50e32c3decfcbe9d586c63c8034cb0b23f6ce9ed5e78cfe8c6029eda226237a0
Mar 4 01:10:23.180179 unknown[693]: fetched base config from "system"
Mar 4 01:10:23.180936 unknown[693]: fetched user config from "qemu"
Mar 4 01:10:23.181742 ignition[693]: fetch-offline: fetch-offline passed
Mar 4 01:10:23.181875 ignition[693]: Ignition finished successfully
Mar 4 01:10:23.199170 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 4 01:10:23.203243 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 4 01:10:23.225155 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 4 01:10:23.247110 ignition[787]: Ignition 2.19.0
Mar 4 01:10:23.247179 ignition[787]: Stage: kargs
Mar 4 01:10:23.247643 ignition[787]: no configs at "/usr/lib/ignition/base.d"
Mar 4 01:10:23.247663 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 4 01:10:23.249088 ignition[787]: kargs: kargs passed
Mar 4 01:10:23.249157 ignition[787]: Ignition finished successfully
Mar 4 01:10:23.285389 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 4 01:10:23.306851 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 4 01:10:23.335663 ignition[795]: Ignition 2.19.0
Mar 4 01:10:23.335708 ignition[795]: Stage: disks
Mar 4 01:10:23.335957 ignition[795]: no configs at "/usr/lib/ignition/base.d"
Mar 4 01:10:23.335976 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 4 01:10:23.337392 ignition[795]: disks: disks passed
Mar 4 01:10:23.337526 ignition[795]: Ignition finished successfully
Mar 4 01:10:23.371861 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 4 01:10:23.383169 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 4 01:10:23.387015 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 4 01:10:23.400615 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 4 01:10:23.404386 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 4 01:10:23.410555 systemd[1]: Reached target basic.target - Basic System.
Mar 4 01:10:23.448910 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 4 01:10:23.509507 systemd-fsck[805]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 4 01:10:23.521118 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 4 01:10:23.551022 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 4 01:10:23.719531 kernel: EXT4-fs (vda9): mounted filesystem 77c4d29a-0423-4e33-8b82-61754d97532c r/w with ordered data mode. Quota mode: none.
Mar 4 01:10:23.720440 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 4 01:10:23.727349 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 4 01:10:23.746697 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 4 01:10:23.752989 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 4 01:10:23.768427 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 4 01:10:23.792696 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (813)
Mar 4 01:10:23.792726 kernel: BTRFS info (device vda6): first mount of filesystem 71a972ce-abd4-4705-b1cd-2b663b77d747
Mar 4 01:10:23.792742 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 4 01:10:23.792758 kernel: BTRFS info (device vda6): using free space tree
Mar 4 01:10:23.768548 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 4 01:10:23.768588 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 4 01:10:23.791171 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 4 01:10:23.823100 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 4 01:10:23.811362 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 4 01:10:23.825243 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 4 01:10:23.888792 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory
Mar 4 01:10:23.897736 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory
Mar 4 01:10:23.906148 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory
Mar 4 01:10:23.913634 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 4 01:10:24.087137 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 4 01:10:24.103621 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 4 01:10:24.106261 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 4 01:10:24.114719 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 4 01:10:24.125104 kernel: BTRFS info (device vda6): last unmount of filesystem 71a972ce-abd4-4705-b1cd-2b663b77d747
Mar 4 01:10:24.142517 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 4 01:10:24.173498 ignition[928]: INFO : Ignition 2.19.0 Mar 4 01:10:24.173498 ignition[928]: INFO : Stage: mount Mar 4 01:10:24.173498 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 4 01:10:24.173498 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 4 01:10:24.190717 ignition[928]: INFO : mount: mount passed Mar 4 01:10:24.190717 ignition[928]: INFO : Ignition finished successfully Mar 4 01:10:24.178421 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 4 01:10:24.196968 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 4 01:10:24.296057 systemd-networkd[783]: eth0: Gained IPv6LL Mar 4 01:10:24.733907 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 4 01:10:24.747631 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (941) Mar 4 01:10:24.759047 kernel: BTRFS info (device vda6): first mount of filesystem 71a972ce-abd4-4705-b1cd-2b663b77d747 Mar 4 01:10:24.759131 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 4 01:10:24.759147 kernel: BTRFS info (device vda6): using free space tree Mar 4 01:10:24.781680 kernel: BTRFS info (device vda6): auto enabling async discard Mar 4 01:10:24.784736 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 4 01:10:24.822121 ignition[958]: INFO : Ignition 2.19.0 Mar 4 01:10:24.822121 ignition[958]: INFO : Stage: files Mar 4 01:10:24.829858 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 4 01:10:24.829858 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 4 01:10:24.829858 ignition[958]: DEBUG : files: compiled without relabeling support, skipping Mar 4 01:10:24.829858 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 4 01:10:24.829858 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 4 01:10:24.829858 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 4 01:10:24.829858 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 4 01:10:24.829858 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 4 01:10:24.829688 unknown[958]: wrote ssh authorized keys file for user: core Mar 4 01:10:24.901559 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Mar 4 01:10:24.901559 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Mar 4 01:10:24.901559 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 4 01:10:24.901559 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Mar 4 01:10:24.963966 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Mar 4 01:10:25.030657 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 4 01:10:25.030657 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Mar 4 01:10:25.030657 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] 
writing file "/sysroot/home/core/install.sh" Mar 4 01:10:25.030657 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 4 01:10:25.055952 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 4 01:10:25.055952 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 4 01:10:25.055952 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 4 01:10:25.055952 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 4 01:10:25.055952 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 4 01:10:25.055952 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 4 01:10:25.055952 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 4 01:10:25.055952 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 4 01:10:25.055952 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 4 01:10:25.055952 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 4 01:10:25.055952 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1 Mar 4 01:10:25.330043 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Mar 4 01:10:26.105270 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 4 01:10:26.105270 ignition[958]: INFO : files: op(c): [started] processing unit "containerd.service" Mar 4 01:10:26.128541 ignition[958]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Mar 4 01:10:26.146256 ignition[958]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Mar 4 01:10:26.146256 ignition[958]: INFO : files: op(c): [finished] processing unit "containerd.service" Mar 4 01:10:26.146256 ignition[958]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Mar 4 01:10:26.146256 ignition[958]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 4 01:10:26.146256 ignition[958]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 4 01:10:26.146256 ignition[958]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Mar 4 01:10:26.146256 
Mar 4 01:10:26.146256 ignition[958]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Mar 4 01:10:26.146256 ignition[958]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 4 01:10:26.146256 ignition[958]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 4 01:10:26.146256 ignition[958]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Mar 4 01:10:26.146256 ignition[958]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service"
Mar 4 01:10:26.272634 ignition[958]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 4 01:10:26.282314 ignition[958]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 4 01:10:26.282314 ignition[958]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 4 01:10:26.302360 ignition[958]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service"
Mar 4 01:10:26.302360 ignition[958]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service"
Mar 4 01:10:26.302360 ignition[958]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 4 01:10:26.302360 ignition[958]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 4 01:10:26.302360 ignition[958]: INFO : files: files passed
Mar 4 01:10:26.302360 ignition[958]: INFO : Ignition finished successfully
Mar 4 01:10:26.291120 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 4 01:10:26.324059 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 4 01:10:26.342560 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 4 01:10:26.351542 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 4 01:10:26.391531 initrd-setup-root-after-ignition[985]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 4 01:10:26.351797 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 4 01:10:26.402971 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 4 01:10:26.402971 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 4 01:10:26.374422 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 4 01:10:26.423717 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 4 01:10:26.382571 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 4 01:10:26.423901 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 4 01:10:26.488336 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 4 01:10:26.488595 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 4 01:10:26.496118 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 4 01:10:26.502947 systemd[1]: Reached target initrd.target - Initrd Default Target.
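The files stage above writes two unit payloads whose contents the journal does not capture: the containerd drop-in 10-use-cgroupfs.conf (op(d)) and prepare-helm.service (op(f)), which is then preset to enabled (op(14)). A minimal sketch of what such files typically look like on Flatcar follows; the file names, paths, tarball location, and the "Unpack helm to /opt/bin" description are taken from the log, but the unit bodies are assumed, not recovered from this system:

    # /etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf (assumed body)
    # Points containerd at a cgroupfs-driver config, consistent with the
    # /etc/flatcar-cgroupv1 marker file written in op(3).
    [Service]
    Environment=CONTAINERD_CONFIG=/usr/share/containerd/config-cgroupfs.toml

    # /etc/systemd/system/prepare-helm.service (assumed body)
    # A oneshot unit that unpacks the helm tarball fetched in op(4) into /opt/bin.
    [Unit]
    Description=Unpack helm to /opt/bin
    ConditionPathExists=!/opt/bin/helm

    [Service]
    Type=oneshot
    RemainAfterExit=true
    ExecStartPre=/usr/bin/mkdir -p /opt/bin
    ExecStart=/usr/bin/tar -C /opt/bin --strip-components=1 -xf /opt/helm-v3.17.3-linux-amd64.tar.gz linux-amd64/helm

    [Install]
    WantedBy=multi-user.target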
Mar 4 01:10:26.510062 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 4 01:10:26.525830 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 4 01:10:26.545904 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 4 01:10:26.565711 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 4 01:10:26.581065 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 4 01:10:26.584660 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 4 01:10:26.590703 systemd[1]: Stopped target timers.target - Timer Units.
Mar 4 01:10:26.596017 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 4 01:10:26.596175 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 4 01:10:26.602532 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 4 01:10:26.607208 systemd[1]: Stopped target basic.target - Basic System.
Mar 4 01:10:26.613053 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 4 01:10:26.618914 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 4 01:10:26.624352 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 4 01:10:26.630134 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 4 01:10:26.635782 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 4 01:10:26.640329 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 4 01:10:26.641251 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 4 01:10:26.641514 systemd[1]: Stopped target swap.target - Swaps.
Mar 4 01:10:26.641878 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 4 01:10:26.642077 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 4 01:10:26.643202 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 4 01:10:26.644063 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 4 01:10:26.736392 ignition[1012]: INFO : Ignition 2.19.0
Mar 4 01:10:26.736392 ignition[1012]: INFO : Stage: umount
Mar 4 01:10:26.736392 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 4 01:10:26.736392 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 4 01:10:26.736392 ignition[1012]: INFO : umount: umount passed
Mar 4 01:10:26.736392 ignition[1012]: INFO : Ignition finished successfully
Mar 4 01:10:26.644514 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 4 01:10:26.644847 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 4 01:10:26.645375 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 4 01:10:26.645531 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 4 01:10:26.646206 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 4 01:10:26.646364 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 4 01:10:26.647260 systemd[1]: Stopped target paths.target - Path Units.
Mar 4 01:10:26.650365 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 4 01:10:26.650730 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 4 01:10:26.651129 systemd[1]: Stopped target slices.target - Slice Units.
Mar 4 01:10:26.657216 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 4 01:10:26.658349 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 4 01:10:26.658520 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 4 01:10:26.658831 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 4 01:10:26.658912 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 4 01:10:26.659595 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 4 01:10:26.659732 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 4 01:10:26.659950 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 4 01:10:26.660059 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 4 01:10:26.701065 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 4 01:10:26.705355 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 4 01:10:26.708814 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 4 01:10:26.709012 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 4 01:10:26.712746 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 4 01:10:26.712899 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 4 01:10:26.724895 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 4 01:10:26.725052 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 4 01:10:26.732827 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 4 01:10:26.733062 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 4 01:10:26.738581 systemd[1]: Stopped target network.target - Network.
Mar 4 01:10:26.743498 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 4 01:10:26.743806 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 4 01:10:26.758859 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 4 01:10:26.759049 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 4 01:10:26.767270 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 4 01:10:26.767397 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 4 01:10:26.772123 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 4 01:10:26.772260 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 4 01:10:26.777007 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 4 01:10:26.780533 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 4 01:10:26.784022 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 4 01:10:26.787562 systemd-networkd[783]: eth0: DHCPv6 lease lost
Mar 4 01:10:26.788195 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 4 01:10:26.788391 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 4 01:10:26.797114 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 4 01:10:26.797360 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 4 01:10:26.803653 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 4 01:10:26.803734 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 4 01:10:26.821821 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 4 01:10:26.827240 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 4 01:10:26.827413 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 4 01:10:26.835533 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 4 01:10:26.835729 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 4 01:10:26.843617 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 4 01:10:26.843720 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 4 01:10:26.852004 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 4 01:10:26.852235 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 4 01:10:26.863205 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 4 01:10:27.062554 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Mar 4 01:10:26.870060 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 4 01:10:26.870245 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 4 01:10:26.887575 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 4 01:10:26.887679 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 4 01:10:26.891740 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 4 01:10:26.891956 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 4 01:10:26.898056 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 4 01:10:26.898187 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 4 01:10:26.903382 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 4 01:10:26.903602 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 4 01:10:26.909662 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 4 01:10:26.909746 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 4 01:10:26.915776 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 4 01:10:26.915948 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 4 01:10:26.921404 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 4 01:10:26.921509 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 4 01:10:26.927919 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 4 01:10:26.927985 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 4 01:10:26.958521 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 4 01:10:26.962656 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 4 01:10:26.962767 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 4 01:10:26.968684 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 4 01:10:26.968747 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 4 01:10:26.975071 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 4 01:10:26.975164 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 4 01:10:26.980615 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 4 01:10:26.980799 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 4 01:10:26.982536 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 4 01:10:26.982705 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 4 01:10:26.984394 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 4 01:10:26.998485 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 4 01:10:27.007005 systemd[1]: Switching root.
Mar 4 01:10:27.169998 systemd-journald[193]: Journal stopped
Mar 4 01:10:28.419035 kernel: SELinux: policy capability network_peer_controls=1
Mar 4 01:10:28.419109 kernel: SELinux: policy capability open_perms=1
Mar 4 01:10:28.419130 kernel: SELinux: policy capability extended_socket_class=1
Mar 4 01:10:28.419141 kernel: SELinux: policy capability always_check_network=0
Mar 4 01:10:28.419151 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 4 01:10:28.419164 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 4 01:10:28.419174 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 4 01:10:28.419184 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 4 01:10:28.419194 kernel: audit: type=1403 audit(1772586627.363:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 4 01:10:28.419211 systemd[1]: Successfully loaded SELinux policy in 82.599ms.
Mar 4 01:10:28.419233 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.602ms.
Mar 4 01:10:28.419244 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 4 01:10:28.419255 systemd[1]: Detected virtualization kvm.
Mar 4 01:10:28.419266 systemd[1]: Detected architecture x86-64.
Mar 4 01:10:28.419279 systemd[1]: Detected first boot.
Mar 4 01:10:28.419290 systemd[1]: Initializing machine ID from VM UUID.
Mar 4 01:10:28.419335 zram_generator::config[1073]: No configuration found.
Mar 4 01:10:28.419356 systemd[1]: Populated /etc with preset unit settings.
Mar 4 01:10:28.419367 systemd[1]: Queued start job for default target multi-user.target.
Mar 4 01:10:28.419377 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 4 01:10:28.419388 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 4 01:10:28.419399 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 4 01:10:28.419414 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 4 01:10:28.419424 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 4 01:10:28.419435 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 4 01:10:28.419577 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 4 01:10:28.419594 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 4 01:10:28.419612 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 4 01:10:28.419623 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 4 01:10:28.419654 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 4 01:10:28.419665 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 4 01:10:28.419694 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 4 01:10:28.419720 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 4 01:10:28.419744 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 4 01:10:28.419755 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 4 01:10:28.419779 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 4 01:10:28.419804 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 4 01:10:28.419828 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 4 01:10:28.419852 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 4 01:10:28.419863 systemd[1]: Reached target slices.target - Slice Units.
Mar 4 01:10:28.419877 systemd[1]: Reached target swap.target - Swaps.
Mar 4 01:10:28.419903 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 4 01:10:28.419935 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 4 01:10:28.419947 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 4 01:10:28.419958 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 4 01:10:28.419969 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 4 01:10:28.423752 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 4 01:10:28.423773 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 4 01:10:28.423789 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 4 01:10:28.423800 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 4 01:10:28.423811 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 4 01:10:28.423822 systemd[1]: Mounting media.mount - External Media Directory...
Mar 4 01:10:28.423832 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 4 01:10:28.423843 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 4 01:10:28.423854 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 4 01:10:28.423864 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 4 01:10:28.423875 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 4 01:10:28.423888 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 4 01:10:28.423899 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 4 01:10:28.423910 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 4 01:10:28.423920 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 4 01:10:28.423931 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 4 01:10:28.423941 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 4 01:10:28.423952 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 4 01:10:28.423964 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 4 01:10:28.423977 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 4 01:10:28.423988 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Mar 4 01:10:28.424000 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Mar 4 01:10:28.424010 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 4 01:10:28.424021 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 4 01:10:28.424031 kernel: fuse: init (API version 7.39)
Mar 4 01:10:28.424042 kernel: loop: module loaded
Mar 4 01:10:28.424073 systemd-journald[1165]: Collecting audit messages is disabled.
Mar 4 01:10:28.424098 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 4 01:10:28.424110 systemd-journald[1165]: Journal started
Mar 4 01:10:28.424130 systemd-journald[1165]: Runtime Journal (/run/log/journal/35213bf394864ffb937f4a15af5454c3) is 6.0M, max 48.4M, 42.3M free.
Mar 4 01:10:28.433528 kernel: ACPI: bus type drm_connector registered
Mar 4 01:10:28.433568 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 4 01:10:28.448361 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 4 01:10:28.448428 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 4 01:10:28.459516 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 4 01:10:28.460859 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 4 01:10:28.463588 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 4 01:10:28.466436 systemd[1]: Mounted media.mount - External Media Directory.
Mar 4 01:10:28.468935 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 4 01:10:28.471676 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 4 01:10:28.474547 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 4 01:10:28.477286 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 4 01:10:28.480643 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 4 01:10:28.483967 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 4 01:10:28.484194 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 4 01:10:28.487508 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 4 01:10:28.487720 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 4 01:10:28.490828 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 4 01:10:28.491042 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 4 01:10:28.494005 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 4 01:10:28.494221 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 4 01:10:28.497595 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 4 01:10:28.497804 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 4 01:10:28.500813 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 4 01:10:28.501123 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 4 01:10:28.504375 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 4 01:10:28.507668 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 4 01:10:28.511066 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 4 01:10:28.526284 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 4 01:10:28.537653 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 4 01:10:28.541952 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 4 01:10:28.544947 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 4 01:10:28.547357 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 4 01:10:28.551389 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 4 01:10:28.554433 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 4 01:10:28.555807 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 4 01:10:28.558616 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 4 01:10:28.561111 systemd-journald[1165]: Time spent on flushing to /var/log/journal/35213bf394864ffb937f4a15af5454c3 is 11.322ms for 933 entries.
Mar 4 01:10:28.561111 systemd-journald[1165]: System Journal (/var/log/journal/35213bf394864ffb937f4a15af5454c3) is 8.0M, max 195.6M, 187.6M free.
Mar 4 01:10:28.587215 systemd-journald[1165]: Received client request to flush runtime journal.
Mar 4 01:10:28.562619 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 4 01:10:28.568631 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 4 01:10:28.574382 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 4 01:10:28.577726 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 4 01:10:28.587588 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 4 01:10:28.591397 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 4 01:10:28.595505 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 4 01:10:28.602720 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 4 01:10:28.615643 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 4 01:10:28.619376 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 4 01:10:28.622398 systemd-tmpfiles[1211]: ACLs are not supported, ignoring.
Mar 4 01:10:28.622412 systemd-tmpfiles[1211]: ACLs are not supported, ignoring.
Mar 4 01:10:28.626405 udevadm[1225]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Mar 4 01:10:28.629816 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 4 01:10:28.638593 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 4 01:10:28.668924 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 4 01:10:28.677583 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 4 01:10:28.695391 systemd-tmpfiles[1233]: ACLs are not supported, ignoring.
Mar 4 01:10:28.695426 systemd-tmpfiles[1233]: ACLs are not supported, ignoring.
Mar 4 01:10:28.701866 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 4 01:10:29.010238 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 4 01:10:29.027725 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 4 01:10:29.063266 systemd-udevd[1239]: Using default interface naming scheme 'v255'.
Mar 4 01:10:29.095095 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 4 01:10:29.114633 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 4 01:10:29.129676 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 4 01:10:29.138009 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Mar 4 01:10:29.172501 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1252)
Mar 4 01:10:29.214669 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 4 01:10:29.225572 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Mar 4 01:10:29.234500 kernel: ACPI: button: Power Button [PWRF]
Mar 4 01:10:29.265211 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 4 01:10:29.265587 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Mar 4 01:10:29.265789 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 4 01:10:29.276671 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 4 01:10:29.290502 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Mar 4 01:10:29.295732 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 4 01:10:29.313742 systemd-networkd[1249]: lo: Link UP
Mar 4 01:10:29.313751 systemd-networkd[1249]: lo: Gained carrier
Mar 4 01:10:29.315910 systemd-networkd[1249]: Enumeration completed
Mar 4 01:10:29.316071 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 4 01:10:29.319275 systemd-networkd[1249]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 4 01:10:29.319370 systemd-networkd[1249]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 4 01:10:29.320520 systemd-networkd[1249]: eth0: Link UP
Mar 4 01:10:29.320580 systemd-networkd[1249]: eth0: Gained carrier
Mar 4 01:10:29.320628 systemd-networkd[1249]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 4 01:10:29.394234 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
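eth0 above is matched by the stock catch-all /usr/lib/systemd/network/zz-default.network and then configured via DHCP (the v4 lease appears just below). The unit file itself is not reproduced in the journal; a minimal sketch of such a catch-all DHCP unit, with the path taken from the log and the body assumed, is:

    # /usr/lib/systemd/network/zz-default.network (assumed body)
    [Match]
    # The zz- prefix sorts this file last, so any more specific
    # .network unit earlier in the search path wins.
    Name=*

    [Network]
    DHCP=yes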
Mar 4 01:10:29.402588 kernel: mousedev: PS/2 mouse device common for all mice
Mar 4 01:10:29.404545 systemd-networkd[1249]: eth0: DHCPv4 address 10.0.0.98/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 4 01:10:29.426737 kernel: kvm_amd: TSC scaling supported
Mar 4 01:10:29.427063 kernel: kvm_amd: Nested Virtualization enabled
Mar 4 01:10:29.427236 kernel: kvm_amd: Nested Paging enabled
Mar 4 01:10:29.427523 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Mar 4 01:10:29.427590 kernel: kvm_amd: PMU virtualization is disabled
Mar 4 01:10:29.471519 kernel: EDAC MC: Ver: 3.0.0
Mar 4 01:10:29.535657 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 4 01:10:29.539391 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 4 01:10:29.555619 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 4 01:10:29.566763 lvm[1285]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 4 01:10:29.598563 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 4 01:10:29.602192 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 4 01:10:29.624631 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 4 01:10:29.631983 lvm[1288]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 4 01:10:29.666576 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 4 01:10:29.670129 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 4 01:10:29.673390 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 4 01:10:29.673416 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 4 01:10:29.676146 systemd[1]: Reached target machines.target - Containers.
Mar 4 01:10:29.679624 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 4 01:10:29.692736 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 4 01:10:29.697510 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 4 01:10:29.700228 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 4 01:10:29.701367 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 4 01:10:29.705786 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 4 01:10:29.712573 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 4 01:10:29.719145 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 4 01:10:29.725383 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 4 01:10:29.727508 kernel: loop0: detected capacity change from 0 to 228704
Mar 4 01:10:29.740584 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 4 01:10:29.741400 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 4 01:10:29.754515 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 4 01:10:29.785497 kernel: loop1: detected capacity change from 0 to 142488
Mar 4 01:10:29.834502 kernel: loop2: detected capacity change from 0 to 140768
Mar 4 01:10:29.884552 kernel: loop3: detected capacity change from 0 to 228704
Mar 4 01:10:29.902499 kernel: loop4: detected capacity change from 0 to 142488
Mar 4 01:10:29.924531 kernel: loop5: detected capacity change from 0 to 140768
Mar 4 01:10:29.939245 (sd-merge)[1308]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 4 01:10:29.940017 (sd-merge)[1308]: Merged extensions into '/usr'.
Mar 4 01:10:29.944906 systemd[1]: Reloading requested from client PID 1296 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 4 01:10:29.944953 systemd[1]: Reloading...
Mar 4 01:10:30.015547 zram_generator::config[1336]: No configuration found.
Mar 4 01:10:30.020972 ldconfig[1293]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 4 01:10:30.143613 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 4 01:10:30.209853 systemd[1]: Reloading finished in 264 ms.
Mar 4 01:10:30.227575 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 4 01:10:30.232174 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 4 01:10:30.255734 systemd[1]: Starting ensure-sysext.service...
Mar 4 01:10:30.259796 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 4 01:10:30.266818 systemd[1]: Reloading requested from client PID 1380 ('systemctl') (unit ensure-sysext.service)...
Mar 4 01:10:30.266860 systemd[1]: Reloading...
Mar 4 01:10:30.290746 systemd-tmpfiles[1381]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 4 01:10:30.291114 systemd-tmpfiles[1381]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 4 01:10:30.292164 systemd-tmpfiles[1381]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 4 01:10:30.292566 systemd-tmpfiles[1381]: ACLs are not supported, ignoring.
Mar 4 01:10:30.292669 systemd-tmpfiles[1381]: ACLs are not supported, ignoring.
Mar 4 01:10:30.297632 systemd-tmpfiles[1381]: Detected autofs mount point /boot during canonicalization of boot.
Mar 4 01:10:30.297662 systemd-tmpfiles[1381]: Skipping /boot
Mar 4 01:10:30.319292 systemd-tmpfiles[1381]: Detected autofs mount point /boot during canonicalization of boot.
Mar 4 01:10:30.319311 systemd-tmpfiles[1381]: Skipping /boot
Mar 4 01:10:30.331535 zram_generator::config[1410]: No configuration found.
Mar 4 01:10:30.486102 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 4 01:10:30.503731 systemd-networkd[1249]: eth0: Gained IPv6LL
Mar 4 01:10:30.550987 systemd[1]: Reloading finished in 283 ms.
Mar 4 01:10:30.576954 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 4 01:10:30.591732 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 4 01:10:30.610044 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 4 01:10:30.622857 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 4 01:10:30.628907 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 4 01:10:30.632164 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 4 01:10:30.633887 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 4 01:10:30.639669 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 4 01:10:30.645054 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 4 01:10:30.649984 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 4 01:10:30.653684 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 4 01:10:30.668826 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 4 01:10:30.674760 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 4 01:10:30.678908 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 4 01:10:30.681926 augenrules[1483]: No rules
Mar 4 01:10:30.682100 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 4 01:10:30.682359 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 4 01:10:30.699095 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 4 01:10:30.705632 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 4 01:10:30.705873 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 4 01:10:30.710148 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 4 01:10:30.710478 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 4 01:10:30.714518 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 4 01:10:30.718680 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 4 01:10:30.738959 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 4 01:10:30.739232 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 4 01:10:30.740990 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 4 01:10:30.746139 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 4 01:10:30.752806 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 4 01:10:30.755863 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 4 01:10:30.760515 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 4 01:10:30.762582 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 4 01:10:30.762780 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 4 01:10:30.767833 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 4 01:10:30.772007 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 4 01:10:30.772217 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 4 01:10:30.776085 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 4 01:10:30.776305 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 4 01:10:30.780093 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 4 01:10:30.780548 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 4 01:10:30.788686 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 4 01:10:30.795716 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 4 01:10:30.796060 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 4 01:10:30.804874 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 4 01:10:30.809407 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 4 01:10:30.813654 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 4 01:10:30.816198 systemd-resolved[1476]: Positive Trust Anchors:
Mar 4 01:10:30.816210 systemd-resolved[1476]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 4 01:10:30.816237 systemd-resolved[1476]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 4 01:10:30.819759 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 4 01:10:30.820219 systemd-resolved[1476]: Defaulting to hostname 'linux'.
Mar 4 01:10:30.823934 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 4 01:10:30.824119 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 4 01:10:30.824237 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 4 01:10:30.825362 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 4 01:10:30.828953 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 4 01:10:30.829204 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 4 01:10:30.832825 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 4 01:10:30.833077 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 4 01:10:30.836496 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 4 01:10:30.844923 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 4 01:10:30.852272 systemd[1]: Finished ensure-sysext.service.
Mar 4 01:10:30.855084 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 4 01:10:30.855374 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 4 01:10:30.862640 systemd[1]: Reached target network.target - Network.
Mar 4 01:10:30.864958 systemd[1]: Reached target network-online.target - Network is Online.
Mar 4 01:10:30.867707 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 4 01:10:30.870742 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 4 01:10:30.870822 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 4 01:10:30.881829 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 4 01:10:30.946059 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 4 01:10:32.089900 systemd-timesyncd[1531]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Mar 4 01:10:32.089903 systemd-resolved[1476]: Clock change detected. Flushing caches.
Mar 4 01:10:32.090172 systemd-timesyncd[1531]: Initial clock synchronization to Wed 2026-03-04 01:10:32.089749 UTC.
Mar 4 01:10:32.094494 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 4 01:10:32.097453 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 4 01:10:32.100655 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 4 01:10:32.103916 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 4 01:10:32.107258 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 4 01:10:32.107305 systemd[1]: Reached target paths.target - Path Units.
Mar 4 01:10:32.109665 systemd[1]: Reached target time-set.target - System Time Set.
Mar 4 01:10:32.112574 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 4 01:10:32.115601 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 4 01:10:32.118887 systemd[1]: Reached target timers.target - Timer Units.
Mar 4 01:10:32.121903 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 4 01:10:32.126746 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 4 01:10:32.130515 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 4 01:10:32.149162 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 4 01:10:32.152012 systemd[1]: Reached target sockets.target - Socket Units.
Mar 4 01:10:32.154550 systemd[1]: Reached target basic.target - Basic System.
Mar 4 01:10:32.157220 systemd[1]: System is tainted: cgroupsv1
Mar 4 01:10:32.157278 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 4 01:10:32.157317 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 4 01:10:32.158847 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 4 01:10:32.162865 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Mar 4 01:10:32.166832 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 4 01:10:32.173186 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 4 01:10:32.179317 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 4 01:10:32.182409 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 4 01:10:32.186320 jq[1539]: false
Mar 4 01:10:32.186664 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 4 01:10:32.193938 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 4 01:10:32.202553 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 4 01:10:32.208644 extend-filesystems[1541]: Found loop3
Mar 4 01:10:32.212360 extend-filesystems[1541]: Found loop4
Mar 4 01:10:32.212360 extend-filesystems[1541]: Found loop5
Mar 4 01:10:32.212360 extend-filesystems[1541]: Found sr0
Mar 4 01:10:32.212360 extend-filesystems[1541]: Found vda
Mar 4 01:10:32.212360 extend-filesystems[1541]: Found vda1
Mar 4 01:10:32.212360 extend-filesystems[1541]: Found vda2
Mar 4 01:10:32.212360 extend-filesystems[1541]: Found vda3
Mar 4 01:10:32.212360 extend-filesystems[1541]: Found usr
Mar 4 01:10:32.212360 extend-filesystems[1541]: Found vda4
Mar 4 01:10:32.212360 extend-filesystems[1541]: Found vda6
Mar 4 01:10:32.212360 extend-filesystems[1541]: Found vda7
Mar 4 01:10:32.212360 extend-filesystems[1541]: Found vda9
Mar 4 01:10:32.212360 extend-filesystems[1541]: Checking size of /dev/vda9
Mar 4 01:10:32.304563 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Mar 4 01:10:32.215164 dbus-daemon[1537]: [system] SELinux support is enabled
Mar 4 01:10:32.215554 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 4 01:10:32.307139 extend-filesystems[1541]: Resized partition /dev/vda9
Mar 4 01:10:32.332040 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Mar 4 01:10:32.332066 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1250)
Mar 4 01:10:32.226572 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 4 01:10:32.334497 extend-filesystems[1566]: resize2fs 1.47.1 (20-May-2024)
Mar 4 01:10:32.334497 extend-filesystems[1566]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 4 01:10:32.334497 extend-filesystems[1566]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 4 01:10:32.334497 extend-filesystems[1566]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Mar 4 01:10:32.231205 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 4 01:10:32.354414 extend-filesystems[1541]: Resized filesystem in /dev/vda9
Mar 4 01:10:32.243563 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 4 01:10:32.245447 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 4 01:10:32.251888 systemd[1]: Starting update-engine.service - Update Engine...
Mar 4 01:10:32.357706 jq[1570]: true
Mar 4 01:10:32.272441 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 4 01:10:32.357919 update_engine[1564]: I20260304 01:10:32.301378 1564 main.cc:92] Flatcar Update Engine starting
Mar 4 01:10:32.357919 update_engine[1564]: I20260304 01:10:32.303512 1564 update_check_scheduler.cc:74] Next update check in 6m7s
Mar 4 01:10:32.281273 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 4 01:10:32.288517 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 4 01:10:32.288817 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 4 01:10:32.298422 systemd[1]: motdgen.service: Deactivated successfully.
Mar 4 01:10:32.369707 tar[1581]: linux-amd64/LICENSE
Mar 4 01:10:32.369707 tar[1581]: linux-amd64/helm
Mar 4 01:10:32.298722 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 4 01:10:32.305589 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 4 01:10:32.315430 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 4 01:10:32.315713 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 4 01:10:32.341670 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 4 01:10:32.342032 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 4 01:10:32.360690 systemd-logind[1558]: Watching system buttons on /dev/input/event1 (Power Button)
Mar 4 01:10:32.360710 systemd-logind[1558]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 4 01:10:32.362216 systemd-logind[1558]: New seat seat0.
Mar 4 01:10:32.370027 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 4 01:10:32.370369 (ntainerd)[1585]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 4 01:10:32.378906 jq[1584]: true
Mar 4 01:10:32.373941 systemd[1]: coreos-metadata.service: Deactivated successfully.
Mar 4 01:10:32.374368 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Mar 4 01:10:32.408469 systemd[1]: Started update-engine.service - Update Engine.
Mar 4 01:10:32.415142 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 4 01:10:32.415316 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 4 01:10:32.415425 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 4 01:10:32.418949 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 4 01:10:32.419158 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 4 01:10:32.423064 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 4 01:10:32.427384 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 4 01:10:32.460674 bash[1621]: Updated "/home/core/.ssh/authorized_keys"
Mar 4 01:10:32.462153 locksmithd[1611]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 4 01:10:32.462634 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
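update_engine above schedules its first check against the update.conf written during the Ignition files stage (op(9)); the journal never shows that file's contents. A typical /etc/flatcar/update.conf is a small KEY=VALUE file along these lines (body assumed; the reboot strategy echoes the strategy="reboot" that locksmithd reports above):

    # /etc/flatcar/update.conf (assumed body)
    GROUP=stable
    REBOOT_STRATEGY=reboot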
Mar 4 01:10:32.466847 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 4 01:10:32.561688 sshd_keygen[1572]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 4 01:10:32.589872 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 4 01:10:32.605488 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 4 01:10:32.609391 containerd[1585]: time="2026-03-04T01:10:32.609324274Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 4 01:10:32.618415 systemd[1]: issuegen.service: Deactivated successfully. Mar 4 01:10:32.618746 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 4 01:10:32.633586 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 4 01:10:32.644858 containerd[1585]: time="2026-03-04T01:10:32.644570823Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 4 01:10:32.648686 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 4 01:10:32.650293 containerd[1585]: time="2026-03-04T01:10:32.648205517Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 4 01:10:32.650293 containerd[1585]: time="2026-03-04T01:10:32.648232347Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 4 01:10:32.650293 containerd[1585]: time="2026-03-04T01:10:32.648246834Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 4 01:10:32.650293 containerd[1585]: time="2026-03-04T01:10:32.648417904Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 4 01:10:32.650293 containerd[1585]: time="2026-03-04T01:10:32.648432491Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 4 01:10:32.650293 containerd[1585]: time="2026-03-04T01:10:32.648504094Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 4 01:10:32.650293 containerd[1585]: time="2026-03-04T01:10:32.648516648Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 4 01:10:32.650293 containerd[1585]: time="2026-03-04T01:10:32.648741117Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 4 01:10:32.650293 containerd[1585]: time="2026-03-04T01:10:32.648755674Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 4 01:10:32.650293 containerd[1585]: time="2026-03-04T01:10:32.648772375Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 4 01:10:32.650293 containerd[1585]: time="2026-03-04T01:10:32.648780530Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Mar 4 01:10:32.650592 containerd[1585]: time="2026-03-04T01:10:32.648868374Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 4 01:10:32.650592 containerd[1585]: time="2026-03-04T01:10:32.649258893Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 4 01:10:32.650592 containerd[1585]: time="2026-03-04T01:10:32.649490155Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 4 01:10:32.650592 containerd[1585]: time="2026-03-04T01:10:32.649514791Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 4 01:10:32.650592 containerd[1585]: time="2026-03-04T01:10:32.649668889Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 4 01:10:32.650592 containerd[1585]: time="2026-03-04T01:10:32.649746754Z" level=info msg="metadata content store policy set" policy=shared Mar 4 01:10:32.655735 containerd[1585]: time="2026-03-04T01:10:32.655703312Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 4 01:10:32.655869 containerd[1585]: time="2026-03-04T01:10:32.655854805Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 4 01:10:32.656040 containerd[1585]: time="2026-03-04T01:10:32.656025513Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 4 01:10:32.656138 containerd[1585]: time="2026-03-04T01:10:32.656123817Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 4 01:10:32.656207 containerd[1585]: time="2026-03-04T01:10:32.656193527Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 4 01:10:32.656390 containerd[1585]: time="2026-03-04T01:10:32.656374775Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 4 01:10:32.657332 containerd[1585]: time="2026-03-04T01:10:32.657178756Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 4 01:10:32.657728 containerd[1585]: time="2026-03-04T01:10:32.657711901Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 4 01:10:32.657836 containerd[1585]: time="2026-03-04T01:10:32.657823219Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 4 01:10:32.657935 containerd[1585]: time="2026-03-04T01:10:32.657922134Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 4 01:10:32.658049 containerd[1585]: time="2026-03-04T01:10:32.658036106Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 4 01:10:32.658184 containerd[1585]: time="2026-03-04T01:10:32.658168293Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Mar 4 01:10:32.658294 containerd[1585]: time="2026-03-04T01:10:32.658281685Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 4 01:10:32.658395 containerd[1585]: time="2026-03-04T01:10:32.658381070Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 4 01:10:32.658498 containerd[1585]: time="2026-03-04T01:10:32.658484623Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 4 01:10:32.658579 containerd[1585]: time="2026-03-04T01:10:32.658533775Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 4 01:10:32.658775 containerd[1585]: time="2026-03-04T01:10:32.658633712Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 4 01:10:32.658935 containerd[1585]: time="2026-03-04T01:10:32.658917411Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 4 01:10:32.659072 containerd[1585]: time="2026-03-04T01:10:32.659058294Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 4 01:10:32.659201 containerd[1585]: time="2026-03-04T01:10:32.659188677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 4 01:10:32.659301 containerd[1585]: time="2026-03-04T01:10:32.659287242Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 4 01:10:32.659374 containerd[1585]: time="2026-03-04T01:10:32.659362141Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 4 01:10:32.659469 containerd[1585]: time="2026-03-04T01:10:32.659455346Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 4 01:10:32.659541 containerd[1585]: time="2026-03-04T01:10:32.659529704Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 4 01:10:32.659621 containerd[1585]: time="2026-03-04T01:10:32.659608752Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 4 01:10:32.659785 containerd[1585]: time="2026-03-04T01:10:32.659676769Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 4 01:10:32.659921 containerd[1585]: time="2026-03-04T01:10:32.659905987Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 4 01:10:32.660699 containerd[1585]: time="2026-03-04T01:10:32.660024177Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 4 01:10:32.660699 containerd[1585]: time="2026-03-04T01:10:32.660040577Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 4 01:10:32.660699 containerd[1585]: time="2026-03-04T01:10:32.660051468Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 4 01:10:32.660699 containerd[1585]: time="2026-03-04T01:10:32.660064703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Mar 4 01:10:32.660699 containerd[1585]: time="2026-03-04T01:10:32.660112472Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 4 01:10:32.660699 containerd[1585]: time="2026-03-04T01:10:32.660133612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 4 01:10:32.660699 containerd[1585]: time="2026-03-04T01:10:32.660144922Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 4 01:10:32.660699 containerd[1585]: time="2026-03-04T01:10:32.660154159Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 4 01:10:32.660699 containerd[1585]: time="2026-03-04T01:10:32.660204012Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 4 01:10:32.660699 containerd[1585]: time="2026-03-04T01:10:32.660220764Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 4 01:10:32.660699 containerd[1585]: time="2026-03-04T01:10:32.660230582Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 4 01:10:32.660699 containerd[1585]: time="2026-03-04T01:10:32.660247283Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 4 01:10:32.660699 containerd[1585]: time="2026-03-04T01:10:32.660256029Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 4 01:10:32.661021 containerd[1585]: time="2026-03-04T01:10:32.660271028Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 4 01:10:32.661021 containerd[1585]: time="2026-03-04T01:10:32.660280496Z" level=info msg="NRI interface is disabled by configuration." Mar 4 01:10:32.661021 containerd[1585]: time="2026-03-04T01:10:32.660290614Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 4 01:10:32.661069 containerd[1585]: time="2026-03-04T01:10:32.660508672Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 4 01:10:32.661069 containerd[1585]: time="2026-03-04T01:10:32.660557252Z" level=info msg="Connect containerd service" Mar 4 01:10:32.661069 containerd[1585]: time="2026-03-04T01:10:32.660588370Z" level=info msg="using legacy CRI server" Mar 4 01:10:32.661069 containerd[1585]: time="2026-03-04T01:10:32.660594291Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 4 01:10:32.661489 containerd[1585]: time="2026-03-04T01:10:32.661470437Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 4 01:10:32.662360 containerd[1585]: time="2026-03-04T01:10:32.662331444Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 4 01:10:32.662600 
containerd[1585]: time="2026-03-04T01:10:32.662564899Z" level=info msg="Start subscribing containerd event" Mar 4 01:10:32.662663 containerd[1585]: time="2026-03-04T01:10:32.662651792Z" level=info msg="Start recovering state" Mar 4 01:10:32.663064 containerd[1585]: time="2026-03-04T01:10:32.662823552Z" level=info msg="Start event monitor" Mar 4 01:10:32.663064 containerd[1585]: time="2026-03-04T01:10:32.662860090Z" level=info msg="Start snapshots syncer" Mar 4 01:10:32.663064 containerd[1585]: time="2026-03-04T01:10:32.662920343Z" level=info msg="Start cni network conf syncer for default" Mar 4 01:10:32.663064 containerd[1585]: time="2026-03-04T01:10:32.662928658Z" level=info msg="Start streaming server" Mar 4 01:10:32.663468 containerd[1585]: time="2026-03-04T01:10:32.663452046Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 4 01:10:32.663574 containerd[1585]: time="2026-03-04T01:10:32.663560408Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 4 01:10:32.664459 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 4 01:10:32.666629 containerd[1585]: time="2026-03-04T01:10:32.665872408Z" level=info msg="containerd successfully booted in 0.061446s" Mar 4 01:10:32.675484 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 4 01:10:32.679516 systemd[1]: Reached target getty.target - Login Prompts. Mar 4 01:10:32.683527 systemd[1]: Started containerd.service - containerd container runtime. Mar 4 01:10:32.879824 tar[1581]: linux-amd64/README.md Mar 4 01:10:32.899433 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 4 01:10:33.204402 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 4 01:10:33.210035 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 4 01:10:33.214595 systemd[1]: Startup finished in 12.656s (kernel) + 4.786s (userspace) = 17.442s. Mar 4 01:10:33.281882 (kubelet)[1670]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 4 01:10:33.813385 kubelet[1670]: E0304 01:10:33.813316 1670 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 4 01:10:33.817156 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 4 01:10:33.817464 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 4 01:10:34.295907 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 4 01:10:34.305588 systemd[1]: Started sshd@0-10.0.0.98:22-10.0.0.1:56320.service - OpenSSH per-connection server daemon (10.0.0.1:56320). Mar 4 01:10:34.377837 sshd[1683]: Accepted publickey for core from 10.0.0.1 port 56320 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U Mar 4 01:10:34.380704 sshd[1683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:10:34.397259 systemd-logind[1558]: New session 1 of user core. Mar 4 01:10:34.398950 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 4 01:10:34.408228 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 4 01:10:34.431162 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
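The kubelet exit above is expected on first boot: /var/lib/kubelet/config.yaml does not exist yet, and kubeadm only writes it during init/join. For illustration, a minimal KubeletConfiguration of the shape kubeadm generates; the field values are placeholders, not what kubeadm would write for this node, though cgroupfs and the static pod path do match the settings dumped later in this log:

```python
from pathlib import Path

# Minimal KubeletConfiguration of the kind kubeadm writes to
# /var/lib/kubelet/config.yaml; values below are placeholders.
CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
staticPodPath: /etc/kubernetes/manifests
"""

# Written to a scratch path so the sketch is safe to run anywhere.
Path("/tmp/kubelet-config.yaml").write_text(CONFIG)
```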
Mar 4 01:10:34.446656 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 4 01:10:34.452275 (systemd)[1689]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 4 01:10:34.620897 systemd[1689]: Queued start job for default target default.target. Mar 4 01:10:34.621567 systemd[1689]: Created slice app.slice - User Application Slice. Mar 4 01:10:34.621593 systemd[1689]: Reached target paths.target - Paths. Mar 4 01:10:34.621610 systemd[1689]: Reached target timers.target - Timers. Mar 4 01:10:34.633303 systemd[1689]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 4 01:10:34.647212 systemd[1689]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 4 01:10:34.647346 systemd[1689]: Reached target sockets.target - Sockets. Mar 4 01:10:34.647373 systemd[1689]: Reached target basic.target - Basic System. Mar 4 01:10:34.647448 systemd[1689]: Reached target default.target - Main User Target. Mar 4 01:10:34.647506 systemd[1689]: Startup finished in 183ms. Mar 4 01:10:34.647807 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 4 01:10:34.657851 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 4 01:10:34.733889 systemd[1]: Started sshd@1-10.0.0.98:22-10.0.0.1:56332.service - OpenSSH per-connection server daemon (10.0.0.1:56332). Mar 4 01:10:34.794789 sshd[1701]: Accepted publickey for core from 10.0.0.1 port 56332 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U Mar 4 01:10:34.796449 sshd[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:10:34.808058 systemd-logind[1558]: New session 2 of user core. Mar 4 01:10:34.817730 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 4 01:10:34.889771 sshd[1701]: pam_unix(sshd:session): session closed for user core Mar 4 01:10:34.904257 systemd[1]: Started sshd@2-10.0.0.98:22-10.0.0.1:56344.service - OpenSSH per-connection server daemon (10.0.0.1:56344). Mar 4 01:10:34.905318 systemd[1]: sshd@1-10.0.0.98:22-10.0.0.1:56332.service: Deactivated successfully. Mar 4 01:10:34.911625 systemd[1]: session-2.scope: Deactivated successfully. Mar 4 01:10:34.917075 systemd-logind[1558]: Session 2 logged out. Waiting for processes to exit. Mar 4 01:10:34.919744 systemd-logind[1558]: Removed session 2. Mar 4 01:10:34.955326 sshd[1706]: Accepted publickey for core from 10.0.0.1 port 56344 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U Mar 4 01:10:34.956660 sshd[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:10:34.975345 systemd-logind[1558]: New session 3 of user core. Mar 4 01:10:34.989956 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 4 01:10:35.053857 sshd[1706]: pam_unix(sshd:session): session closed for user core Mar 4 01:10:35.070578 systemd[1]: Started sshd@3-10.0.0.98:22-10.0.0.1:56360.service - OpenSSH per-connection server daemon (10.0.0.1:56360). Mar 4 01:10:35.071482 systemd[1]: sshd@2-10.0.0.98:22-10.0.0.1:56344.service: Deactivated successfully. Mar 4 01:10:35.075869 systemd-logind[1558]: Session 3 logged out. Waiting for processes to exit. Mar 4 01:10:35.077184 systemd[1]: session-3.scope: Deactivated successfully. Mar 4 01:10:35.080563 systemd-logind[1558]: Removed session 3. 
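Each accepted connection above gets its own per-connection unit, named from an instance counter plus the listening and peer endpoints (sshd@0-10.0.0.98:22-10.0.0.1:56320.service and so on). A sketch of that naming scheme as observed in this log:

```python
# Reconstructing the per-connection sshd unit names seen above from the
# (instance counter, local addr:port, peer addr:port) triple.
def sshd_unit(n: int, local: str, peer: str) -> str:
    return f"sshd@{n}-{local}-{peer}.service"

print(sshd_unit(0, "10.0.0.98:22", "10.0.0.1:56320"))
# -> sshd@0-10.0.0.98:22-10.0.0.1:56320.service
```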
Mar 4 01:10:35.116166 sshd[1714]: Accepted publickey for core from 10.0.0.1 port 56360 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U Mar 4 01:10:35.117448 sshd[1714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:10:35.126578 systemd-logind[1558]: New session 4 of user core. Mar 4 01:10:35.135592 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 4 01:10:35.201136 sshd[1714]: pam_unix(sshd:session): session closed for user core Mar 4 01:10:35.219710 systemd[1]: Started sshd@4-10.0.0.98:22-10.0.0.1:56370.service - OpenSSH per-connection server daemon (10.0.0.1:56370). Mar 4 01:10:35.220924 systemd[1]: sshd@3-10.0.0.98:22-10.0.0.1:56360.service: Deactivated successfully. Mar 4 01:10:35.223743 systemd[1]: session-4.scope: Deactivated successfully. Mar 4 01:10:35.225960 systemd-logind[1558]: Session 4 logged out. Waiting for processes to exit. Mar 4 01:10:35.232526 systemd-logind[1558]: Removed session 4. Mar 4 01:10:35.261398 sshd[1722]: Accepted publickey for core from 10.0.0.1 port 56370 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U Mar 4 01:10:35.266333 sshd[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:10:35.274544 systemd-logind[1558]: New session 5 of user core. Mar 4 01:10:35.284738 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 4 01:10:35.360655 sudo[1729]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 4 01:10:35.361263 sudo[1729]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 4 01:10:35.378840 sudo[1729]: pam_unix(sudo:session): session closed for user root Mar 4 01:10:35.383305 sshd[1722]: pam_unix(sshd:session): session closed for user core Mar 4 01:10:35.390543 systemd[1]: Started sshd@5-10.0.0.98:22-10.0.0.1:56386.service - OpenSSH per-connection server daemon (10.0.0.1:56386). Mar 4 01:10:35.391552 systemd[1]: sshd@4-10.0.0.98:22-10.0.0.1:56370.service: Deactivated successfully. Mar 4 01:10:35.395883 systemd[1]: session-5.scope: Deactivated successfully. Mar 4 01:10:35.398285 systemd-logind[1558]: Session 5 logged out. Waiting for processes to exit. Mar 4 01:10:35.400679 systemd-logind[1558]: Removed session 5. Mar 4 01:10:35.448403 sshd[1731]: Accepted publickey for core from 10.0.0.1 port 56386 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U Mar 4 01:10:35.450736 sshd[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:10:35.460296 systemd-logind[1558]: New session 6 of user core. Mar 4 01:10:35.471643 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 4 01:10:35.543848 sudo[1739]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 4 01:10:35.544857 sudo[1739]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 4 01:10:35.555759 sudo[1739]: pam_unix(sudo:session): session closed for user root Mar 4 01:10:35.567465 sudo[1738]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 4 01:10:35.568325 sudo[1738]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 4 01:10:35.603609 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 4 01:10:35.607672 auditctl[1742]: No rules Mar 4 01:10:35.608529 systemd[1]: audit-rules.service: Deactivated successfully. 
Mar 4 01:10:35.609047 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 4 01:10:35.613425 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 4 01:10:35.683281 augenrules[1761]: No rules Mar 4 01:10:35.685247 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 4 01:10:35.690329 sudo[1738]: pam_unix(sudo:session): session closed for user root Mar 4 01:10:35.694868 sshd[1731]: pam_unix(sshd:session): session closed for user core Mar 4 01:10:35.720640 systemd[1]: Started sshd@6-10.0.0.98:22-10.0.0.1:56388.service - OpenSSH per-connection server daemon (10.0.0.1:56388). Mar 4 01:10:35.721841 systemd[1]: sshd@5-10.0.0.98:22-10.0.0.1:56386.service: Deactivated successfully. Mar 4 01:10:35.726706 systemd[1]: session-6.scope: Deactivated successfully. Mar 4 01:10:35.728759 systemd-logind[1558]: Session 6 logged out. Waiting for processes to exit. Mar 4 01:10:35.733548 systemd-logind[1558]: Removed session 6. Mar 4 01:10:35.770161 sshd[1768]: Accepted publickey for core from 10.0.0.1 port 56388 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U Mar 4 01:10:35.772233 sshd[1768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:10:35.782480 systemd-logind[1558]: New session 7 of user core. Mar 4 01:10:35.792783 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 4 01:10:35.861860 sudo[1774]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 4 01:10:35.862785 sudo[1774]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 4 01:10:36.361617 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 4 01:10:36.364610 (dockerd)[1793]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 4 01:10:36.863788 dockerd[1793]: time="2026-03-04T01:10:36.863662217Z" level=info msg="Starting up" Mar 4 01:10:37.226845 dockerd[1793]: time="2026-03-04T01:10:37.226605190Z" level=info msg="Loading containers: start." Mar 4 01:10:37.395150 kernel: Initializing XFRM netlink socket Mar 4 01:10:37.534704 systemd-networkd[1249]: docker0: Link UP Mar 4 01:10:37.573341 dockerd[1793]: time="2026-03-04T01:10:37.573257341Z" level=info msg="Loading containers: done." Mar 4 01:10:37.594543 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1798328608-merged.mount: Deactivated successfully. Mar 4 01:10:37.599226 dockerd[1793]: time="2026-03-04T01:10:37.598907418Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 4 01:10:37.599226 dockerd[1793]: time="2026-03-04T01:10:37.599178223Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 4 01:10:37.599392 dockerd[1793]: time="2026-03-04T01:10:37.599356546Z" level=info msg="Daemon has completed initialization" Mar 4 01:10:37.657705 dockerd[1793]: time="2026-03-04T01:10:37.657597211Z" level=info msg="API listen on /run/docker.sock" Mar 4 01:10:37.657750 systemd[1]: Started docker.service - Docker Application Container Engine. 
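dockerd finishes with "API listen on /run/docker.sock". The Python standard library has no unix-socket HTTP client, but the endpoint can be probed with a raw request; a sketch only, assuming a running daemon and read access to the socket (a real client would use docker-py):

```python
import socket

# Minimal HTTP/1.0 ping of the Docker API over its unix socket;
# demonstrates the transport, nothing more.
def docker_version(path="/run/docker.sock") -> str:
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(path)
        s.sendall(b"GET /version HTTP/1.0\r\nHost: docker\r\n\r\n")
        chunks = []
        while data := s.recv(4096):
            chunks.append(data)
    return b"".join(chunks).decode()

# print(docker_version())  # uncomment on a host with /run/docker.sock
```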
Mar 4 01:10:38.347434 containerd[1585]: time="2026-03-04T01:10:38.347283410Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\"" Mar 4 01:10:38.920223 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1789928750.mount: Deactivated successfully. Mar 4 01:10:40.742123 containerd[1585]: time="2026-03-04T01:10:40.741991575Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:10:40.743067 containerd[1585]: time="2026-03-04T01:10:40.742914292Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=30116186" Mar 4 01:10:40.744365 containerd[1585]: time="2026-03-04T01:10:40.744324920Z" level=info msg="ImageCreate event name:\"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:10:40.747779 containerd[1585]: time="2026-03-04T01:10:40.747721996Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:10:40.749059 containerd[1585]: time="2026-03-04T01:10:40.748971235Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"30112785\" in 2.401611924s" Mar 4 01:10:40.749136 containerd[1585]: time="2026-03-04T01:10:40.749061924Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\"" Mar 4 01:10:40.750414 containerd[1585]: time="2026-03-04T01:10:40.750370319Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\"" Mar 4 01:10:42.166608 containerd[1585]: time="2026-03-04T01:10:42.166504359Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:10:42.167592 containerd[1585]: time="2026-03-04T01:10:42.167559070Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=26021810" Mar 4 01:10:42.168828 containerd[1585]: time="2026-03-04T01:10:42.168755992Z" level=info msg="ImageCreate event name:\"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:10:42.171659 containerd[1585]: time="2026-03-04T01:10:42.171588111Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:10:42.172769 containerd[1585]: time="2026-03-04T01:10:42.172717736Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"27678758\" in 1.422291132s" Mar 4 01:10:42.172835 
containerd[1585]: time="2026-03-04T01:10:42.172776656Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\"" Mar 4 01:10:42.173599 containerd[1585]: time="2026-03-04T01:10:42.173398777Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\"" Mar 4 01:10:43.556977 containerd[1585]: time="2026-03-04T01:10:43.556899476Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:10:43.557864 containerd[1585]: time="2026-03-04T01:10:43.557817470Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=20162746" Mar 4 01:10:43.559209 containerd[1585]: time="2026-03-04T01:10:43.559145608Z" level=info msg="ImageCreate event name:\"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:10:43.562523 containerd[1585]: time="2026-03-04T01:10:43.562462590Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:10:43.563769 containerd[1585]: time="2026-03-04T01:10:43.563650888Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"21819712\" in 1.390215443s" Mar 4 01:10:43.563769 containerd[1585]: time="2026-03-04T01:10:43.563756595Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\"" Mar 4 01:10:43.564585 containerd[1585]: time="2026-03-04T01:10:43.564414924Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\"" Mar 4 01:10:44.068711 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 4 01:10:44.077287 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 4 01:10:44.233962 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 4 01:10:44.239871 (kubelet)[2023]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 4 01:10:44.311652 kubelet[2023]: E0304 01:10:44.311535 2023 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 4 01:10:44.317226 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 4 01:10:44.317578 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 4 01:10:44.551843 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount585971756.mount: Deactivated successfully. 
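Each containerd pull message pairs a "bytes read" counter with a wall-clock duration, which gives a rough per-image throughput. Using the figures from the control-plane pulls logged above:

```python
# (bytes read, seconds) pairs copied from the pull log lines in this boot.
pulls = {
    "kube-apiserver:v1.33.9": (30116186, 2.401611924),
    "kube-controller-manager:v1.33.9": (26021810, 1.422291132),
    "kube-scheduler:v1.33.9": (20162746, 1.390215443),
}
for image, (nbytes, secs) in pulls.items():
    print(f"{image}: {nbytes / secs / 1e6:.1f} MB/s")  # roughly 12-18 MB/s
```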
Mar 4 01:10:45.064498 containerd[1585]: time="2026-03-04T01:10:45.064362033Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:10:45.065665 containerd[1585]: time="2026-03-04T01:10:45.065623267Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=31828647" Mar 4 01:10:45.067202 containerd[1585]: time="2026-03-04T01:10:45.067145962Z" level=info msg="ImageCreate event name:\"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:10:45.070197 containerd[1585]: time="2026-03-04T01:10:45.070038824Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:10:45.071162 containerd[1585]: time="2026-03-04T01:10:45.071038492Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"31827666\" in 1.506591258s" Mar 4 01:10:45.071162 containerd[1585]: time="2026-03-04T01:10:45.071153297Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\"" Mar 4 01:10:45.071858 containerd[1585]: time="2026-03-04T01:10:45.071747907Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Mar 4 01:10:45.520579 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4114668069.mount: Deactivated successfully. 
Mar 4 01:10:46.521285 containerd[1585]: time="2026-03-04T01:10:46.521184234Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:10:46.522430 containerd[1585]: time="2026-03-04T01:10:46.522340492Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Mar 4 01:10:46.523972 containerd[1585]: time="2026-03-04T01:10:46.523902928Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:10:46.529660 containerd[1585]: time="2026-03-04T01:10:46.529514009Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:10:46.531168 containerd[1585]: time="2026-03-04T01:10:46.530310981Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.458493384s" Mar 4 01:10:46.531168 containerd[1585]: time="2026-03-04T01:10:46.530361425Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Mar 4 01:10:46.531168 containerd[1585]: time="2026-03-04T01:10:46.530808379Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 4 01:10:46.929902 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount891921415.mount: Deactivated successfully. 
Mar 4 01:10:46.937481 containerd[1585]: time="2026-03-04T01:10:46.937378237Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:10:46.938535 containerd[1585]: time="2026-03-04T01:10:46.938411458Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Mar 4 01:10:46.939885 containerd[1585]: time="2026-03-04T01:10:46.939774259Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:10:46.943148 containerd[1585]: time="2026-03-04T01:10:46.942963312Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:10:46.943677 containerd[1585]: time="2026-03-04T01:10:46.943562791Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 412.729024ms" Mar 4 01:10:46.943677 containerd[1585]: time="2026-03-04T01:10:46.943624667Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Mar 4 01:10:46.944518 containerd[1585]: time="2026-03-04T01:10:46.944302842Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Mar 4 01:10:47.408433 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2069747681.mount: Deactivated successfully. 
Mar 4 01:10:50.249673 kernel: hrtimer: interrupt took 7196774 ns Mar 4 01:10:50.463992 containerd[1585]: time="2026-03-04T01:10:50.463809814Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:10:50.465608 containerd[1585]: time="2026-03-04T01:10:50.465523332Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718840" Mar 4 01:10:50.467294 containerd[1585]: time="2026-03-04T01:10:50.467174615Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:10:50.472471 containerd[1585]: time="2026-03-04T01:10:50.472396622Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:10:50.473734 containerd[1585]: time="2026-03-04T01:10:50.473683063Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 3.529348581s" Mar 4 01:10:50.473734 containerd[1585]: time="2026-03-04T01:10:50.473727365Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Mar 4 01:10:54.568233 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 4 01:10:54.577584 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 4 01:10:54.627777 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 4 01:10:54.628066 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 4 01:10:54.628835 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 4 01:10:54.643539 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 4 01:10:54.673723 systemd[1]: Reloading requested from client PID 2196 ('systemctl') (unit session-7.scope)... Mar 4 01:10:54.673768 systemd[1]: Reloading... Mar 4 01:10:54.771253 zram_generator::config[2235]: No configuration found. Mar 4 01:10:54.903823 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 4 01:10:55.037794 systemd[1]: Reloading finished in 363 ms. Mar 4 01:10:55.104922 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 4 01:10:55.105179 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 4 01:10:55.105650 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 4 01:10:55.117645 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 4 01:10:55.321324 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
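kubelet.service is restarted on a counter ("restart counter is at 1", then 2) roughly ten seconds apart, consistent with a Restart=always unit and a RestartSec on the order of 10s (kubeadm's drop-in uses RestartSec=10, though that file is not shown in this log). The spacing can be read straight off the journal timestamps:

```python
from datetime import datetime

# Timestamps of the two "Scheduled restart job" messages above.
fmt = "%H:%M:%S.%f"
t1 = datetime.strptime("01:10:44.068711", fmt)
t2 = datetime.strptime("01:10:54.568233", fmt)
print((t2 - t1).total_seconds())  # ~10.5 s between restart attempts
```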
Mar 4 01:10:55.353020 (kubelet)[2295]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 4 01:10:55.425803 kubelet[2295]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 4 01:10:55.425803 kubelet[2295]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 4 01:10:55.426413 kubelet[2295]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 4 01:10:55.426413 kubelet[2295]: I0304 01:10:55.426332 2295 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 4 01:10:55.735706 kubelet[2295]: I0304 01:10:55.734516 2295 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 4 01:10:55.735706 kubelet[2295]: I0304 01:10:55.734597 2295 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 4 01:10:55.735706 kubelet[2295]: I0304 01:10:55.734904 2295 server.go:956] "Client rotation is on, will bootstrap in background" Mar 4 01:10:55.867898 kubelet[2295]: E0304 01:10:55.867791 2295 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.98:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 4 01:10:55.882410 kubelet[2295]: I0304 01:10:55.882279 2295 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 4 01:10:55.900335 kubelet[2295]: E0304 01:10:55.899585 2295 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 4 01:10:55.900335 kubelet[2295]: I0304 01:10:55.900312 2295 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 4 01:10:55.914517 kubelet[2295]: I0304 01:10:55.914450 2295 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 4 01:10:55.918889 kubelet[2295]: I0304 01:10:55.917061 2295 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 4 01:10:55.919664 kubelet[2295]: I0304 01:10:55.918976 2295 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Mar 4 01:10:55.919664 kubelet[2295]: I0304 01:10:55.919634 2295 topology_manager.go:138] "Creating topology manager with none policy" Mar 4 01:10:55.919664 kubelet[2295]: I0304 01:10:55.919649 2295 container_manager_linux.go:303] "Creating device plugin manager" Mar 4 01:10:55.919973 kubelet[2295]: I0304 01:10:55.919931 2295 state_mem.go:36] "Initialized new in-memory state store" Mar 4 01:10:55.935790 kubelet[2295]: I0304 01:10:55.935438 2295 kubelet.go:480] "Attempting to sync node with API server" Mar 4 01:10:55.935790 kubelet[2295]: I0304 01:10:55.935765 2295 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 4 01:10:55.937671 kubelet[2295]: I0304 01:10:55.935855 2295 kubelet.go:386] "Adding apiserver pod source" Mar 4 01:10:55.937671 kubelet[2295]: I0304 01:10:55.935902 2295 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 4 01:10:55.950198 kubelet[2295]: I0304 01:10:55.950002 2295 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 4 01:10:55.950998 kubelet[2295]: I0304 01:10:55.950853 2295 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 4 01:10:55.952446 kubelet[2295]: W0304 01:10:55.952363 2295 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
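The NodeConfig dump above includes the default hard eviction thresholds (memory.available < 100Mi, nodefs.available < 10%, imagefs.available < 15%, inodesFree < 5%). A small evaluator over those signals, with invented sample stats:

```python
# Hard eviction thresholds taken from the HardEvictionThresholds dump above.
THRESHOLDS = {
    "memory.available": ("quantity", 100 * 1024**2),  # 100Mi
    "nodefs.available": ("percentage", 0.10),
    "imagefs.available": ("percentage", 0.15),
    "nodefs.inodesFree": ("percentage", 0.05),
    "imagefs.inodesFree": ("percentage", 0.05),
}

def breached(signal, available, capacity=None):
    kind, limit = THRESHOLDS[signal]
    if kind == "quantity":
        return available < limit
    return available / capacity < limit

# Sample stats, made up for illustration.
print(breached("memory.available", 80 * 1024**2))           # True: below 100Mi
print(breached("nodefs.available", 3 * 10**9, 20 * 10**9))  # False: 15% free
```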
Mar 4 01:10:55.956126 kubelet[2295]: E0304 01:10:55.953766 2295 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.98:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 4 01:10:55.956126 kubelet[2295]: E0304 01:10:55.953795 2295 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 4 01:10:55.961610 kubelet[2295]: I0304 01:10:55.961544 2295 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 4 01:10:55.961697 kubelet[2295]: I0304 01:10:55.961666 2295 server.go:1289] "Started kubelet" Mar 4 01:10:55.962322 kubelet[2295]: I0304 01:10:55.961871 2295 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 4 01:10:55.962370 kubelet[2295]: I0304 01:10:55.962344 2295 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 4 01:10:55.963843 kubelet[2295]: I0304 01:10:55.962847 2295 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 4 01:10:55.963843 kubelet[2295]: I0304 01:10:55.963766 2295 server.go:317] "Adding debug handlers to kubelet server" Mar 4 01:10:55.964326 kubelet[2295]: I0304 01:10:55.964256 2295 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 4 01:10:55.965903 kubelet[2295]: I0304 01:10:55.965268 2295 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 4 01:10:55.971473 kubelet[2295]: E0304 01:10:55.971203 2295 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 4 01:10:55.971473 kubelet[2295]: E0304 01:10:55.969256 2295 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.98:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.98:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18997e2c0a30576e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-04 01:10:55.961601902 +0000 UTC m=+0.601324421,LastTimestamp:2026-03-04 01:10:55.961601902 +0000 UTC m=+0.601324421,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 4 01:10:55.971683 kubelet[2295]: I0304 01:10:55.971557 2295 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 4 01:10:55.971992 kubelet[2295]: I0304 01:10:55.971940 2295 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 4 01:10:55.972187 kubelet[2295]: I0304 01:10:55.972045 2295 reconciler.go:26] "Reconciler: start to sync state" Mar 4 01:10:55.973126 kubelet[2295]: E0304 01:10:55.972947 2295 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get 
\"https://10.0.0.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 4 01:10:55.973843 kubelet[2295]: I0304 01:10:55.973792 2295 factory.go:223] Registration of the systemd container factory successfully Mar 4 01:10:55.976406 kubelet[2295]: I0304 01:10:55.973922 2295 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 4 01:10:55.976406 kubelet[2295]: E0304 01:10:55.974499 2295 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.98:6443: connect: connection refused" interval="200ms" Mar 4 01:10:55.977012 kubelet[2295]: I0304 01:10:55.976976 2295 factory.go:223] Registration of the containerd container factory successfully Mar 4 01:10:55.980177 kubelet[2295]: E0304 01:10:55.979377 2295 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 4 01:10:56.031276 kubelet[2295]: I0304 01:10:56.030306 2295 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 4 01:10:56.033516 kubelet[2295]: I0304 01:10:56.033007 2295 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 4 01:10:56.033640 kubelet[2295]: I0304 01:10:56.033618 2295 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 4 01:10:56.033687 kubelet[2295]: I0304 01:10:56.033655 2295 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 4 01:10:56.033687 kubelet[2295]: I0304 01:10:56.033670 2295 kubelet.go:2436] "Starting kubelet main sync loop" Mar 4 01:10:56.034708 kubelet[2295]: E0304 01:10:56.033746 2295 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 4 01:10:56.034708 kubelet[2295]: E0304 01:10:56.034566 2295 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 4 01:10:56.038797 kubelet[2295]: I0304 01:10:56.038777 2295 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 4 01:10:56.038978 kubelet[2295]: I0304 01:10:56.038878 2295 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 4 01:10:56.038978 kubelet[2295]: I0304 01:10:56.038898 2295 state_mem.go:36] "Initialized new in-memory state store" Mar 4 01:10:56.044036 kubelet[2295]: I0304 01:10:56.043973 2295 policy_none.go:49] "None policy: Start" Mar 4 01:10:56.044036 kubelet[2295]: I0304 01:10:56.044032 2295 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 4 01:10:56.044194 kubelet[2295]: I0304 01:10:56.044059 2295 state_mem.go:35] "Initializing new in-memory state store" Mar 4 01:10:56.054041 kubelet[2295]: E0304 01:10:56.053964 2295 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 4 01:10:56.054414 kubelet[2295]: I0304 01:10:56.054376 2295 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 4 01:10:56.054574 kubelet[2295]: I0304 01:10:56.054404 2295 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 4 01:10:56.055680 kubelet[2295]: I0304 01:10:56.055461 2295 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 4 01:10:56.056673 kubelet[2295]: E0304 01:10:56.056630 2295 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 4 01:10:56.056744 kubelet[2295]: E0304 01:10:56.056709 2295 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 4 01:10:56.172601 kubelet[2295]: I0304 01:10:56.169797 2295 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 4 01:10:56.178248 kubelet[2295]: E0304 01:10:56.178063 2295 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.98:6443: connect: connection refused" interval="400ms" Mar 4 01:10:56.178248 kubelet[2295]: E0304 01:10:56.178236 2295 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.98:6443/api/v1/nodes\": dial tcp 10.0.0.98:6443: connect: connection refused" node="localhost" Mar 4 01:10:56.178525 kubelet[2295]: I0304 01:10:56.178300 2295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 4 01:10:56.178525 kubelet[2295]: I0304 01:10:56.178340 2295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 4 01:10:56.178525 kubelet[2295]: I0304 01:10:56.178397 2295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c8b7756271db90386ca23f037904356c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c8b7756271db90386ca23f037904356c\") " pod="kube-system/kube-apiserver-localhost" Mar 4 01:10:56.178525 kubelet[2295]: I0304 01:10:56.178445 2295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c8b7756271db90386ca23f037904356c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c8b7756271db90386ca23f037904356c\") " pod="kube-system/kube-apiserver-localhost" Mar 4 01:10:56.178525 kubelet[2295]: I0304 01:10:56.178478 2295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c8b7756271db90386ca23f037904356c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c8b7756271db90386ca23f037904356c\") " pod="kube-system/kube-apiserver-localhost" Mar 4 01:10:56.178690 kubelet[2295]: I0304 01:10:56.178502 2295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 4 01:10:56.178690 kubelet[2295]: I0304 01:10:56.178520 2295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 4 01:10:56.178690 kubelet[2295]: I0304 01:10:56.178625 2295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 4 01:10:56.202052 kubelet[2295]: E0304 01:10:56.201944 2295 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 4 01:10:56.205353 kubelet[2295]: E0304 01:10:56.205284 2295 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 4 01:10:56.210929 kubelet[2295]: E0304 01:10:56.210882 2295 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 4 01:10:56.279569 kubelet[2295]: I0304 01:10:56.279505 2295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e944e4cb17af904786c3a2e01e298498-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e944e4cb17af904786c3a2e01e298498\") " pod="kube-system/kube-scheduler-localhost" Mar 4 01:10:56.384276 kubelet[2295]: I0304 01:10:56.383580 2295 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 4 01:10:56.384276 kubelet[2295]: E0304 01:10:56.383992 2295 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.98:6443/api/v1/nodes\": dial tcp 10.0.0.98:6443: connect: connection refused" node="localhost" Mar 4 01:10:56.506059 kubelet[2295]: E0304 01:10:56.505927 2295 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:10:56.506059 kubelet[2295]: E0304 01:10:56.505983 2295 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:10:56.507989 containerd[1585]: time="2026-03-04T01:10:56.507901315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8747e1f8a49a618fbc1324a8fe2d3754,Namespace:kube-system,Attempt:0,}" Mar 4 01:10:56.508372 containerd[1585]: time="2026-03-04T01:10:56.507996524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c8b7756271db90386ca23f037904356c,Namespace:kube-system,Attempt:0,}" Mar 4 01:10:56.512066 kubelet[2295]: E0304 01:10:56.512045 2295 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:10:56.512518 containerd[1585]: time="2026-03-04T01:10:56.512472434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e944e4cb17af904786c3a2e01e298498,Namespace:kube-system,Attempt:0,}" Mar 4 01:10:56.583593 kubelet[2295]: E0304 01:10:56.583428 2295 controller.go:145] "Failed to ensure 
lease exists, will retry" err="Get \"https://10.0.0.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.98:6443: connect: connection refused" interval="800ms" Mar 4 01:10:56.766051 kubelet[2295]: E0304 01:10:56.765874 2295 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.98:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 4 01:10:56.789453 kubelet[2295]: I0304 01:10:56.789220 2295 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 4 01:10:56.789697 kubelet[2295]: E0304 01:10:56.789660 2295 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.98:6443/api/v1/nodes\": dial tcp 10.0.0.98:6443: connect: connection refused" node="localhost" Mar 4 01:10:56.917744 kubelet[2295]: E0304 01:10:56.917627 2295 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 4 01:10:56.982937 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount371068072.mount: Deactivated successfully. Mar 4 01:10:56.998029 containerd[1585]: time="2026-03-04T01:10:56.997972480Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 4 01:10:57.003188 containerd[1585]: time="2026-03-04T01:10:57.003057900Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Mar 4 01:10:57.004613 containerd[1585]: time="2026-03-04T01:10:57.004432696Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 4 01:10:57.005834 containerd[1585]: time="2026-03-04T01:10:57.005795900Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 4 01:10:57.007583 containerd[1585]: time="2026-03-04T01:10:57.007539630Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 4 01:10:57.009130 containerd[1585]: time="2026-03-04T01:10:57.009023722Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 4 01:10:57.010069 containerd[1585]: time="2026-03-04T01:10:57.010027220Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 4 01:10:57.012954 containerd[1585]: time="2026-03-04T01:10:57.012864621Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 4 01:10:57.016729 containerd[1585]: 
time="2026-03-04T01:10:57.016498228Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 503.964821ms" Mar 4 01:10:57.017973 containerd[1585]: time="2026-03-04T01:10:57.017912040Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 509.903174ms" Mar 4 01:10:57.022784 containerd[1585]: time="2026-03-04T01:10:57.022697162Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 514.630597ms" Mar 4 01:10:57.146607 kubelet[2295]: E0304 01:10:57.146338 2295 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 4 01:10:57.235344 kubelet[2295]: E0304 01:10:57.234785 2295 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 4 01:10:57.386394 kubelet[2295]: E0304 01:10:57.384759 2295 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.98:6443: connect: connection refused" interval="1.6s" Mar 4 01:10:57.571206 containerd[1585]: time="2026-03-04T01:10:57.568659860Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:10:57.571206 containerd[1585]: time="2026-03-04T01:10:57.568898696Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:10:57.571206 containerd[1585]: time="2026-03-04T01:10:57.568922951Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:10:57.571206 containerd[1585]: time="2026-03-04T01:10:57.569386236Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:10:57.577624 containerd[1585]: time="2026-03-04T01:10:57.577536266Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:10:57.578065 containerd[1585]: time="2026-03-04T01:10:57.577789589Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:10:57.578065 containerd[1585]: time="2026-03-04T01:10:57.577844511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:10:57.578870 containerd[1585]: time="2026-03-04T01:10:57.578669832Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:10:57.598732 kubelet[2295]: I0304 01:10:57.598704 2295 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 4 01:10:57.599917 kubelet[2295]: E0304 01:10:57.599889 2295 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.98:6443/api/v1/nodes\": dial tcp 10.0.0.98:6443: connect: connection refused" node="localhost" Mar 4 01:10:57.606432 containerd[1585]: time="2026-03-04T01:10:57.606228387Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:10:57.606583 containerd[1585]: time="2026-03-04T01:10:57.606482031Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:10:57.606676 containerd[1585]: time="2026-03-04T01:10:57.606592577Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:10:57.607036 containerd[1585]: time="2026-03-04T01:10:57.606981192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:10:58.210358 kubelet[2295]: E0304 01:10:58.210065 2295 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.98:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.98:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18997e2c0a30576e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-04 01:10:55.961601902 +0000 UTC m=+0.601324421,LastTimestamp:2026-03-04 01:10:55.961601902 +0000 UTC m=+0.601324421,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 4 01:10:58.210903 kubelet[2295]: E0304 01:10:58.210847 2295 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.98:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 4 01:10:58.319517 containerd[1585]: time="2026-03-04T01:10:58.319417224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e944e4cb17af904786c3a2e01e298498,Namespace:kube-system,Attempt:0,} returns sandbox id \"49d2fbb733ad079ae8c121849c86e934e2574e2070263ba2952498216234ed6f\"" Mar 4 01:10:58.321961 kubelet[2295]: E0304 01:10:58.321929 2295 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 
01:10:58.327208 containerd[1585]: time="2026-03-04T01:10:58.326739261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8747e1f8a49a618fbc1324a8fe2d3754,Namespace:kube-system,Attempt:0,} returns sandbox id \"c64ea5bc4197010e175b9ee348ad740f3a108c979a639324ed04e7e931037f54\"" Mar 4 01:10:58.328055 kubelet[2295]: E0304 01:10:58.327892 2295 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:10:58.417287 containerd[1585]: time="2026-03-04T01:10:58.417015644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c8b7756271db90386ca23f037904356c,Namespace:kube-system,Attempt:0,} returns sandbox id \"5fb30ce24bfa4216a0aab70c1e9f70b391e9742a744cc4beacb5d2e1554c4463\"" Mar 4 01:10:58.417928 containerd[1585]: time="2026-03-04T01:10:58.417844081Z" level=info msg="CreateContainer within sandbox \"49d2fbb733ad079ae8c121849c86e934e2574e2070263ba2952498216234ed6f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 4 01:10:58.429570 kubelet[2295]: E0304 01:10:58.429403 2295 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:10:58.453500 containerd[1585]: time="2026-03-04T01:10:58.452718202Z" level=info msg="CreateContainer within sandbox \"c64ea5bc4197010e175b9ee348ad740f3a108c979a639324ed04e7e931037f54\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 4 01:10:58.469509 containerd[1585]: time="2026-03-04T01:10:58.468530699Z" level=info msg="CreateContainer within sandbox \"5fb30ce24bfa4216a0aab70c1e9f70b391e9742a744cc4beacb5d2e1554c4463\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 4 01:10:58.507282 containerd[1585]: time="2026-03-04T01:10:58.506837119Z" level=info msg="CreateContainer within sandbox \"49d2fbb733ad079ae8c121849c86e934e2574e2070263ba2952498216234ed6f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"037497c3cdc3cf77de02cd2d7eaaed3c7abcf25884626e5cd811513c0b5b8dd2\"" Mar 4 01:10:58.508424 containerd[1585]: time="2026-03-04T01:10:58.508317123Z" level=info msg="StartContainer for \"037497c3cdc3cf77de02cd2d7eaaed3c7abcf25884626e5cd811513c0b5b8dd2\"" Mar 4 01:10:58.532545 containerd[1585]: time="2026-03-04T01:10:58.532382141Z" level=info msg="CreateContainer within sandbox \"c64ea5bc4197010e175b9ee348ad740f3a108c979a639324ed04e7e931037f54\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4943438bef74568cd7b6eb9f3e3332f9caa682bcdd11b9666ce9ffb9701ce625\"" Mar 4 01:10:58.534311 containerd[1585]: time="2026-03-04T01:10:58.534134692Z" level=info msg="StartContainer for \"4943438bef74568cd7b6eb9f3e3332f9caa682bcdd11b9666ce9ffb9701ce625\"" Mar 4 01:10:58.548379 containerd[1585]: time="2026-03-04T01:10:58.546370591Z" level=info msg="CreateContainer within sandbox \"5fb30ce24bfa4216a0aab70c1e9f70b391e9742a744cc4beacb5d2e1554c4463\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a9af4d36ca7e646d60510e83a194037aa900384fbef3fe7cb0b8d86f3c0268b5\"" Mar 4 01:10:58.548379 containerd[1585]: time="2026-03-04T01:10:58.547903632Z" level=info msg="StartContainer for \"a9af4d36ca7e646d60510e83a194037aa900384fbef3fe7cb0b8d86f3c0268b5\"" Mar 4 01:10:58.730577 kubelet[2295]: E0304 01:10:58.677382 2295 
reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 4 01:10:58.816067 containerd[1585]: time="2026-03-04T01:10:58.815737180Z" level=info msg="StartContainer for \"a9af4d36ca7e646d60510e83a194037aa900384fbef3fe7cb0b8d86f3c0268b5\" returns successfully" Mar 4 01:10:58.831563 containerd[1585]: time="2026-03-04T01:10:58.830994319Z" level=info msg="StartContainer for \"4943438bef74568cd7b6eb9f3e3332f9caa682bcdd11b9666ce9ffb9701ce625\" returns successfully" Mar 4 01:10:58.855791 containerd[1585]: time="2026-03-04T01:10:58.855704412Z" level=info msg="StartContainer for \"037497c3cdc3cf77de02cd2d7eaaed3c7abcf25884626e5cd811513c0b5b8dd2\" returns successfully" Mar 4 01:10:58.987192 kubelet[2295]: E0304 01:10:58.986793 2295 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.98:6443: connect: connection refused" interval="3.2s" Mar 4 01:10:59.205632 kubelet[2295]: I0304 01:10:59.205521 2295 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 4 01:10:59.219474 kubelet[2295]: E0304 01:10:59.219139 2295 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 4 01:10:59.219474 kubelet[2295]: E0304 01:10:59.219381 2295 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:10:59.222859 kubelet[2295]: E0304 01:10:59.222386 2295 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 4 01:10:59.222859 kubelet[2295]: E0304 01:10:59.222484 2295 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:10:59.228765 kubelet[2295]: E0304 01:10:59.228493 2295 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 4 01:10:59.228765 kubelet[2295]: E0304 01:10:59.228680 2295 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:11:00.232970 kubelet[2295]: E0304 01:11:00.232923 2295 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 4 01:11:00.234211 kubelet[2295]: E0304 01:11:00.233153 2295 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 4 01:11:00.234211 kubelet[2295]: E0304 01:11:00.233656 2295 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:11:00.234536 kubelet[2295]: E0304 01:11:00.234480 2295 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:11:01.239566 kubelet[2295]: E0304 01:11:01.239463 2295 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 4 01:11:01.240017 kubelet[2295]: E0304 01:11:01.239640 2295 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:11:04.473062 kubelet[2295]: E0304 01:11:04.472994 2295 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 4 01:11:04.473645 kubelet[2295]: E0304 01:11:04.473509 2295 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:11:04.668971 kubelet[2295]: E0304 01:11:04.668880 2295 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 4 01:11:05.125064 kubelet[2295]: I0304 01:11:05.124990 2295 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 4 01:11:05.125064 kubelet[2295]: E0304 01:11:05.125031 2295 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Mar 4 01:11:05.174561 kubelet[2295]: I0304 01:11:05.174382 2295 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 4 01:11:05.241256 kubelet[2295]: I0304 01:11:05.236620 2295 apiserver.go:52] "Watching apiserver" Mar 4 01:11:05.378285 kubelet[2295]: I0304 01:11:05.376299 2295 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 4 01:11:05.415028 kubelet[2295]: E0304 01:11:05.414960 2295 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Mar 4 01:11:05.415858 kubelet[2295]: I0304 01:11:05.415772 2295 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 4 01:11:05.418326 kubelet[2295]: E0304 01:11:05.418233 2295 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Mar 4 01:11:05.418326 kubelet[2295]: I0304 01:11:05.418277 2295 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 4 01:11:05.420364 kubelet[2295]: E0304 01:11:05.420285 2295 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Mar 4 01:11:07.317743 systemd[1]: Reloading requested from client PID 2586 ('systemctl') (unit session-7.scope)... Mar 4 01:11:07.317794 systemd[1]: Reloading... Mar 4 01:11:07.550195 zram_generator::config[2628]: No configuration found. 
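The "no PriorityClass with name system-node-critical was found" failures above are transient: the API server recreates the built-in system priority classes shortly after it comes up, the mirror-pod creation is retried at 01:11:07 below, and by 01:11:08 the pod already exists. Purely for illustration, the missing object corresponds to a scheduling/v1 PriorityClass like the following; the value and description are the upstream bootstrap defaults as I understand them, not taken from this log:

```go
// Illustrative construction of the PriorityClass the mirror pods needed.
// This object is normally bootstrapped by the apiserver, not created by hand.
package main

import (
	"fmt"

	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pc := schedulingv1.PriorityClass{
		ObjectMeta:  metav1.ObjectMeta{Name: "system-node-critical"},
		Value:       2000001000, // assumed upstream default for this built-in class
		Description: "Used for system critical pods that must not be moved from their current node.",
	}
	fmt.Printf("%s: value=%d\n", pc.Name, pc.Value)
}
```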
Mar 4 01:11:07.615884 kubelet[2295]: I0304 01:11:07.615721 2295 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 4 01:11:07.674853 kubelet[2295]: E0304 01:11:07.674767 2295 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:11:07.784322 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 4 01:11:07.864957 systemd[1]: Reloading finished in 546 ms. Mar 4 01:11:07.918906 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 4 01:11:07.939977 systemd[1]: kubelet.service: Deactivated successfully. Mar 4 01:11:07.940472 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 4 01:11:07.955609 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 4 01:11:08.316445 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 4 01:11:08.335250 (kubelet)[2680]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 4 01:11:08.491430 kubelet[2680]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 4 01:11:08.491430 kubelet[2680]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 4 01:11:08.491430 kubelet[2680]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 4 01:11:08.491430 kubelet[2680]: I0304 01:11:08.490895 2680 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 4 01:11:08.513655 kubelet[2680]: I0304 01:11:08.513582 2680 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 4 01:11:08.513655 kubelet[2680]: I0304 01:11:08.513632 2680 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 4 01:11:08.519571 kubelet[2680]: I0304 01:11:08.518489 2680 server.go:956] "Client rotation is on, will bootstrap in background" Mar 4 01:11:08.521911 kubelet[2680]: I0304 01:11:08.521803 2680 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 4 01:11:08.528009 kubelet[2680]: I0304 01:11:08.526943 2680 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 4 01:11:08.533255 kubelet[2680]: E0304 01:11:08.532814 2680 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 4 01:11:08.533255 kubelet[2680]: I0304 01:11:08.532854 2680 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
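The restarted kubelet (PID 2680) warns above that --container-runtime-endpoint and --volume-plugin-dir should move into the file passed via --config, while --pod-infra-container-image is being phased out in favor of sandbox-image information from CRI. A minimal, self-contained sketch of such a kubelet.config.k8s.io/v1beta1 document; the local struct mirrors only the two relevant keys, and the endpoint and directory values are assumptions, not read from this host:

```go
// Sketch: the two deprecated flags expressed as kubelet config-file keys.
// The struct is a local illustration, not the full upstream KubeletConfiguration type.
package main

import (
	"encoding/json"
	"fmt"
)

type kubeletConfig struct {
	APIVersion               string `json:"apiVersion"`
	Kind                     string `json:"kind"`
	ContainerRuntimeEndpoint string `json:"containerRuntimeEndpoint,omitempty"`
	VolumePluginDir          string `json:"volumePluginDir,omitempty"`
}

func main() {
	cfg := kubeletConfig{
		APIVersion:               "kubelet.config.k8s.io/v1beta1",
		Kind:                     "KubeletConfiguration",
		ContainerRuntimeEndpoint: "unix:///run/containerd/containerd.sock", // assumed endpoint
		VolumePluginDir:          "/var/lib/kubelet/volumeplugins",         // assumed path
	}
	out, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(out)) // the kubelet accepts JSON or YAML for --config
}
```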
Mar 4 01:11:08.540733 kubelet[2680]: I0304 01:11:08.540711 2680 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 4 01:11:08.541694 kubelet[2680]: I0304 01:11:08.541662 2680 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 4 01:11:08.541898 kubelet[2680]: I0304 01:11:08.541768 2680 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Mar 4 01:11:08.542036 kubelet[2680]: I0304 01:11:08.542025 2680 topology_manager.go:138] "Creating topology manager with none policy" Mar 4 01:11:08.542218 kubelet[2680]: I0304 01:11:08.542202 2680 container_manager_linux.go:303] "Creating device plugin manager" Mar 4 01:11:08.542951 kubelet[2680]: I0304 01:11:08.542330 2680 state_mem.go:36] "Initialized new in-memory state store" Mar 4 01:11:08.542951 kubelet[2680]: I0304 01:11:08.542655 2680 kubelet.go:480] "Attempting to sync node with API server" Mar 4 01:11:08.542951 kubelet[2680]: I0304 01:11:08.542669 2680 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 4 01:11:08.542951 kubelet[2680]: I0304 01:11:08.542693 2680 kubelet.go:386] "Adding apiserver pod source" Mar 4 01:11:08.542951 kubelet[2680]: I0304 01:11:08.542703 2680 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 4 01:11:08.550661 kubelet[2680]: I0304 01:11:08.545602 2680 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 4 01:11:08.550661 kubelet[2680]: I0304 01:11:08.549206 2680 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 4 01:11:08.566741 kubelet[2680]: I0304 01:11:08.566640 2680 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 4 01:11:08.566907 kubelet[2680]: I0304 01:11:08.566895 2680 server.go:1289] "Started kubelet" Mar 4 01:11:08.567829 kubelet[2680]: I0304 
01:11:08.567596 2680 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 4 01:11:08.568516 kubelet[2680]: I0304 01:11:08.567628 2680 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 4 01:11:08.569601 kubelet[2680]: I0304 01:11:08.569361 2680 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 4 01:11:08.569882 kubelet[2680]: I0304 01:11:08.569810 2680 server.go:317] "Adding debug handlers to kubelet server" Mar 4 01:11:08.570506 kubelet[2680]: I0304 01:11:08.570430 2680 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 4 01:11:08.571713 kubelet[2680]: I0304 01:11:08.571659 2680 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 4 01:11:08.571850 kubelet[2680]: I0304 01:11:08.571788 2680 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 4 01:11:08.572824 kubelet[2680]: I0304 01:11:08.572810 2680 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 4 01:11:08.573290 kubelet[2680]: I0304 01:11:08.573277 2680 reconciler.go:26] "Reconciler: start to sync state" Mar 4 01:11:08.580743 kubelet[2680]: I0304 01:11:08.580456 2680 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 4 01:11:08.585459 kubelet[2680]: I0304 01:11:08.585253 2680 factory.go:223] Registration of the containerd container factory successfully Mar 4 01:11:08.585459 kubelet[2680]: I0304 01:11:08.585269 2680 factory.go:223] Registration of the systemd container factory successfully Mar 4 01:11:08.588181 kubelet[2680]: E0304 01:11:08.587970 2680 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 4 01:11:08.610181 kubelet[2680]: I0304 01:11:08.609967 2680 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 4 01:11:08.622893 kubelet[2680]: I0304 01:11:08.622853 2680 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 4 01:11:08.622893 kubelet[2680]: I0304 01:11:08.622887 2680 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 4 01:11:08.623013 kubelet[2680]: I0304 01:11:08.622938 2680 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 4 01:11:08.623013 kubelet[2680]: I0304 01:11:08.622948 2680 kubelet.go:2436] "Starting kubelet main sync loop" Mar 4 01:11:08.623173 kubelet[2680]: E0304 01:11:08.623044 2680 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 4 01:11:08.726977 kubelet[2680]: E0304 01:11:08.726882 2680 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 4 01:11:08.894922 kubelet[2680]: I0304 01:11:08.893816 2680 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 4 01:11:08.894922 kubelet[2680]: I0304 01:11:08.893876 2680 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 4 01:11:08.894922 kubelet[2680]: I0304 01:11:08.893915 2680 state_mem.go:36] "Initialized new in-memory state store" Mar 4 01:11:08.894922 kubelet[2680]: I0304 01:11:08.894353 2680 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 4 01:11:08.894922 kubelet[2680]: I0304 01:11:08.894365 2680 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 4 01:11:08.894922 kubelet[2680]: I0304 01:11:08.894382 2680 policy_none.go:49] "None policy: Start" Mar 4 01:11:08.894922 kubelet[2680]: I0304 01:11:08.894394 2680 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 4 01:11:08.894922 kubelet[2680]: I0304 01:11:08.894405 2680 state_mem.go:35] "Initializing new in-memory state store" Mar 4 01:11:08.894922 kubelet[2680]: I0304 01:11:08.894487 2680 state_mem.go:75] "Updated machine memory state" Mar 4 01:11:08.903313 kubelet[2680]: E0304 01:11:08.897496 2680 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 4 01:11:08.903313 kubelet[2680]: I0304 01:11:08.897939 2680 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 4 01:11:08.903313 kubelet[2680]: I0304 01:11:08.897961 2680 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 4 01:11:08.903313 kubelet[2680]: I0304 01:11:08.899513 2680 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 4 01:11:08.906064 kubelet[2680]: E0304 01:11:08.905001 2680 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 4 01:11:08.929186 kubelet[2680]: I0304 01:11:08.928882 2680 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 4 01:11:08.929456 kubelet[2680]: I0304 01:11:08.928515 2680 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 4 01:11:08.929553 kubelet[2680]: I0304 01:11:08.929529 2680 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 4 01:11:08.943973 kubelet[2680]: E0304 01:11:08.943853 2680 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Mar 4 01:11:08.978927 kubelet[2680]: I0304 01:11:08.978814 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c8b7756271db90386ca23f037904356c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c8b7756271db90386ca23f037904356c\") " pod="kube-system/kube-apiserver-localhost" Mar 4 01:11:08.979196 kubelet[2680]: I0304 01:11:08.978935 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 4 01:11:08.979196 kubelet[2680]: I0304 01:11:08.978985 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 4 01:11:08.979196 kubelet[2680]: I0304 01:11:08.979050 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c8b7756271db90386ca23f037904356c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c8b7756271db90386ca23f037904356c\") " pod="kube-system/kube-apiserver-localhost" Mar 4 01:11:08.979196 kubelet[2680]: I0304 01:11:08.979172 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c8b7756271db90386ca23f037904356c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c8b7756271db90386ca23f037904356c\") " pod="kube-system/kube-apiserver-localhost" Mar 4 01:11:08.979392 kubelet[2680]: I0304 01:11:08.979207 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 4 01:11:08.979392 kubelet[2680]: I0304 01:11:08.979235 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 4 
01:11:08.979392 kubelet[2680]: I0304 01:11:08.979257 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 4 01:11:08.979392 kubelet[2680]: I0304 01:11:08.979271 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e944e4cb17af904786c3a2e01e298498-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e944e4cb17af904786c3a2e01e298498\") " pod="kube-system/kube-scheduler-localhost" Mar 4 01:11:09.042471 kubelet[2680]: I0304 01:11:09.042399 2680 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 4 01:11:09.142952 kubelet[2680]: I0304 01:11:09.142483 2680 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Mar 4 01:11:09.142952 kubelet[2680]: I0304 01:11:09.142682 2680 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 4 01:11:09.245005 kubelet[2680]: E0304 01:11:09.244815 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:11:09.245972 kubelet[2680]: E0304 01:11:09.245216 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:11:09.245972 kubelet[2680]: E0304 01:11:09.245464 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:11:09.547497 kubelet[2680]: I0304 01:11:09.547296 2680 apiserver.go:52] "Watching apiserver" Mar 4 01:11:09.573270 kubelet[2680]: I0304 01:11:09.573166 2680 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 4 01:11:09.689753 kubelet[2680]: E0304 01:11:09.689299 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:11:09.689753 kubelet[2680]: E0304 01:11:09.689623 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:11:09.689753 kubelet[2680]: E0304 01:11:09.689623 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:11:10.127360 kubelet[2680]: I0304 01:11:10.127073 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.1270569249999998 podStartE2EDuration="3.127056925s" podCreationTimestamp="2026-03-04 01:11:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 01:11:10.112256131 +0000 UTC m=+1.730587603" watchObservedRunningTime="2026-03-04 01:11:10.127056925 +0000 UTC m=+1.745388397" Mar 4 01:11:10.128231 kubelet[2680]: I0304 01:11:10.128061 2680 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.128051416 podStartE2EDuration="2.128051416s" podCreationTimestamp="2026-03-04 01:11:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 01:11:10.127671586 +0000 UTC m=+1.746003058" watchObservedRunningTime="2026-03-04 01:11:10.128051416 +0000 UTC m=+1.746382917" Mar 4 01:11:10.157926 kubelet[2680]: I0304 01:11:10.157719 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.157336726 podStartE2EDuration="2.157336726s" podCreationTimestamp="2026-03-04 01:11:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 01:11:10.140544004 +0000 UTC m=+1.758875475" watchObservedRunningTime="2026-03-04 01:11:10.157336726 +0000 UTC m=+1.775668198" Mar 4 01:11:10.699386 kubelet[2680]: E0304 01:11:10.699326 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:11:10.701242 kubelet[2680]: E0304 01:11:10.700858 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:11:11.228854 kubelet[2680]: E0304 01:11:11.228766 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:11:11.714576 kubelet[2680]: E0304 01:11:11.711677 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:11:11.715314 kubelet[2680]: E0304 01:11:11.714057 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:11:12.716972 kubelet[2680]: E0304 01:11:12.716848 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:11:13.056236 kubelet[2680]: I0304 01:11:13.055440 2680 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 4 01:11:13.062039 containerd[1585]: time="2026-03-04T01:11:13.056260647Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
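The records just above and below show the kubelet pushing the node's PodCIDR (192.168.0.0/24) to the runtime, after which containerd waits for a CNI plugin to drop a config that consumes it. A sketch of the underlying CRI call, assuming the standard k8s.io/cri-api client; the socket path is illustrative:

```go
// Sketch of the CRI UpdateRuntimeConfig call behind "Updating runtime config
// through cri with podcidr". Socket path and error handling are illustrative.
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	_, err = client.UpdateRuntimeConfig(context.Background(),
		&runtimeapi.UpdateRuntimeConfigRequest{
			RuntimeConfig: &runtimeapi.RuntimeConfig{
				NetworkConfig: &runtimeapi.NetworkConfig{
					PodCidr: "192.168.0.0/24", // value from the log record above
				},
			},
		})
	if err != nil {
		log.Fatal(err)
	}
}
```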
Mar 4 01:11:13.063010 kubelet[2680]: I0304 01:11:13.059600 2680 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 4 01:11:13.719851 kubelet[2680]: E0304 01:11:13.719807 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:11:13.868658 kubelet[2680]: I0304 01:11:13.868501 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xtpl\" (UniqueName: \"kubernetes.io/projected/96e6a563-e182-49bf-adc1-2970ecab8626-kube-api-access-4xtpl\") pod \"tigera-operator-6bf85f8dd-44v5f\" (UID: \"96e6a563-e182-49bf-adc1-2970ecab8626\") " pod="tigera-operator/tigera-operator-6bf85f8dd-44v5f" Mar 4 01:11:13.868658 kubelet[2680]: I0304 01:11:13.868588 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/96e6a563-e182-49bf-adc1-2970ecab8626-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-44v5f\" (UID: \"96e6a563-e182-49bf-adc1-2970ecab8626\") " pod="tigera-operator/tigera-operator-6bf85f8dd-44v5f" Mar 4 01:11:13.969288 kubelet[2680]: I0304 01:11:13.969235 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9d42d540-a5e3-4d45-b8f3-9c4d81187d86-kube-proxy\") pod \"kube-proxy-kqr78\" (UID: \"9d42d540-a5e3-4d45-b8f3-9c4d81187d86\") " pod="kube-system/kube-proxy-kqr78" Mar 4 01:11:13.969288 kubelet[2680]: I0304 01:11:13.969290 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9d42d540-a5e3-4d45-b8f3-9c4d81187d86-xtables-lock\") pod \"kube-proxy-kqr78\" (UID: \"9d42d540-a5e3-4d45-b8f3-9c4d81187d86\") " pod="kube-system/kube-proxy-kqr78" Mar 4 01:11:13.969288 kubelet[2680]: I0304 01:11:13.969311 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnr4w\" (UniqueName: \"kubernetes.io/projected/9d42d540-a5e3-4d45-b8f3-9c4d81187d86-kube-api-access-nnr4w\") pod \"kube-proxy-kqr78\" (UID: \"9d42d540-a5e3-4d45-b8f3-9c4d81187d86\") " pod="kube-system/kube-proxy-kqr78" Mar 4 01:11:13.969741 kubelet[2680]: I0304 01:11:13.969383 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d42d540-a5e3-4d45-b8f3-9c4d81187d86-lib-modules\") pod \"kube-proxy-kqr78\" (UID: \"9d42d540-a5e3-4d45-b8f3-9c4d81187d86\") " pod="kube-system/kube-proxy-kqr78" Mar 4 01:11:14.128128 containerd[1585]: time="2026-03-04T01:11:14.127651725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-44v5f,Uid:96e6a563-e182-49bf-adc1-2970ecab8626,Namespace:tigera-operator,Attempt:0,}" Mar 4 01:11:14.207720 kubelet[2680]: E0304 01:11:14.207619 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:11:14.208525 containerd[1585]: time="2026-03-04T01:11:14.208484339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kqr78,Uid:9d42d540-a5e3-4d45-b8f3-9c4d81187d86,Namespace:kube-system,Attempt:0,}" Mar 4 01:11:14.236522 containerd[1585]: time="2026-03-04T01:11:14.234540623Z" 
level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:11:14.236522 containerd[1585]: time="2026-03-04T01:11:14.236126080Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:11:14.236522 containerd[1585]: time="2026-03-04T01:11:14.236149654Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:11:14.236522 containerd[1585]: time="2026-03-04T01:11:14.236385860Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:11:14.610245 containerd[1585]: time="2026-03-04T01:11:14.609145021Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:11:14.610245 containerd[1585]: time="2026-03-04T01:11:14.609297854Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:11:14.610245 containerd[1585]: time="2026-03-04T01:11:14.609320907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:11:14.610245 containerd[1585]: time="2026-03-04T01:11:14.609676384Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:11:14.630510 containerd[1585]: time="2026-03-04T01:11:14.630414357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-44v5f,Uid:96e6a563-e182-49bf-adc1-2970ecab8626,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a65c62412267ee7d0bfff74734d9a5b8cc42e61e3b9cd8c83ffe73f39b1c0772\"" Mar 4 01:11:14.633204 containerd[1585]: time="2026-03-04T01:11:14.632820066Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 4 01:11:14.923966 containerd[1585]: time="2026-03-04T01:11:14.923804652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kqr78,Uid:9d42d540-a5e3-4d45-b8f3-9c4d81187d86,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee3e4963860d06f50685e2a0fde648ec9825e641519cc5c38d1cc35243e3b0b9\"" Mar 4 01:11:14.925158 kubelet[2680]: E0304 01:11:14.924983 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:11:14.931270 containerd[1585]: time="2026-03-04T01:11:14.931162857Z" level=info msg="CreateContainer within sandbox \"ee3e4963860d06f50685e2a0fde648ec9825e641519cc5c38d1cc35243e3b0b9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 4 01:11:14.953964 containerd[1585]: time="2026-03-04T01:11:14.953835864Z" level=info msg="CreateContainer within sandbox \"ee3e4963860d06f50685e2a0fde648ec9825e641519cc5c38d1cc35243e3b0b9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"19e1c24394a9203808863f74a680c045a6578030007973e70c099ecba1fc60eb\"" Mar 4 01:11:14.954981 containerd[1585]: time="2026-03-04T01:11:14.954930540Z" level=info msg="StartContainer for \"19e1c24394a9203808863f74a680c045a6578030007973e70c099ecba1fc60eb\"" Mar 4 01:11:15.327391 containerd[1585]: time="2026-03-04T01:11:15.327308818Z" level=info msg="StartContainer for 
\"19e1c24394a9203808863f74a680c045a6578030007973e70c099ecba1fc60eb\" returns successfully" Mar 4 01:11:15.650597 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2066732309.mount: Deactivated successfully. Mar 4 01:11:15.809296 kubelet[2680]: E0304 01:11:15.808678 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:11:15.880136 kubelet[2680]: I0304 01:11:15.879912 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kqr78" podStartSLOduration=2.879890597 podStartE2EDuration="2.879890597s" podCreationTimestamp="2026-03-04 01:11:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 01:11:15.879484867 +0000 UTC m=+7.497816359" watchObservedRunningTime="2026-03-04 01:11:15.879890597 +0000 UTC m=+7.498222089" Mar 4 01:11:16.267181 kubelet[2680]: E0304 01:11:16.265274 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:11:16.810776 kubelet[2680]: E0304 01:11:16.810707 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:11:17.170843 containerd[1585]: time="2026-03-04T01:11:17.170668431Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:11:17.171540 containerd[1585]: time="2026-03-04T01:11:17.171315057Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Mar 4 01:11:17.172938 containerd[1585]: time="2026-03-04T01:11:17.172873849Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:11:17.176500 containerd[1585]: time="2026-03-04T01:11:17.176465487Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:11:17.177785 containerd[1585]: time="2026-03-04T01:11:17.177691605Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 2.544843066s" Mar 4 01:11:17.177785 containerd[1585]: time="2026-03-04T01:11:17.177733222Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Mar 4 01:11:17.182764 containerd[1585]: time="2026-03-04T01:11:17.182721551Z" level=info msg="CreateContainer within sandbox \"a65c62412267ee7d0bfff74734d9a5b8cc42e61e3b9cd8c83ffe73f39b1c0772\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 4 01:11:17.203169 containerd[1585]: time="2026-03-04T01:11:17.202942690Z" level=info msg="CreateContainer within sandbox \"a65c62412267ee7d0bfff74734d9a5b8cc42e61e3b9cd8c83ffe73f39b1c0772\" for 
&ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"52cf101cf816626e033f592ef63cd7ad4ee49672dfdc09ea96eb55d8780e9c2e\"" Mar 4 01:11:17.205718 containerd[1585]: time="2026-03-04T01:11:17.203812179Z" level=info msg="StartContainer for \"52cf101cf816626e033f592ef63cd7ad4ee49672dfdc09ea96eb55d8780e9c2e\"" Mar 4 01:11:17.254991 systemd[1]: run-containerd-runc-k8s.io-52cf101cf816626e033f592ef63cd7ad4ee49672dfdc09ea96eb55d8780e9c2e-runc.fSZZM7.mount: Deactivated successfully. Mar 4 01:11:17.820399 kubelet[2680]: E0304 01:11:17.819422 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:11:17.822247 update_engine[1564]: I20260304 01:11:17.821307 1564 update_attempter.cc:509] Updating boot flags... Mar 4 01:11:18.042265 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (3034) Mar 4 01:11:18.105724 containerd[1585]: time="2026-03-04T01:11:18.099817980Z" level=info msg="StartContainer for \"52cf101cf816626e033f592ef63cd7ad4ee49672dfdc09ea96eb55d8780e9c2e\" returns successfully" Mar 4 01:11:23.566660 sudo[1774]: pam_unix(sudo:session): session closed for user root Mar 4 01:11:23.571057 sshd[1768]: pam_unix(sshd:session): session closed for user core Mar 4 01:11:23.574558 systemd[1]: sshd@6-10.0.0.98:22-10.0.0.1:56388.service: Deactivated successfully. Mar 4 01:11:23.583556 systemd[1]: session-7.scope: Deactivated successfully. Mar 4 01:11:23.587050 systemd-logind[1558]: Session 7 logged out. Waiting for processes to exit. Mar 4 01:11:23.588766 systemd-logind[1558]: Removed session 7. Mar 4 01:11:25.489433 kubelet[2680]: I0304 01:11:25.489317 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-44v5f" podStartSLOduration=9.943098759 podStartE2EDuration="12.489291325s" podCreationTimestamp="2026-03-04 01:11:13 +0000 UTC" firstStartedPulling="2026-03-04 01:11:14.632397077 +0000 UTC m=+6.250728550" lastFinishedPulling="2026-03-04 01:11:17.178589643 +0000 UTC m=+8.796921116" observedRunningTime="2026-03-04 01:11:18.8360668 +0000 UTC m=+10.454398292" watchObservedRunningTime="2026-03-04 01:11:25.489291325 +0000 UTC m=+17.107622797" Mar 4 01:11:25.549379 kubelet[2680]: I0304 01:11:25.549203 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzpp5\" (UniqueName: \"kubernetes.io/projected/98a8c790-e67f-4710-9921-7e745524a1a5-kube-api-access-bzpp5\") pod \"calico-typha-74f4996bf-9wvfh\" (UID: \"98a8c790-e67f-4710-9921-7e745524a1a5\") " pod="calico-system/calico-typha-74f4996bf-9wvfh" Mar 4 01:11:25.549379 kubelet[2680]: I0304 01:11:25.549273 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98a8c790-e67f-4710-9921-7e745524a1a5-tigera-ca-bundle\") pod \"calico-typha-74f4996bf-9wvfh\" (UID: \"98a8c790-e67f-4710-9921-7e745524a1a5\") " pod="calico-system/calico-typha-74f4996bf-9wvfh" Mar 4 01:11:25.549379 kubelet[2680]: I0304 01:11:25.549303 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/98a8c790-e67f-4710-9921-7e745524a1a5-typha-certs\") pod \"calico-typha-74f4996bf-9wvfh\" (UID: \"98a8c790-e67f-4710-9921-7e745524a1a5\") " pod="calico-system/calico-typha-74f4996bf-9wvfh" 
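[editor's note] The "Observed pod startup duration" entry above for tigera-operator-6bf85f8dd-44v5f is easier to read once you notice the relationship between its fields: podStartSLOduration appears to be podStartE2EDuration minus the image-pull window (lastFinishedPulling minus firstStartedPulling, visible in the m=+ monotonic offsets). A quick check of that assumption, in Go, against the two pull-bearing tracker entries in this log, this one and the calico-typha one further down:

package main

import "fmt"

func main() {
	// tigera-operator-6bf85f8dd-44v5f (logged at 01:11:25.489):
	e2e := 12.489291325               // podStartE2EDuration, in seconds
	pull := 8.796921116 - 6.250728550 // lastFinishedPulling - firstStartedPulling (m=+ offsets)
	fmt.Printf("podStartSLOduration = %.9f\n", e2e-pull) // 9.943098759, matching the log

	// calico-typha-74f4996bf-9wvfh (logged at 01:11:27.877):
	e2e = 2.876780259
	pull = 18.929371156 - 17.534363456
	fmt.Printf("podStartSLOduration = %.9f\n", e2e-pull) // 1.481772559, matching the log
}

Both values reproduce the logged numbers exactly, consistent with the pull window being excluded from the startup SLO. (For kube-proxy-kqr78 earlier, the pull timestamps are the zero time, apparently because nothing was pulled, so the two durations coincide at 2.879890597s.)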
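[editor's note] The dns.go:153 "Nameserver limits exceeded" warning that recurs throughout this boot means the host's resolv.conf lists more nameservers than the limit of three that the kubelet enforces (matching the classic glibc MAXNS cap): it applies only the first three, here 1.1.1.1, 1.0.0.1 and 8.8.8.8, and warns again on every DNS config sync. A minimal Go sketch of that truncation, not kubelet's actual code; the fourth nameserver 8.8.4.4 is a made-up example, since the omitted entries never appear in the log:

package main

import (
	"fmt"
	"strings"
)

const maxNameservers = 3 // resolver limit the kubelet enforces (glibc MAXNS)

func main() {
	// Hypothetical host /etc/resolv.conf with one nameserver too many.
	resolvConf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4"

	var servers []string
	for _, line := range strings.Split(resolvConf, "\n") {
		fields := strings.Fields(line)
		if len(fields) == 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %s\n",
			strings.Join(servers[:maxNameservers], " "))
	}
}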
Mar 4 01:11:25.650535 kubelet[2680]: I0304 01:11:25.650478 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1a7a7849-374c-4d7f-aa9a-c566916ce62f-lib-modules\") pod \"calico-node-kkd5t\" (UID: \"1a7a7849-374c-4d7f-aa9a-c566916ce62f\") " pod="calico-system/calico-node-kkd5t" Mar 4 01:11:25.650535 kubelet[2680]: I0304 01:11:25.650535 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/1a7a7849-374c-4d7f-aa9a-c566916ce62f-nodeproc\") pod \"calico-node-kkd5t\" (UID: \"1a7a7849-374c-4d7f-aa9a-c566916ce62f\") " pod="calico-system/calico-node-kkd5t" Mar 4 01:11:25.650742 kubelet[2680]: I0304 01:11:25.650554 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1a7a7849-374c-4d7f-aa9a-c566916ce62f-var-lib-calico\") pod \"calico-node-kkd5t\" (UID: \"1a7a7849-374c-4d7f-aa9a-c566916ce62f\") " pod="calico-system/calico-node-kkd5t" Mar 4 01:11:25.650742 kubelet[2680]: I0304 01:11:25.650569 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/1a7a7849-374c-4d7f-aa9a-c566916ce62f-bpffs\") pod \"calico-node-kkd5t\" (UID: \"1a7a7849-374c-4d7f-aa9a-c566916ce62f\") " pod="calico-system/calico-node-kkd5t" Mar 4 01:11:25.650742 kubelet[2680]: I0304 01:11:25.650581 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1a7a7849-374c-4d7f-aa9a-c566916ce62f-cni-bin-dir\") pod \"calico-node-kkd5t\" (UID: \"1a7a7849-374c-4d7f-aa9a-c566916ce62f\") " pod="calico-system/calico-node-kkd5t" Mar 4 01:11:25.650742 kubelet[2680]: I0304 01:11:25.650593 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1a7a7849-374c-4d7f-aa9a-c566916ce62f-cni-net-dir\") pod \"calico-node-kkd5t\" (UID: \"1a7a7849-374c-4d7f-aa9a-c566916ce62f\") " pod="calico-system/calico-node-kkd5t" Mar 4 01:11:25.650742 kubelet[2680]: I0304 01:11:25.650607 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/1a7a7849-374c-4d7f-aa9a-c566916ce62f-sys-fs\") pod \"calico-node-kkd5t\" (UID: \"1a7a7849-374c-4d7f-aa9a-c566916ce62f\") " pod="calico-system/calico-node-kkd5t" Mar 4 01:11:25.651007 kubelet[2680]: I0304 01:11:25.650630 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv5bl\" (UniqueName: \"kubernetes.io/projected/1a7a7849-374c-4d7f-aa9a-c566916ce62f-kube-api-access-pv5bl\") pod \"calico-node-kkd5t\" (UID: \"1a7a7849-374c-4d7f-aa9a-c566916ce62f\") " pod="calico-system/calico-node-kkd5t" Mar 4 01:11:25.651007 kubelet[2680]: I0304 01:11:25.650655 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1a7a7849-374c-4d7f-aa9a-c566916ce62f-node-certs\") pod \"calico-node-kkd5t\" (UID: \"1a7a7849-374c-4d7f-aa9a-c566916ce62f\") " pod="calico-system/calico-node-kkd5t" Mar 4 01:11:25.651007 kubelet[2680]: I0304 01:11:25.650691 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1a7a7849-374c-4d7f-aa9a-c566916ce62f-policysync\") pod \"calico-node-kkd5t\" (UID: \"1a7a7849-374c-4d7f-aa9a-c566916ce62f\") " pod="calico-system/calico-node-kkd5t" Mar 4 01:11:25.651007 kubelet[2680]: I0304 01:11:25.650731 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1a7a7849-374c-4d7f-aa9a-c566916ce62f-flexvol-driver-host\") pod \"calico-node-kkd5t\" (UID: \"1a7a7849-374c-4d7f-aa9a-c566916ce62f\") " pod="calico-system/calico-node-kkd5t" Mar 4 01:11:25.651007 kubelet[2680]: I0304 01:11:25.650825 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1a7a7849-374c-4d7f-aa9a-c566916ce62f-tigera-ca-bundle\") pod \"calico-node-kkd5t\" (UID: \"1a7a7849-374c-4d7f-aa9a-c566916ce62f\") " pod="calico-system/calico-node-kkd5t" Mar 4 01:11:25.652473 kubelet[2680]: I0304 01:11:25.652210 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1a7a7849-374c-4d7f-aa9a-c566916ce62f-var-run-calico\") pod \"calico-node-kkd5t\" (UID: \"1a7a7849-374c-4d7f-aa9a-c566916ce62f\") " pod="calico-system/calico-node-kkd5t" Mar 4 01:11:25.652587 kubelet[2680]: I0304 01:11:25.652486 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1a7a7849-374c-4d7f-aa9a-c566916ce62f-xtables-lock\") pod \"calico-node-kkd5t\" (UID: \"1a7a7849-374c-4d7f-aa9a-c566916ce62f\") " pod="calico-system/calico-node-kkd5t" Mar 4 01:11:25.652587 kubelet[2680]: I0304 01:11:25.652530 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1a7a7849-374c-4d7f-aa9a-c566916ce62f-cni-log-dir\") pod \"calico-node-kkd5t\" (UID: \"1a7a7849-374c-4d7f-aa9a-c566916ce62f\") " pod="calico-system/calico-node-kkd5t" Mar 4 01:11:25.661778 kubelet[2680]: E0304 01:11:25.661483 2680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4rchx" podUID="d7d28883-d73e-4c89-9e3a-693929b745d0" Mar 4 01:11:25.753599 kubelet[2680]: I0304 01:11:25.753488 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d7d28883-d73e-4c89-9e3a-693929b745d0-registration-dir\") pod \"csi-node-driver-4rchx\" (UID: \"d7d28883-d73e-4c89-9e3a-693929b745d0\") " pod="calico-system/csi-node-driver-4rchx" Mar 4 01:11:25.753698 kubelet[2680]: I0304 01:11:25.753633 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d7d28883-d73e-4c89-9e3a-693929b745d0-kubelet-dir\") pod \"csi-node-driver-4rchx\" (UID: \"d7d28883-d73e-4c89-9e3a-693929b745d0\") " pod="calico-system/csi-node-driver-4rchx" Mar 4 01:11:25.753698 kubelet[2680]: I0304 01:11:25.753681 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/host-path/d7d28883-d73e-4c89-9e3a-693929b745d0-socket-dir\") pod \"csi-node-driver-4rchx\" (UID: \"d7d28883-d73e-4c89-9e3a-693929b745d0\") " pod="calico-system/csi-node-driver-4rchx" Mar 4 01:11:25.753745 kubelet[2680]: I0304 01:11:25.753733 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d7d28883-d73e-4c89-9e3a-693929b745d0-varrun\") pod \"csi-node-driver-4rchx\" (UID: \"d7d28883-d73e-4c89-9e3a-693929b745d0\") " pod="calico-system/csi-node-driver-4rchx" Mar 4 01:11:25.753766 kubelet[2680]: I0304 01:11:25.753755 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7jg2\" (UniqueName: \"kubernetes.io/projected/d7d28883-d73e-4c89-9e3a-693929b745d0-kube-api-access-m7jg2\") pod \"csi-node-driver-4rchx\" (UID: \"d7d28883-d73e-4c89-9e3a-693929b745d0\") " pod="calico-system/csi-node-driver-4rchx" Mar 4 01:11:25.755873 kubelet[2680]: E0304 01:11:25.755681 2680 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:11:25.755873 kubelet[2680]: W0304 01:11:25.755696 2680 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:11:25.755873 kubelet[2680]: E0304 01:11:25.755711 2680 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:11:25.756154 kubelet[2680]: E0304 01:11:25.756140 2680 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:11:25.756183 kubelet[2680]: W0304 01:11:25.756157 2680 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:11:25.756183 kubelet[2680]: E0304 01:11:25.756176 2680 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:11:25.756934 kubelet[2680]: E0304 01:11:25.756692 2680 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:11:25.756934 kubelet[2680]: W0304 01:11:25.756706 2680 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:11:25.756934 kubelet[2680]: E0304 01:11:25.756720 2680 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:11:25.757168 kubelet[2680]: E0304 01:11:25.757154 2680 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:11:25.757168 kubelet[2680]: W0304 01:11:25.757166 2680 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:11:25.757213 kubelet[2680]: E0304 01:11:25.757178 2680 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:11:25.757652 kubelet[2680]: E0304 01:11:25.757564 2680 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:11:25.757652 kubelet[2680]: W0304 01:11:25.757603 2680 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:11:25.757652 kubelet[2680]: E0304 01:11:25.757619 2680 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:11:25.761285 kubelet[2680]: E0304 01:11:25.761213 2680 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:11:25.761285 kubelet[2680]: W0304 01:11:25.761254 2680 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:11:25.761285 kubelet[2680]: E0304 01:11:25.761270 2680 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:11:25.766146 kubelet[2680]: E0304 01:11:25.764345 2680 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:11:25.766146 kubelet[2680]: W0304 01:11:25.764384 2680 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:11:25.766146 kubelet[2680]: E0304 01:11:25.764399 2680 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:11:25.766146 kubelet[2680]: E0304 01:11:25.765624 2680 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:11:25.766146 kubelet[2680]: W0304 01:11:25.765635 2680 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:11:25.766146 kubelet[2680]: E0304 01:11:25.765646 2680 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:11:25.803285 kubelet[2680]: E0304 01:11:25.803189 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:11:25.804313 containerd[1585]: time="2026-03-04T01:11:25.804057374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-74f4996bf-9wvfh,Uid:98a8c790-e67f-4710-9921-7e745524a1a5,Namespace:calico-system,Attempt:0,}" Mar 4 01:11:25.837921 containerd[1585]: time="2026-03-04T01:11:25.837228793Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:11:25.837921 containerd[1585]: time="2026-03-04T01:11:25.837328237Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:11:25.837921 containerd[1585]: time="2026-03-04T01:11:25.837344288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:11:25.837921 containerd[1585]: time="2026-03-04T01:11:25.837456446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:11:25.855296 kubelet[2680]: E0304 01:11:25.855196 2680 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:11:25.855296 kubelet[2680]: W0304 01:11:25.855216 2680 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:11:25.855296 kubelet[2680]: E0304 01:11:25.855234 2680 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:11:25.855542 kubelet[2680]: E0304 01:11:25.855446 2680 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:11:25.855542 kubelet[2680]: W0304 01:11:25.855454 2680 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:11:25.855542 kubelet[2680]: E0304 01:11:25.855462 2680 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:11:25.855836 kubelet[2680]: E0304 01:11:25.855801 2680 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:11:25.855836 kubelet[2680]: W0304 01:11:25.855823 2680 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:11:25.855836 kubelet[2680]: E0304 01:11:25.855832 2680 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:11:25.856228 kubelet[2680]: E0304 01:11:25.856197 2680 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:11:25.856228 kubelet[2680]: W0304 01:11:25.856221 2680 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:11:25.856228 kubelet[2680]: E0304 01:11:25.856229 2680 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:11:25.856491 kubelet[2680]: E0304 01:11:25.856455 2680 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:11:25.856491 kubelet[2680]: W0304 01:11:25.856484 2680 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:11:25.856491 kubelet[2680]: E0304 01:11:25.856493 2680 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:11:25.856990 kubelet[2680]: E0304 01:11:25.856928 2680 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:11:25.856990 kubelet[2680]: W0304 01:11:25.856954 2680 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:11:25.856990 kubelet[2680]: E0304 01:11:25.856990 2680 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:11:25.857727 kubelet[2680]: E0304 01:11:25.857682 2680 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:11:25.857727 kubelet[2680]: W0304 01:11:25.857712 2680 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:11:25.857727 kubelet[2680]: E0304 01:11:25.857721 2680 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:11:25.858068 kubelet[2680]: E0304 01:11:25.858025 2680 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:11:25.858068 kubelet[2680]: W0304 01:11:25.858052 2680 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:11:25.858068 kubelet[2680]: E0304 01:11:25.858061 2680 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:11:25.858426 kubelet[2680]: E0304 01:11:25.858400 2680 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:11:25.858426 kubelet[2680]: W0304 01:11:25.858421 2680 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:11:25.858477 kubelet[2680]: E0304 01:11:25.858430 2680 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:11:25.858720 kubelet[2680]: E0304 01:11:25.858695 2680 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:11:25.858720 kubelet[2680]: W0304 01:11:25.858717 2680 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:11:25.858775 kubelet[2680]: E0304 01:11:25.858726 2680 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:11:25.859128 kubelet[2680]: E0304 01:11:25.859036 2680 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:11:25.859128 kubelet[2680]: W0304 01:11:25.859061 2680 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:11:25.859128 kubelet[2680]: E0304 01:11:25.859069 2680 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:11:25.859428 kubelet[2680]: E0304 01:11:25.859403 2680 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:11:25.859428 kubelet[2680]: W0304 01:11:25.859426 2680 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:11:25.859488 kubelet[2680]: E0304 01:11:25.859434 2680 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:11:25.859667 kubelet[2680]: E0304 01:11:25.859617 2680 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:11:25.859667 kubelet[2680]: W0304 01:11:25.859650 2680 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:11:25.859667 kubelet[2680]: E0304 01:11:25.859658 2680 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:11:25.859884 kubelet[2680]: E0304 01:11:25.859856 2680 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:11:25.859907 kubelet[2680]: W0304 01:11:25.859883 2680 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:11:25.859907 kubelet[2680]: E0304 01:11:25.859891 2680 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:11:25.860303 kubelet[2680]: E0304 01:11:25.860277 2680 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:11:25.860303 kubelet[2680]: W0304 01:11:25.860300 2680 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:11:25.860355 kubelet[2680]: E0304 01:11:25.860308 2680 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:11:25.860634 kubelet[2680]: E0304 01:11:25.860611 2680 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:11:25.860690 kubelet[2680]: W0304 01:11:25.860633 2680 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:11:25.860690 kubelet[2680]: E0304 01:11:25.860688 2680 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:11:25.860926 containerd[1585]: time="2026-03-04T01:11:25.860877052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kkd5t,Uid:1a7a7849-374c-4d7f-aa9a-c566916ce62f,Namespace:calico-system,Attempt:0,}" Mar 4 01:11:25.860997 kubelet[2680]: E0304 01:11:25.860925 2680 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:11:25.860997 kubelet[2680]: W0304 01:11:25.860933 2680 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:11:25.860997 kubelet[2680]: E0304 01:11:25.860940 2680 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:11:25.861447 kubelet[2680]: E0304 01:11:25.861405 2680 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:11:25.861487 kubelet[2680]: W0304 01:11:25.861464 2680 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:11:25.861487 kubelet[2680]: E0304 01:11:25.861473 2680 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:11:25.861825 kubelet[2680]: E0304 01:11:25.861733 2680 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:11:25.861825 kubelet[2680]: W0304 01:11:25.861810 2680 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:11:25.861825 kubelet[2680]: E0304 01:11:25.861818 2680 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:11:25.862326 kubelet[2680]: E0304 01:11:25.862302 2680 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:11:25.862326 kubelet[2680]: W0304 01:11:25.862323 2680 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:11:25.862388 kubelet[2680]: E0304 01:11:25.862332 2680 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:11:25.862817 kubelet[2680]: E0304 01:11:25.862773 2680 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:11:25.862817 kubelet[2680]: W0304 01:11:25.862799 2680 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:11:25.862817 kubelet[2680]: E0304 01:11:25.862808 2680 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:11:25.863233 kubelet[2680]: E0304 01:11:25.863207 2680 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:11:25.863233 kubelet[2680]: W0304 01:11:25.863231 2680 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:11:25.863296 kubelet[2680]: E0304 01:11:25.863240 2680 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:11:25.863552 kubelet[2680]: E0304 01:11:25.863527 2680 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:11:25.863552 kubelet[2680]: W0304 01:11:25.863550 2680 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:11:25.863603 kubelet[2680]: E0304 01:11:25.863558 2680 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:11:25.863897 kubelet[2680]: E0304 01:11:25.863850 2680 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:11:25.863897 kubelet[2680]: W0304 01:11:25.863877 2680 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:11:25.863897 kubelet[2680]: E0304 01:11:25.863886 2680 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:11:25.865293 kubelet[2680]: E0304 01:11:25.865249 2680 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:11:25.865293 kubelet[2680]: W0304 01:11:25.865278 2680 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:11:25.865293 kubelet[2680]: E0304 01:11:25.865288 2680 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:11:25.874793 kubelet[2680]: E0304 01:11:25.874659 2680 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:11:25.874793 kubelet[2680]: W0304 01:11:25.874680 2680 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:11:25.874793 kubelet[2680]: E0304 01:11:25.874728 2680 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:11:25.908668 containerd[1585]: time="2026-03-04T01:11:25.908306954Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:11:25.908668 containerd[1585]: time="2026-03-04T01:11:25.908406539Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:11:25.908668 containerd[1585]: time="2026-03-04T01:11:25.908426868Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:11:25.908837 containerd[1585]: time="2026-03-04T01:11:25.908567088Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
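[editor's note] The FlexVolume message storm collapsed above has a single cause: the kubelet periodically probes every directory under its volume-plugin dir, and Calico's nodeagent~uds directory exists before the uds driver binary inside it does. Each probe execs the driver with "init", gets no stdout because the exec itself fails, and json.Unmarshal of an empty byte slice then yields exactly "unexpected end of JSON input". A self-contained Go sketch of that probe, modeled on the behaviour the log describes rather than copied from kubelet source; the DriverStatus shape is illustrative. The flexvol-driver container created further down appears to be what eventually installs the missing binary.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// DriverStatus stands in for the JSON reply a FlexVolume driver is
// expected to print for "init"; the exact field set is an assumption.
type DriverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func probe(driver string) (*DriverStatus, error) {
	out, err := exec.Command(driver, "init").CombinedOutput()
	if err != nil {
		// With the binary missing, err reports the failed exec and out stays empty.
		fmt.Printf("FlexVolume: driver call failed: executable: %s, args: [init], error: %v, output: %q\n", driver, err, out)
	}
	var st DriverStatus
	if uerr := json.Unmarshal(out, &st); uerr != nil {
		// Unmarshalling empty output fails with "unexpected end of JSON input".
		return nil, fmt.Errorf("failed to unmarshal output for command: init, output: %q, error: %v", out, uerr)
	}
	return &st, nil
}

func main() {
	_, err := probe("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds")
	fmt.Println(err) // reproduces the error repeated in the log
}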
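[editor's note] The "cni plugin not initialized" failures for csi-node-driver-4rchx (one above at 01:11:25.661, another below at 01:11:27.633) are the expected state while calico-node is still coming up: the runtime reports NetworkReady=false until a CNI network config is installed, and the kubelet skips syncing pods that need pod networking. A toy readiness check in Go, under the assumption that the config dir is the conventional /etc/cni/net.d (on this host it is whatever calico-node's cni-net-dir host-path mount points at):

package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	// CNI conf dir path is an assumption; adjust to the runtime's --cni-conf-dir.
	confs, _ := filepath.Glob("/etc/cni/net.d/*.conflist")
	more, _ := filepath.Glob("/etc/cni/net.d/*.conf")
	confs = append(confs, more...)
	if len(confs) == 0 {
		fmt.Println("NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized")
		return
	}
	fmt.Println("NetworkReady=true, configs:", confs)
}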
Mar 4 01:11:25.912868 containerd[1585]: time="2026-03-04T01:11:25.912806462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-74f4996bf-9wvfh,Uid:98a8c790-e67f-4710-9921-7e745524a1a5,Namespace:calico-system,Attempt:0,} returns sandbox id \"5cf2e1492e6c78f11270ada5b06b9f18050152bbec3627741f22c4e89759f16a\""
Mar 4 01:11:25.913791 kubelet[2680]: E0304 01:11:25.913742 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:11:25.916904 containerd[1585]: time="2026-03-04T01:11:25.916837500Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\""
Mar 4 01:11:25.972397 containerd[1585]: time="2026-03-04T01:11:25.972314624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kkd5t,Uid:1a7a7849-374c-4d7f-aa9a-c566916ce62f,Namespace:calico-system,Attempt:0,} returns sandbox id \"530b4cc6dcb8789c7b422b8c06299687860d2981a22ddafaf7ae15b1ff61a776\""
Mar 4 01:11:27.304271 containerd[1585]: time="2026-03-04T01:11:27.304208157Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:11:27.305355 containerd[1585]: time="2026-03-04T01:11:27.305287004Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596"
Mar 4 01:11:27.306567 containerd[1585]: time="2026-03-04T01:11:27.306504334Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:11:27.309404 containerd[1585]: time="2026-03-04T01:11:27.309346794Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:11:27.310073 containerd[1585]: time="2026-03-04T01:11:27.310017486Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 1.39302043s"
Mar 4 01:11:27.310073 containerd[1585]: time="2026-03-04T01:11:27.310063631Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\""
Mar 4 01:11:27.311332 containerd[1585]: time="2026-03-04T01:11:27.311275119Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\""
Mar 4 01:11:27.338173 containerd[1585]: time="2026-03-04T01:11:27.338124902Z" level=info msg="CreateContainer within sandbox \"5cf2e1492e6c78f11270ada5b06b9f18050152bbec3627741f22c4e89759f16a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Mar 4 01:11:27.354720 containerd[1585]: time="2026-03-04T01:11:27.354645226Z" level=info msg="CreateContainer within sandbox \"5cf2e1492e6c78f11270ada5b06b9f18050152bbec3627741f22c4e89759f16a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"4be8c773af016dcfcc93e4699237668fec49d631e6404562bdcae799ecf65e72\"" Mar 4 01:11:27.355530 containerd[1585]: 
time="2026-03-04T01:11:27.355483588Z" level=info msg="StartContainer for \"4be8c773af016dcfcc93e4699237668fec49d631e6404562bdcae799ecf65e72\"" Mar 4 01:11:27.460842 containerd[1585]: time="2026-03-04T01:11:27.460699531Z" level=info msg="StartContainer for \"4be8c773af016dcfcc93e4699237668fec49d631e6404562bdcae799ecf65e72\" returns successfully" Mar 4 01:11:27.636711 kubelet[2680]: E0304 01:11:27.633304 2680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4rchx" podUID="d7d28883-d73e-4c89-9e3a-693929b745d0" Mar 4 01:11:27.862488 kubelet[2680]: E0304 01:11:27.862398 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:11:27.877055 kubelet[2680]: I0304 01:11:27.876802 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-74f4996bf-9wvfh" podStartSLOduration=1.481772559 podStartE2EDuration="2.876780259s" podCreationTimestamp="2026-03-04 01:11:25 +0000 UTC" firstStartedPulling="2026-03-04 01:11:25.916031964 +0000 UTC m=+17.534363456" lastFinishedPulling="2026-03-04 01:11:27.311039684 +0000 UTC m=+18.929371156" observedRunningTime="2026-03-04 01:11:27.87659807 +0000 UTC m=+19.494929542" watchObservedRunningTime="2026-03-04 01:11:27.876780259 +0000 UTC m=+19.495111761" Mar 4 01:11:27.939405 kubelet[2680]: E0304 01:11:27.938910 2680 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:11:27.939405 kubelet[2680]: W0304 01:11:27.938947 2680 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:11:27.939405 kubelet[2680]: E0304 01:11:27.939048 2680 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:11:27.939716 kubelet[2680]: E0304 01:11:27.939424 2680 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:11:27.939716 kubelet[2680]: W0304 01:11:27.939435 2680 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:11:27.939716 kubelet[2680]: E0304 01:11:27.939447 2680 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:11:27.939839 kubelet[2680]: E0304 01:11:27.939787 2680 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:11:27.939869 kubelet[2680]: W0304 01:11:27.939862 2680 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:11:27.939895 kubelet[2680]: E0304 01:11:27.939873 2680 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:11:27.940248 kubelet[2680]: E0304 01:11:27.940226 2680 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:11:27.940248 kubelet[2680]: W0304 01:11:27.940247 2680 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:11:27.940308 kubelet[2680]: E0304 01:11:27.940257 2680 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:11:27.940704 kubelet[2680]: E0304 01:11:27.940686 2680 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:11:27.940704 kubelet[2680]: W0304 01:11:27.940695 2680 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:11:27.940704 kubelet[2680]: E0304 01:11:27.940703 2680 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:11:27.941237 kubelet[2680]: E0304 01:11:27.941226 2680 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:11:27.941237 kubelet[2680]: W0304 01:11:27.941236 2680 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:11:27.941300 kubelet[2680]: E0304 01:11:27.941244 2680 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:11:27.941519 kubelet[2680]: E0304 01:11:27.941497 2680 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:11:27.941519 kubelet[2680]: W0304 01:11:27.941506 2680 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:11:27.941519 kubelet[2680]: E0304 01:11:27.941514 2680 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:11:27.941810 kubelet[2680]: E0304 01:11:27.941786 2680 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:11:27.941810 kubelet[2680]: W0304 01:11:27.941809 2680 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:11:27.941853 kubelet[2680]: E0304 01:11:27.941817 2680 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:11:27.942254 kubelet[2680]: E0304 01:11:27.942209 2680 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:11:27.942254 kubelet[2680]: W0304 01:11:27.942250 2680 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:11:27.942403 kubelet[2680]: E0304 01:11:27.942306 2680 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:11:27.942839 kubelet[2680]: E0304 01:11:27.942797 2680 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:11:27.942993 kubelet[2680]: W0304 01:11:27.942875 2680 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:11:27.942993 kubelet[2680]: E0304 01:11:27.942893 2680 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:11:27.943474 kubelet[2680]: E0304 01:11:27.943441 2680 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:11:27.943474 kubelet[2680]: W0304 01:11:27.943458 2680 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:11:27.943474 kubelet[2680]: E0304 01:11:27.943470 2680 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:11:27.944113 kubelet[2680]: E0304 01:11:27.944038 2680 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:11:27.944113 kubelet[2680]: W0304 01:11:27.944067 2680 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:11:27.944187 kubelet[2680]: E0304 01:11:27.944130 2680 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:11:27.944418 kubelet[2680]: E0304 01:11:27.944384 2680 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:11:27.944418 kubelet[2680]: W0304 01:11:27.944395 2680 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:11:27.944418 kubelet[2680]: E0304 01:11:27.944404 2680 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:11:27.944792 kubelet[2680]: E0304 01:11:27.944724 2680 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:11:27.944792 kubelet[2680]: W0304 01:11:27.944745 2680 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:11:27.944792 kubelet[2680]: E0304 01:11:27.944754 2680 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:11:27.945061 kubelet[2680]: E0304 01:11:27.945030 2680 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:11:27.945061 kubelet[2680]: W0304 01:11:27.945058 2680 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:11:27.945155 kubelet[2680]: E0304 01:11:27.945069 2680 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Mar 4 01:11:27.958525 containerd[1585]: time="2026-03-04T01:11:27.958410435Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:11:27.959428 containerd[1585]: time="2026-03-04T01:11:27.959366754Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250"
Mar 4 01:11:27.960851 containerd[1585]: time="2026-03-04T01:11:27.960795810Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:11:27.963739 containerd[1585]: time="2026-03-04T01:11:27.963650534Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:11:27.964509 containerd[1585]: time="2026-03-04T01:11:27.964437641Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 653.129832ms"
Mar 4 01:11:27.964509 containerd[1585]: time="2026-03-04T01:11:27.964475611Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\""
Mar 4 01:11:27.969599 containerd[1585]: time="2026-03-04T01:11:27.969544401Z" level=info msg="CreateContainer within sandbox \"530b4cc6dcb8789c7b422b8c06299687860d2981a22ddafaf7ae15b1ff61a776\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
[log elided: the three FlexVolume probe messages repeat 4 more times between 01:11:27.985 and 01:11:27.987]
Mar 4 01:11:27.987712 containerd[1585]: time="2026-03-04T01:11:27.987669760Z" level=info msg="CreateContainer within sandbox \"530b4cc6dcb8789c7b422b8c06299687860d2981a22ddafaf7ae15b1ff61a776\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"49b49acee8eba15a2351bfc2bc301ef9eff4942e40a1cc69d40e3fb7a4085398\""
[log elided: the three FlexVolume probe messages repeat 2 more times between 01:11:27.987 and 01:11:27.988]
Mar 4 01:11:27.988537 containerd[1585]: time="2026-03-04T01:11:27.988427947Z" level=info msg="StartContainer for \"49b49acee8eba15a2351bfc2bc301ef9eff4942e40a1cc69d40e3fb7a4085398\""
[log elided: the three FlexVolume probe messages repeat 3 more times between 01:11:27.988 and 01:11:27.990]
Mar 4 01:11:27.990244 kubelet[2680]: E0304 01:11:27.990066 2680 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:11:27.990244 kubelet[2680]: W0304 01:11:27.990179 2680 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:11:27.990244 kubelet[2680]: E0304 01:11:27.990193 2680 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:11:27.991025 kubelet[2680]: E0304 01:11:27.990920 2680 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:11:27.991025 kubelet[2680]: W0304 01:11:27.991021 2680 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:11:27.991163 kubelet[2680]: E0304 01:11:27.991038 2680 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:11:27.991914 kubelet[2680]: E0304 01:11:27.991781 2680 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:11:27.991914 kubelet[2680]: W0304 01:11:27.991905 2680 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:11:27.992034 kubelet[2680]: E0304 01:11:27.991920 2680 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:11:27.992887 kubelet[2680]: E0304 01:11:27.992861 2680 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:11:27.992887 kubelet[2680]: W0304 01:11:27.992885 2680 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:11:27.992948 kubelet[2680]: E0304 01:11:27.992899 2680 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:11:27.993313 kubelet[2680]: E0304 01:11:27.993284 2680 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:11:27.993313 kubelet[2680]: W0304 01:11:27.993309 2680 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:11:27.993451 kubelet[2680]: E0304 01:11:27.993319 2680 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:11:27.993876 kubelet[2680]: E0304 01:11:27.993850 2680 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:11:27.993876 kubelet[2680]: W0304 01:11:27.993874 2680 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:11:27.993938 kubelet[2680]: E0304 01:11:27.993884 2680 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:11:27.994360 kubelet[2680]: E0304 01:11:27.994334 2680 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:11:27.994360 kubelet[2680]: W0304 01:11:27.994355 2680 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:11:27.994427 kubelet[2680]: E0304 01:11:27.994365 2680 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:11:27.994671 kubelet[2680]: E0304 01:11:27.994648 2680 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:11:27.994671 kubelet[2680]: W0304 01:11:27.994667 2680 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:11:27.994723 kubelet[2680]: E0304 01:11:27.994676 2680 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:11:27.995147 kubelet[2680]: E0304 01:11:27.995062 2680 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:11:27.995147 kubelet[2680]: W0304 01:11:27.995128 2680 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:11:27.995147 kubelet[2680]: E0304 01:11:27.995138 2680 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:11:28.062238 containerd[1585]: time="2026-03-04T01:11:28.062188943Z" level=info msg="StartContainer for \"49b49acee8eba15a2351bfc2bc301ef9eff4942e40a1cc69d40e3fb7a4085398\" returns successfully" Mar 4 01:11:28.119493 containerd[1585]: time="2026-03-04T01:11:28.119422602Z" level=info msg="shim disconnected" id=49b49acee8eba15a2351bfc2bc301ef9eff4942e40a1cc69d40e3fb7a4085398 namespace=k8s.io Mar 4 01:11:28.119493 containerd[1585]: time="2026-03-04T01:11:28.119493053Z" level=warning msg="cleaning up after shim disconnected" id=49b49acee8eba15a2351bfc2bc301ef9eff4942e40a1cc69d40e3fb7a4085398 namespace=k8s.io Mar 4 01:11:28.119493 containerd[1585]: time="2026-03-04T01:11:28.119504014Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 4 01:11:28.660582 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-49b49acee8eba15a2351bfc2bc301ef9eff4942e40a1cc69d40e3fb7a4085398-rootfs.mount: Deactivated successfully. 
Mar 4 01:11:28.866745 kubelet[2680]: E0304 01:11:28.866681 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:11:28.868500 containerd[1585]: time="2026-03-04T01:11:28.868456926Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 4 01:11:29.624003 kubelet[2680]: E0304 01:11:29.623561 2680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4rchx" podUID="d7d28883-d73e-4c89-9e3a-693929b745d0" Mar 4 01:11:29.881424 kubelet[2680]: E0304 01:11:29.881305 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:11:31.625352 kubelet[2680]: E0304 01:11:31.624299 2680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4rchx" podUID="d7d28883-d73e-4c89-9e3a-693929b745d0" Mar 4 01:11:32.808164 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3435980999.mount: Deactivated successfully. Mar 4 01:11:32.996663 containerd[1585]: time="2026-03-04T01:11:32.996528836Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:11:32.999290 containerd[1585]: time="2026-03-04T01:11:32.997398885Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Mar 4 01:11:33.016684 containerd[1585]: time="2026-03-04T01:11:33.016635217Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:11:33.086318 containerd[1585]: time="2026-03-04T01:11:33.086050507Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:11:33.087049 containerd[1585]: time="2026-03-04T01:11:33.086926046Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 4.21841008s" Mar 4 01:11:33.087049 containerd[1585]: time="2026-03-04T01:11:33.087009552Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Mar 4 01:11:33.103724 containerd[1585]: time="2026-03-04T01:11:33.103635903Z" level=info msg="CreateContainer within sandbox \"530b4cc6dcb8789c7b422b8c06299687860d2981a22ddafaf7ae15b1ff61a776\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 4 01:11:33.481460 containerd[1585]: time="2026-03-04T01:11:33.480274000Z" level=info msg="CreateContainer within sandbox 
\"530b4cc6dcb8789c7b422b8c06299687860d2981a22ddafaf7ae15b1ff61a776\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"e6a683a7b4c9efccec7af85d53d50464d57bbe230e0f0d8b4925685840a80f3c\"" Mar 4 01:11:33.487006 containerd[1585]: time="2026-03-04T01:11:33.486823943Z" level=info msg="StartContainer for \"e6a683a7b4c9efccec7af85d53d50464d57bbe230e0f0d8b4925685840a80f3c\"" Mar 4 01:11:33.638838 kubelet[2680]: E0304 01:11:33.636621 2680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4rchx" podUID="d7d28883-d73e-4c89-9e3a-693929b745d0" Mar 4 01:11:34.056019 systemd[1]: run-containerd-runc-k8s.io-e6a683a7b4c9efccec7af85d53d50464d57bbe230e0f0d8b4925685840a80f3c-runc.hAPQb5.mount: Deactivated successfully. Mar 4 01:11:34.279614 containerd[1585]: time="2026-03-04T01:11:34.279463402Z" level=info msg="StartContainer for \"e6a683a7b4c9efccec7af85d53d50464d57bbe230e0f0d8b4925685840a80f3c\" returns successfully" Mar 4 01:11:34.727062 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e6a683a7b4c9efccec7af85d53d50464d57bbe230e0f0d8b4925685840a80f3c-rootfs.mount: Deactivated successfully. Mar 4 01:11:34.730912 containerd[1585]: time="2026-03-04T01:11:34.730859955Z" level=info msg="shim disconnected" id=e6a683a7b4c9efccec7af85d53d50464d57bbe230e0f0d8b4925685840a80f3c namespace=k8s.io Mar 4 01:11:34.731351 containerd[1585]: time="2026-03-04T01:11:34.731138173Z" level=warning msg="cleaning up after shim disconnected" id=e6a683a7b4c9efccec7af85d53d50464d57bbe230e0f0d8b4925685840a80f3c namespace=k8s.io Mar 4 01:11:34.731351 containerd[1585]: time="2026-03-04T01:11:34.731153742Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 4 01:11:34.984184 containerd[1585]: time="2026-03-04T01:11:34.982609233Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 4 01:11:35.624625 kubelet[2680]: E0304 01:11:35.624443 2680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4rchx" podUID="d7d28883-d73e-4c89-9e3a-693929b745d0" Mar 4 01:11:36.965713 containerd[1585]: time="2026-03-04T01:11:36.965627239Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:11:36.966885 containerd[1585]: time="2026-03-04T01:11:36.966824184Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Mar 4 01:11:36.968482 containerd[1585]: time="2026-03-04T01:11:36.968430110Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:11:36.972010 containerd[1585]: time="2026-03-04T01:11:36.971921925Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 1.989270984s" Mar 4 01:11:36.972010 
containerd[1585]: time="2026-03-04T01:11:36.972002286Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Mar 4 01:11:36.978965 containerd[1585]: time="2026-03-04T01:11:36.978887977Z" level=info msg="CreateContainer within sandbox \"530b4cc6dcb8789c7b422b8c06299687860d2981a22ddafaf7ae15b1ff61a776\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 4 01:11:36.995192 containerd[1585]: time="2026-03-04T01:11:36.994807347Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:11:37.006643 containerd[1585]: time="2026-03-04T01:11:37.006453816Z" level=info msg="CreateContainer within sandbox \"530b4cc6dcb8789c7b422b8c06299687860d2981a22ddafaf7ae15b1ff61a776\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8d5ab94655ddc76fce1df910c6283d87069fe282deb2ffa7d2cacfcbc06af2a3\"" Mar 4 01:11:37.007769 containerd[1585]: time="2026-03-04T01:11:37.007376684Z" level=info msg="StartContainer for \"8d5ab94655ddc76fce1df910c6283d87069fe282deb2ffa7d2cacfcbc06af2a3\"" Mar 4 01:11:37.043617 systemd[1]: run-containerd-runc-k8s.io-8d5ab94655ddc76fce1df910c6283d87069fe282deb2ffa7d2cacfcbc06af2a3-runc.YG3xfw.mount: Deactivated successfully. Mar 4 01:11:37.103933 containerd[1585]: time="2026-03-04T01:11:37.103862323Z" level=info msg="StartContainer for \"8d5ab94655ddc76fce1df910c6283d87069fe282deb2ffa7d2cacfcbc06af2a3\" returns successfully" Mar 4 01:11:37.626913 kubelet[2680]: E0304 01:11:37.623726 2680 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4rchx" podUID="d7d28883-d73e-4c89-9e3a-693929b745d0" Mar 4 01:11:37.912622 containerd[1585]: time="2026-03-04T01:11:37.912504576Z" level=info msg="shim disconnected" id=8d5ab94655ddc76fce1df910c6283d87069fe282deb2ffa7d2cacfcbc06af2a3 namespace=k8s.io Mar 4 01:11:37.912622 containerd[1585]: time="2026-03-04T01:11:37.912562804Z" level=warning msg="cleaning up after shim disconnected" id=8d5ab94655ddc76fce1df910c6283d87069fe282deb2ffa7d2cacfcbc06af2a3 namespace=k8s.io Mar 4 01:11:37.912622 containerd[1585]: time="2026-03-04T01:11:37.912572543Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 4 01:11:37.912838 kubelet[2680]: I0304 01:11:37.912681 2680 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Mar 4 01:11:38.003828 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8d5ab94655ddc76fce1df910c6283d87069fe282deb2ffa7d2cacfcbc06af2a3-rootfs.mount: Deactivated successfully. 
Mar 4 01:11:38.024773 kubelet[2680]: I0304 01:11:38.021700 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5a645f6-4149-4d70-9617-5ae452f094ab-whisker-ca-bundle\") pod \"whisker-64c8545f95-vl8pd\" (UID: \"c5a645f6-4149-4d70-9617-5ae452f094ab\") " pod="calico-system/whisker-64c8545f95-vl8pd" Mar 4 01:11:38.024773 kubelet[2680]: I0304 01:11:38.021782 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dwx2\" (UniqueName: \"kubernetes.io/projected/f044584a-0762-48e5-86e5-bdd14b596aa5-kube-api-access-4dwx2\") pod \"coredns-674b8bbfcf-lxbhh\" (UID: \"f044584a-0762-48e5-86e5-bdd14b596aa5\") " pod="kube-system/coredns-674b8bbfcf-lxbhh" Mar 4 01:11:38.024773 kubelet[2680]: I0304 01:11:38.021813 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f044584a-0762-48e5-86e5-bdd14b596aa5-config-volume\") pod \"coredns-674b8bbfcf-lxbhh\" (UID: \"f044584a-0762-48e5-86e5-bdd14b596aa5\") " pod="kube-system/coredns-674b8bbfcf-lxbhh" Mar 4 01:11:38.024773 kubelet[2680]: I0304 01:11:38.021843 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh9kj\" (UniqueName: \"kubernetes.io/projected/02ad6776-d910-4946-a34e-593710285244-kube-api-access-mh9kj\") pod \"calico-kube-controllers-5645dcc89f-zmqfq\" (UID: \"02ad6776-d910-4946-a34e-593710285244\") " pod="calico-system/calico-kube-controllers-5645dcc89f-zmqfq" Mar 4 01:11:38.024773 kubelet[2680]: I0304 01:11:38.021873 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fzx6\" (UniqueName: \"kubernetes.io/projected/c5a645f6-4149-4d70-9617-5ae452f094ab-kube-api-access-2fzx6\") pod \"whisker-64c8545f95-vl8pd\" (UID: \"c5a645f6-4149-4d70-9617-5ae452f094ab\") " pod="calico-system/whisker-64c8545f95-vl8pd" Mar 4 01:11:38.025206 kubelet[2680]: I0304 01:11:38.021905 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/c5a645f6-4149-4d70-9617-5ae452f094ab-nginx-config\") pod \"whisker-64c8545f95-vl8pd\" (UID: \"c5a645f6-4149-4d70-9617-5ae452f094ab\") " pod="calico-system/whisker-64c8545f95-vl8pd" Mar 4 01:11:38.025206 kubelet[2680]: I0304 01:11:38.021933 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/02ad6776-d910-4946-a34e-593710285244-tigera-ca-bundle\") pod \"calico-kube-controllers-5645dcc89f-zmqfq\" (UID: \"02ad6776-d910-4946-a34e-593710285244\") " pod="calico-system/calico-kube-controllers-5645dcc89f-zmqfq" Mar 4 01:11:38.025206 kubelet[2680]: I0304 01:11:38.021961 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c5a645f6-4149-4d70-9617-5ae452f094ab-whisker-backend-key-pair\") pod \"whisker-64c8545f95-vl8pd\" (UID: \"c5a645f6-4149-4d70-9617-5ae452f094ab\") " pod="calico-system/whisker-64c8545f95-vl8pd" Mar 4 01:11:38.025206 kubelet[2680]: I0304 01:11:38.022032 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sh7w4\" (UniqueName: 
\"kubernetes.io/projected/c8183cfe-8bfc-4067-aea8-5244cdfa9694-kube-api-access-sh7w4\") pod \"coredns-674b8bbfcf-7998p\" (UID: \"c8183cfe-8bfc-4067-aea8-5244cdfa9694\") " pod="kube-system/coredns-674b8bbfcf-7998p" Mar 4 01:11:38.025206 kubelet[2680]: I0304 01:11:38.022059 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c8183cfe-8bfc-4067-aea8-5244cdfa9694-config-volume\") pod \"coredns-674b8bbfcf-7998p\" (UID: \"c8183cfe-8bfc-4067-aea8-5244cdfa9694\") " pod="kube-system/coredns-674b8bbfcf-7998p" Mar 4 01:11:38.083807 containerd[1585]: time="2026-03-04T01:11:38.083374165Z" level=info msg="CreateContainer within sandbox \"530b4cc6dcb8789c7b422b8c06299687860d2981a22ddafaf7ae15b1ff61a776\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 4 01:11:38.111038 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount823292094.mount: Deactivated successfully. Mar 4 01:11:38.115202 containerd[1585]: time="2026-03-04T01:11:38.115061980Z" level=info msg="CreateContainer within sandbox \"530b4cc6dcb8789c7b422b8c06299687860d2981a22ddafaf7ae15b1ff61a776\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"20620ce1bd7236ba346286dadbc273ed5a5da24838e218d25389874d2ef7b239\"" Mar 4 01:11:38.116947 containerd[1585]: time="2026-03-04T01:11:38.116905631Z" level=info msg="StartContainer for \"20620ce1bd7236ba346286dadbc273ed5a5da24838e218d25389874d2ef7b239\"" Mar 4 01:11:38.124782 kubelet[2680]: I0304 01:11:38.124196 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n27p9\" (UniqueName: \"kubernetes.io/projected/2af07391-9b64-40af-835a-1d50f76831e1-kube-api-access-n27p9\") pod \"calico-apiserver-657b845487-f9ph9\" (UID: \"2af07391-9b64-40af-835a-1d50f76831e1\") " pod="calico-system/calico-apiserver-657b845487-f9ph9" Mar 4 01:11:38.125169 kubelet[2680]: I0304 01:11:38.125049 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2af07391-9b64-40af-835a-1d50f76831e1-calico-apiserver-certs\") pod \"calico-apiserver-657b845487-f9ph9\" (UID: \"2af07391-9b64-40af-835a-1d50f76831e1\") " pod="calico-system/calico-apiserver-657b845487-f9ph9" Mar 4 01:11:38.126443 kubelet[2680]: I0304 01:11:38.126347 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/03c35491-5bc5-4312-a5b8-da3b8fa8bdbc-calico-apiserver-certs\") pod \"calico-apiserver-657b845487-rwtgg\" (UID: \"03c35491-5bc5-4312-a5b8-da3b8fa8bdbc\") " pod="calico-system/calico-apiserver-657b845487-rwtgg" Mar 4 01:11:38.129661 kubelet[2680]: I0304 01:11:38.129586 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6xbs\" (UniqueName: \"kubernetes.io/projected/03c35491-5bc5-4312-a5b8-da3b8fa8bdbc-kube-api-access-m6xbs\") pod \"calico-apiserver-657b845487-rwtgg\" (UID: \"03c35491-5bc5-4312-a5b8-da3b8fa8bdbc\") " pod="calico-system/calico-apiserver-657b845487-rwtgg" Mar 4 01:11:38.129757 kubelet[2680]: I0304 01:11:38.129667 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/f48d5d14-c523-4a8c-a5fc-905a4ef84d9b-goldmane-key-pair\") pod \"goldmane-5b85766d88-nm8vg\" (UID: 
\"f48d5d14-c523-4a8c-a5fc-905a4ef84d9b\") " pod="calico-system/goldmane-5b85766d88-nm8vg" Mar 4 01:11:38.131626 kubelet[2680]: I0304 01:11:38.131219 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f48d5d14-c523-4a8c-a5fc-905a4ef84d9b-config\") pod \"goldmane-5b85766d88-nm8vg\" (UID: \"f48d5d14-c523-4a8c-a5fc-905a4ef84d9b\") " pod="calico-system/goldmane-5b85766d88-nm8vg" Mar 4 01:11:38.131626 kubelet[2680]: I0304 01:11:38.131261 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f48d5d14-c523-4a8c-a5fc-905a4ef84d9b-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-nm8vg\" (UID: \"f48d5d14-c523-4a8c-a5fc-905a4ef84d9b\") " pod="calico-system/goldmane-5b85766d88-nm8vg" Mar 4 01:11:38.232733 kubelet[2680]: I0304 01:11:38.232539 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9wwn\" (UniqueName: \"kubernetes.io/projected/f48d5d14-c523-4a8c-a5fc-905a4ef84d9b-kube-api-access-d9wwn\") pod \"goldmane-5b85766d88-nm8vg\" (UID: \"f48d5d14-c523-4a8c-a5fc-905a4ef84d9b\") " pod="calico-system/goldmane-5b85766d88-nm8vg" Mar 4 01:11:38.262938 containerd[1585]: time="2026-03-04T01:11:38.262800063Z" level=info msg="StartContainer for \"20620ce1bd7236ba346286dadbc273ed5a5da24838e218d25389874d2ef7b239\" returns successfully" Mar 4 01:11:38.286377 kubelet[2680]: E0304 01:11:38.286315 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:11:38.288825 containerd[1585]: time="2026-03-04T01:11:38.288717307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lxbhh,Uid:f044584a-0762-48e5-86e5-bdd14b596aa5,Namespace:kube-system,Attempt:0,}" Mar 4 01:11:38.300169 containerd[1585]: time="2026-03-04T01:11:38.300041722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5645dcc89f-zmqfq,Uid:02ad6776-d910-4946-a34e-593710285244,Namespace:calico-system,Attempt:0,}" Mar 4 01:11:38.307008 kubelet[2680]: E0304 01:11:38.306844 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:11:38.308147 containerd[1585]: time="2026-03-04T01:11:38.307946403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7998p,Uid:c8183cfe-8bfc-4067-aea8-5244cdfa9694,Namespace:kube-system,Attempt:0,}" Mar 4 01:11:38.317277 containerd[1585]: time="2026-03-04T01:11:38.317209033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64c8545f95-vl8pd,Uid:c5a645f6-4149-4d70-9617-5ae452f094ab,Namespace:calico-system,Attempt:0,}" Mar 4 01:11:38.327894 containerd[1585]: time="2026-03-04T01:11:38.327824698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-657b845487-f9ph9,Uid:2af07391-9b64-40af-835a-1d50f76831e1,Namespace:calico-system,Attempt:0,}" Mar 4 01:11:38.336596 containerd[1585]: time="2026-03-04T01:11:38.336506816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-657b845487-rwtgg,Uid:03c35491-5bc5-4312-a5b8-da3b8fa8bdbc,Namespace:calico-system,Attempt:0,}" Mar 4 01:11:38.374957 containerd[1585]: time="2026-03-04T01:11:38.374605327Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-nm8vg,Uid:f48d5d14-c523-4a8c-a5fc-905a4ef84d9b,Namespace:calico-system,Attempt:0,}" Mar 4 01:11:38.655019 containerd[1585]: time="2026-03-04T01:11:38.654750178Z" level=error msg="Failed to destroy network for sandbox \"139f239d0143e298ee4d7ce9e4e561fb3bfed461450675f5c42a0608b3d15680\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:11:38.656489 containerd[1585]: time="2026-03-04T01:11:38.655768334Z" level=error msg="encountered an error cleaning up failed sandbox \"139f239d0143e298ee4d7ce9e4e561fb3bfed461450675f5c42a0608b3d15680\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:11:38.656489 containerd[1585]: time="2026-03-04T01:11:38.655885092Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-657b845487-rwtgg,Uid:03c35491-5bc5-4312-a5b8-da3b8fa8bdbc,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"139f239d0143e298ee4d7ce9e4e561fb3bfed461450675f5c42a0608b3d15680\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:11:38.677662 containerd[1585]: time="2026-03-04T01:11:38.677564716Z" level=error msg="Failed to destroy network for sandbox \"97892f09e7ba7ab5b4860976504c2e7e9b5794038817f5adf8b3e3ec1500860c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:11:38.679274 containerd[1585]: time="2026-03-04T01:11:38.679247871Z" level=error msg="encountered an error cleaning up failed sandbox \"97892f09e7ba7ab5b4860976504c2e7e9b5794038817f5adf8b3e3ec1500860c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:11:38.679470 containerd[1585]: time="2026-03-04T01:11:38.679446780Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-657b845487-f9ph9,Uid:2af07391-9b64-40af-835a-1d50f76831e1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"97892f09e7ba7ab5b4860976504c2e7e9b5794038817f5adf8b3e3ec1500860c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:11:38.705612 containerd[1585]: time="2026-03-04T01:11:38.705559810Z" level=error msg="Failed to destroy network for sandbox \"763c598760efe60a0a293a2a41a241ccc146db5a098f0881e56a741db0c200b5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:11:38.707050 containerd[1585]: time="2026-03-04T01:11:38.707023517Z" level=error msg="encountered an error cleaning up failed sandbox \"763c598760efe60a0a293a2a41a241ccc146db5a098f0881e56a741db0c200b5\", marking sandbox state as 
SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:11:38.707260 containerd[1585]: time="2026-03-04T01:11:38.707236132Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64c8545f95-vl8pd,Uid:c5a645f6-4149-4d70-9617-5ae452f094ab,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"763c598760efe60a0a293a2a41a241ccc146db5a098f0881e56a741db0c200b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:11:38.711164 kubelet[2680]: E0304 01:11:38.711061 2680 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"763c598760efe60a0a293a2a41a241ccc146db5a098f0881e56a741db0c200b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:11:38.713500 kubelet[2680]: E0304 01:11:38.711212 2680 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"763c598760efe60a0a293a2a41a241ccc146db5a098f0881e56a741db0c200b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-64c8545f95-vl8pd" Mar 4 01:11:38.713500 kubelet[2680]: E0304 01:11:38.711252 2680 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"763c598760efe60a0a293a2a41a241ccc146db5a098f0881e56a741db0c200b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-64c8545f95-vl8pd" Mar 4 01:11:38.713500 kubelet[2680]: E0304 01:11:38.711319 2680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-64c8545f95-vl8pd_calico-system(c5a645f6-4149-4d70-9617-5ae452f094ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-64c8545f95-vl8pd_calico-system(c5a645f6-4149-4d70-9617-5ae452f094ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"763c598760efe60a0a293a2a41a241ccc146db5a098f0881e56a741db0c200b5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-64c8545f95-vl8pd" podUID="c5a645f6-4149-4d70-9617-5ae452f094ab" Mar 4 01:11:38.713733 kubelet[2680]: E0304 01:11:38.711626 2680 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"139f239d0143e298ee4d7ce9e4e561fb3bfed461450675f5c42a0608b3d15680\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:11:38.713733 kubelet[2680]: E0304 01:11:38.711667 2680 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"139f239d0143e298ee4d7ce9e4e561fb3bfed461450675f5c42a0608b3d15680\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-657b845487-rwtgg" Mar 4 01:11:38.713733 kubelet[2680]: E0304 01:11:38.711698 2680 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"139f239d0143e298ee4d7ce9e4e561fb3bfed461450675f5c42a0608b3d15680\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-657b845487-rwtgg" Mar 4 01:11:38.713855 kubelet[2680]: E0304 01:11:38.711749 2680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-657b845487-rwtgg_calico-system(03c35491-5bc5-4312-a5b8-da3b8fa8bdbc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-657b845487-rwtgg_calico-system(03c35491-5bc5-4312-a5b8-da3b8fa8bdbc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"139f239d0143e298ee4d7ce9e4e561fb3bfed461450675f5c42a0608b3d15680\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-657b845487-rwtgg" podUID="03c35491-5bc5-4312-a5b8-da3b8fa8bdbc" Mar 4 01:11:38.713855 kubelet[2680]: E0304 01:11:38.711807 2680 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97892f09e7ba7ab5b4860976504c2e7e9b5794038817f5adf8b3e3ec1500860c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:11:38.713855 kubelet[2680]: E0304 01:11:38.711835 2680 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97892f09e7ba7ab5b4860976504c2e7e9b5794038817f5adf8b3e3ec1500860c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-657b845487-f9ph9" Mar 4 01:11:38.714166 kubelet[2680]: E0304 01:11:38.711855 2680 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97892f09e7ba7ab5b4860976504c2e7e9b5794038817f5adf8b3e3ec1500860c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-657b845487-f9ph9" Mar 4 01:11:38.714166 kubelet[2680]: E0304 01:11:38.711913 2680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-657b845487-f9ph9_calico-system(2af07391-9b64-40af-835a-1d50f76831e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-657b845487-f9ph9_calico-system(2af07391-9b64-40af-835a-1d50f76831e1)\\\": rpc error: 
code = Unknown desc = failed to setup network for sandbox \\\"97892f09e7ba7ab5b4860976504c2e7e9b5794038817f5adf8b3e3ec1500860c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-657b845487-f9ph9" podUID="2af07391-9b64-40af-835a-1d50f76831e1" Mar 4 01:11:38.980558 containerd[1585]: 2026-03-04 01:11:38.854 [INFO][3808] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="94f1bbc452624ab6c12f874ffce8e7e25bfd3bacab8a583c795ebf9c412b48b7" Mar 4 01:11:38.980558 containerd[1585]: 2026-03-04 01:11:38.857 [INFO][3808] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="94f1bbc452624ab6c12f874ffce8e7e25bfd3bacab8a583c795ebf9c412b48b7" iface="eth0" netns="/var/run/netns/cni-87d18c9b-ed2c-5c5c-2421-3092005e16e4" Mar 4 01:11:38.980558 containerd[1585]: 2026-03-04 01:11:38.857 [INFO][3808] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="94f1bbc452624ab6c12f874ffce8e7e25bfd3bacab8a583c795ebf9c412b48b7" iface="eth0" netns="/var/run/netns/cni-87d18c9b-ed2c-5c5c-2421-3092005e16e4" Mar 4 01:11:38.980558 containerd[1585]: 2026-03-04 01:11:38.857 [INFO][3808] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="94f1bbc452624ab6c12f874ffce8e7e25bfd3bacab8a583c795ebf9c412b48b7" iface="eth0" netns="/var/run/netns/cni-87d18c9b-ed2c-5c5c-2421-3092005e16e4" Mar 4 01:11:38.980558 containerd[1585]: 2026-03-04 01:11:38.857 [INFO][3808] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="94f1bbc452624ab6c12f874ffce8e7e25bfd3bacab8a583c795ebf9c412b48b7" Mar 4 01:11:38.980558 containerd[1585]: 2026-03-04 01:11:38.857 [INFO][3808] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="94f1bbc452624ab6c12f874ffce8e7e25bfd3bacab8a583c795ebf9c412b48b7" Mar 4 01:11:38.980558 containerd[1585]: 2026-03-04 01:11:38.889 [INFO][3847] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="94f1bbc452624ab6c12f874ffce8e7e25bfd3bacab8a583c795ebf9c412b48b7" HandleID="k8s-pod-network.94f1bbc452624ab6c12f874ffce8e7e25bfd3bacab8a583c795ebf9c412b48b7" Workload="localhost-k8s-goldmane--5b85766d88--nm8vg-eth0" Mar 4 01:11:38.980558 containerd[1585]: 2026-03-04 01:11:38.889 [INFO][3847] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:11:38.980558 containerd[1585]: 2026-03-04 01:11:38.889 [INFO][3847] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:11:38.980558 containerd[1585]: 2026-03-04 01:11:38.904 [WARNING][3847] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="94f1bbc452624ab6c12f874ffce8e7e25bfd3bacab8a583c795ebf9c412b48b7" HandleID="k8s-pod-network.94f1bbc452624ab6c12f874ffce8e7e25bfd3bacab8a583c795ebf9c412b48b7" Workload="localhost-k8s-goldmane--5b85766d88--nm8vg-eth0" Mar 4 01:11:38.980558 containerd[1585]: 2026-03-04 01:11:38.904 [INFO][3847] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="94f1bbc452624ab6c12f874ffce8e7e25bfd3bacab8a583c795ebf9c412b48b7" HandleID="k8s-pod-network.94f1bbc452624ab6c12f874ffce8e7e25bfd3bacab8a583c795ebf9c412b48b7" Workload="localhost-k8s-goldmane--5b85766d88--nm8vg-eth0" Mar 4 01:11:38.980558 containerd[1585]: 2026-03-04 01:11:38.916 [INFO][3847] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
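[Editor's note] Every RunPodSandbox failure above bottoms out in the same precondition: the calico CNI plugin stats /var/lib/calico/nodename, a file that exists only after the calico-node container (whose StartContainer returned at 01:11:38.262) has come up and written the node name into the host-mounted /var/lib/calico/ directory. A minimal sketch of that gate, as a simplified illustration rather than Calico's exact code:

package main

import (
	"fmt"
	"os"
	"strings"
)

// calico-node writes the node's name here once it is running; the CNI binary
// refuses every ADD/DEL until the file can be read.
const nodenameFile = "/var/lib/calico/nodename"

func detectNodename() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		// The failure mode filling the log above: the file is absent until
		// calico-node has started and mounted /var/lib/calico/.
		return "", fmt.Errorf("%s: %w: check that the calico/node container "+
			"is running and has mounted /var/lib/calico/", nodenameFile, err)
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := detectNodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("node:", name)
}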
Mar 4 01:11:38.980558 containerd[1585]: 2026-03-04 01:11:38.958 [INFO][3808] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="94f1bbc452624ab6c12f874ffce8e7e25bfd3bacab8a583c795ebf9c412b48b7" Mar 4 01:11:38.989153 containerd[1585]: 2026-03-04 01:11:38.883 [INFO][3781] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2afc8fdd041e7f58d2c0d95937c935ff7ae741f4c3516e2796439e7a90bc7d01" Mar 4 01:11:38.989153 containerd[1585]: 2026-03-04 01:11:38.884 [INFO][3781] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2afc8fdd041e7f58d2c0d95937c935ff7ae741f4c3516e2796439e7a90bc7d01" iface="eth0" netns="/var/run/netns/cni-76e999d4-e7cf-4a7d-5002-29acdf7b1e89" Mar 4 01:11:38.989153 containerd[1585]: 2026-03-04 01:11:38.884 [INFO][3781] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2afc8fdd041e7f58d2c0d95937c935ff7ae741f4c3516e2796439e7a90bc7d01" iface="eth0" netns="/var/run/netns/cni-76e999d4-e7cf-4a7d-5002-29acdf7b1e89" Mar 4 01:11:38.989153 containerd[1585]: 2026-03-04 01:11:38.885 [INFO][3781] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2afc8fdd041e7f58d2c0d95937c935ff7ae741f4c3516e2796439e7a90bc7d01" iface="eth0" netns="/var/run/netns/cni-76e999d4-e7cf-4a7d-5002-29acdf7b1e89" Mar 4 01:11:38.989153 containerd[1585]: 2026-03-04 01:11:38.886 [INFO][3781] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2afc8fdd041e7f58d2c0d95937c935ff7ae741f4c3516e2796439e7a90bc7d01" Mar 4 01:11:38.989153 containerd[1585]: 2026-03-04 01:11:38.886 [INFO][3781] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2afc8fdd041e7f58d2c0d95937c935ff7ae741f4c3516e2796439e7a90bc7d01" Mar 4 01:11:38.989153 containerd[1585]: 2026-03-04 01:11:38.959 [INFO][3856] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2afc8fdd041e7f58d2c0d95937c935ff7ae741f4c3516e2796439e7a90bc7d01" HandleID="k8s-pod-network.2afc8fdd041e7f58d2c0d95937c935ff7ae741f4c3516e2796439e7a90bc7d01" Workload="localhost-k8s-coredns--674b8bbfcf--7998p-eth0" Mar 4 01:11:38.989153 containerd[1585]: 2026-03-04 01:11:38.959 [INFO][3856] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:11:38.989153 containerd[1585]: 2026-03-04 01:11:38.960 [INFO][3856] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:11:38.989153 containerd[1585]: 2026-03-04 01:11:38.973 [WARNING][3856] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="2afc8fdd041e7f58d2c0d95937c935ff7ae741f4c3516e2796439e7a90bc7d01" HandleID="k8s-pod-network.2afc8fdd041e7f58d2c0d95937c935ff7ae741f4c3516e2796439e7a90bc7d01" Workload="localhost-k8s-coredns--674b8bbfcf--7998p-eth0" Mar 4 01:11:38.989153 containerd[1585]: 2026-03-04 01:11:38.974 [INFO][3856] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2afc8fdd041e7f58d2c0d95937c935ff7ae741f4c3516e2796439e7a90bc7d01" HandleID="k8s-pod-network.2afc8fdd041e7f58d2c0d95937c935ff7ae741f4c3516e2796439e7a90bc7d01" Workload="localhost-k8s-coredns--674b8bbfcf--7998p-eth0" Mar 4 01:11:38.989153 containerd[1585]: 2026-03-04 01:11:38.976 [INFO][3856] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:11:38.989153 containerd[1585]: 2026-03-04 01:11:38.982 [INFO][3781] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="2afc8fdd041e7f58d2c0d95937c935ff7ae741f4c3516e2796439e7a90bc7d01" Mar 4 01:11:39.010041 containerd[1585]: time="2026-03-04T01:11:39.009931200Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7998p,Uid:c8183cfe-8bfc-4067-aea8-5244cdfa9694,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2afc8fdd041e7f58d2c0d95937c935ff7ae741f4c3516e2796439e7a90bc7d01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:11:39.012469 kubelet[2680]: E0304 01:11:39.012409 2680 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2afc8fdd041e7f58d2c0d95937c935ff7ae741f4c3516e2796439e7a90bc7d01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:11:39.012796 kubelet[2680]: E0304 01:11:39.012762 2680 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2afc8fdd041e7f58d2c0d95937c935ff7ae741f4c3516e2796439e7a90bc7d01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-7998p" Mar 4 01:11:39.013247 kubelet[2680]: E0304 01:11:39.013216 2680 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2afc8fdd041e7f58d2c0d95937c935ff7ae741f4c3516e2796439e7a90bc7d01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-7998p" Mar 4 01:11:39.013419 kubelet[2680]: E0304 01:11:39.013383 2680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-7998p_kube-system(c8183cfe-8bfc-4067-aea8-5244cdfa9694)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-7998p_kube-system(c8183cfe-8bfc-4067-aea8-5244cdfa9694)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2afc8fdd041e7f58d2c0d95937c935ff7ae741f4c3516e2796439e7a90bc7d01\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-7998p" podUID="c8183cfe-8bfc-4067-aea8-5244cdfa9694" Mar 4 01:11:39.014141 containerd[1585]: time="2026-03-04T01:11:39.013966922Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-nm8vg,Uid:f48d5d14-c523-4a8c-a5fc-905a4ef84d9b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"94f1bbc452624ab6c12f874ffce8e7e25bfd3bacab8a583c795ebf9c412b48b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:11:39.016556 kubelet[2680]: E0304 01:11:39.016259 2680 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"94f1bbc452624ab6c12f874ffce8e7e25bfd3bacab8a583c795ebf9c412b48b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:11:39.017039 kubelet[2680]: E0304 01:11:39.016884 2680 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94f1bbc452624ab6c12f874ffce8e7e25bfd3bacab8a583c795ebf9c412b48b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-nm8vg" Mar 4 01:11:39.017374 kubelet[2680]: E0304 01:11:39.017284 2680 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94f1bbc452624ab6c12f874ffce8e7e25bfd3bacab8a583c795ebf9c412b48b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-nm8vg" Mar 4 01:11:39.021642 kubelet[2680]: E0304 01:11:39.019338 2680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-nm8vg_calico-system(f48d5d14-c523-4a8c-a5fc-905a4ef84d9b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-nm8vg_calico-system(f48d5d14-c523-4a8c-a5fc-905a4ef84d9b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"94f1bbc452624ab6c12f874ffce8e7e25bfd3bacab8a583c795ebf9c412b48b7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-nm8vg" podUID="f48d5d14-c523-4a8c-a5fc-905a4ef84d9b" Mar 4 01:11:39.039625 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2afc8fdd041e7f58d2c0d95937c935ff7ae741f4c3516e2796439e7a90bc7d01-shm.mount: Deactivated successfully. Mar 4 01:11:39.071157 kubelet[2680]: I0304 01:11:39.070773 2680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97892f09e7ba7ab5b4860976504c2e7e9b5794038817f5adf8b3e3ec1500860c" Mar 4 01:11:39.090320 containerd[1585]: 2026-03-04 01:11:38.835 [INFO][3777] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="22a33d44e2420e50ba279c864272948c3f14cef20ff702cac76a33cc43c4b7e5" Mar 4 01:11:39.090320 containerd[1585]: 2026-03-04 01:11:38.836 [INFO][3777] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="22a33d44e2420e50ba279c864272948c3f14cef20ff702cac76a33cc43c4b7e5" iface="eth0" netns="/var/run/netns/cni-008945b8-b1fb-606d-f416-cef9ed69091d" Mar 4 01:11:39.090320 containerd[1585]: 2026-03-04 01:11:38.836 [INFO][3777] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="22a33d44e2420e50ba279c864272948c3f14cef20ff702cac76a33cc43c4b7e5" iface="eth0" netns="/var/run/netns/cni-008945b8-b1fb-606d-f416-cef9ed69091d" Mar 4 01:11:39.090320 containerd[1585]: 2026-03-04 01:11:38.840 [INFO][3777] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="22a33d44e2420e50ba279c864272948c3f14cef20ff702cac76a33cc43c4b7e5" iface="eth0" netns="/var/run/netns/cni-008945b8-b1fb-606d-f416-cef9ed69091d" Mar 4 01:11:39.090320 containerd[1585]: 2026-03-04 01:11:38.840 [INFO][3777] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="22a33d44e2420e50ba279c864272948c3f14cef20ff702cac76a33cc43c4b7e5" Mar 4 01:11:39.090320 containerd[1585]: 2026-03-04 01:11:38.840 [INFO][3777] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="22a33d44e2420e50ba279c864272948c3f14cef20ff702cac76a33cc43c4b7e5" Mar 4 01:11:39.090320 containerd[1585]: 2026-03-04 01:11:38.967 [INFO][3835] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="22a33d44e2420e50ba279c864272948c3f14cef20ff702cac76a33cc43c4b7e5" HandleID="k8s-pod-network.22a33d44e2420e50ba279c864272948c3f14cef20ff702cac76a33cc43c4b7e5" Workload="localhost-k8s-calico--kube--controllers--5645dcc89f--zmqfq-eth0" Mar 4 01:11:39.090320 containerd[1585]: 2026-03-04 01:11:38.991 [INFO][3835] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:11:39.090320 containerd[1585]: 2026-03-04 01:11:38.991 [INFO][3835] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:11:39.090320 containerd[1585]: 2026-03-04 01:11:39.029 [WARNING][3835] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="22a33d44e2420e50ba279c864272948c3f14cef20ff702cac76a33cc43c4b7e5" HandleID="k8s-pod-network.22a33d44e2420e50ba279c864272948c3f14cef20ff702cac76a33cc43c4b7e5" Workload="localhost-k8s-calico--kube--controllers--5645dcc89f--zmqfq-eth0" Mar 4 01:11:39.090320 containerd[1585]: 2026-03-04 01:11:39.029 [INFO][3835] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="22a33d44e2420e50ba279c864272948c3f14cef20ff702cac76a33cc43c4b7e5" HandleID="k8s-pod-network.22a33d44e2420e50ba279c864272948c3f14cef20ff702cac76a33cc43c4b7e5" Workload="localhost-k8s-calico--kube--controllers--5645dcc89f--zmqfq-eth0" Mar 4 01:11:39.090320 containerd[1585]: 2026-03-04 01:11:39.033 [INFO][3835] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:11:39.090320 containerd[1585]: 2026-03-04 01:11:39.044 [INFO][3777] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="22a33d44e2420e50ba279c864272948c3f14cef20ff702cac76a33cc43c4b7e5" Mar 4 01:11:39.099523 kubelet[2680]: I0304 01:11:39.088347 2680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="763c598760efe60a0a293a2a41a241ccc146db5a098f0881e56a741db0c200b5" Mar 4 01:11:39.097038 systemd[1]: run-netns-cni\x2d008945b8\x2db1fb\x2d606d\x2df416\x2dcef9ed69091d.mount: Deactivated successfully. Mar 4 01:11:39.102744 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-22a33d44e2420e50ba279c864272948c3f14cef20ff702cac76a33cc43c4b7e5-shm.mount: Deactivated successfully. 
Mar 4 01:11:39.107209 containerd[1585]: time="2026-03-04T01:11:39.106867332Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5645dcc89f-zmqfq,Uid:02ad6776-d910-4946-a34e-593710285244,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"22a33d44e2420e50ba279c864272948c3f14cef20ff702cac76a33cc43c4b7e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:11:39.108349 kubelet[2680]: E0304 01:11:39.108219 2680 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22a33d44e2420e50ba279c864272948c3f14cef20ff702cac76a33cc43c4b7e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:11:39.108569 kubelet[2680]: E0304 01:11:39.108505 2680 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22a33d44e2420e50ba279c864272948c3f14cef20ff702cac76a33cc43c4b7e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5645dcc89f-zmqfq" Mar 4 01:11:39.108745 kubelet[2680]: E0304 01:11:39.108672 2680 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22a33d44e2420e50ba279c864272948c3f14cef20ff702cac76a33cc43c4b7e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5645dcc89f-zmqfq" Mar 4 01:11:39.109829 kubelet[2680]: E0304 01:11:39.109799 2680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5645dcc89f-zmqfq_calico-system(02ad6776-d910-4946-a34e-593710285244)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5645dcc89f-zmqfq_calico-system(02ad6776-d910-4946-a34e-593710285244)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"22a33d44e2420e50ba279c864272948c3f14cef20ff702cac76a33cc43c4b7e5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5645dcc89f-zmqfq" podUID="02ad6776-d910-4946-a34e-593710285244" Mar 4 01:11:39.116605 containerd[1585]: 2026-03-04 01:11:38.834 [INFO][3783] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2591c2a2390db6eb45f2b6b48dc8096e502d8e7c4e44d1edcaca267beda6d180" Mar 4 01:11:39.116605 containerd[1585]: 2026-03-04 01:11:38.836 [INFO][3783] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2591c2a2390db6eb45f2b6b48dc8096e502d8e7c4e44d1edcaca267beda6d180" iface="eth0" netns="/var/run/netns/cni-f24fcfd0-f846-fbe0-dfb4-3c05acfa06be" Mar 4 01:11:39.116605 containerd[1585]: 2026-03-04 01:11:38.836 [INFO][3783] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="2591c2a2390db6eb45f2b6b48dc8096e502d8e7c4e44d1edcaca267beda6d180" iface="eth0" netns="/var/run/netns/cni-f24fcfd0-f846-fbe0-dfb4-3c05acfa06be" Mar 4 01:11:39.116605 containerd[1585]: 2026-03-04 01:11:38.838 [INFO][3783] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2591c2a2390db6eb45f2b6b48dc8096e502d8e7c4e44d1edcaca267beda6d180" iface="eth0" netns="/var/run/netns/cni-f24fcfd0-f846-fbe0-dfb4-3c05acfa06be" Mar 4 01:11:39.116605 containerd[1585]: 2026-03-04 01:11:38.838 [INFO][3783] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2591c2a2390db6eb45f2b6b48dc8096e502d8e7c4e44d1edcaca267beda6d180" Mar 4 01:11:39.116605 containerd[1585]: 2026-03-04 01:11:38.839 [INFO][3783] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2591c2a2390db6eb45f2b6b48dc8096e502d8e7c4e44d1edcaca267beda6d180" Mar 4 01:11:39.116605 containerd[1585]: 2026-03-04 01:11:39.012 [INFO][3831] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2591c2a2390db6eb45f2b6b48dc8096e502d8e7c4e44d1edcaca267beda6d180" HandleID="k8s-pod-network.2591c2a2390db6eb45f2b6b48dc8096e502d8e7c4e44d1edcaca267beda6d180" Workload="localhost-k8s-coredns--674b8bbfcf--lxbhh-eth0" Mar 4 01:11:39.116605 containerd[1585]: 2026-03-04 01:11:39.025 [INFO][3831] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:11:39.116605 containerd[1585]: 2026-03-04 01:11:39.036 [INFO][3831] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:11:39.116605 containerd[1585]: 2026-03-04 01:11:39.073 [WARNING][3831] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="2591c2a2390db6eb45f2b6b48dc8096e502d8e7c4e44d1edcaca267beda6d180" HandleID="k8s-pod-network.2591c2a2390db6eb45f2b6b48dc8096e502d8e7c4e44d1edcaca267beda6d180" Workload="localhost-k8s-coredns--674b8bbfcf--lxbhh-eth0" Mar 4 01:11:39.116605 containerd[1585]: 2026-03-04 01:11:39.074 [INFO][3831] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2591c2a2390db6eb45f2b6b48dc8096e502d8e7c4e44d1edcaca267beda6d180" HandleID="k8s-pod-network.2591c2a2390db6eb45f2b6b48dc8096e502d8e7c4e44d1edcaca267beda6d180" Workload="localhost-k8s-coredns--674b8bbfcf--lxbhh-eth0" Mar 4 01:11:39.116605 containerd[1585]: 2026-03-04 01:11:39.079 [INFO][3831] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:11:39.116605 containerd[1585]: 2026-03-04 01:11:39.102 [INFO][3783] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2591c2a2390db6eb45f2b6b48dc8096e502d8e7c4e44d1edcaca267beda6d180" Mar 4 01:11:39.123151 systemd[1]: run-netns-cni\x2df24fcfd0\x2df846\x2dfbe0\x2ddfb4\x2d3c05acfa06be.mount: Deactivated successfully. 
Mar 4 01:11:39.131470 containerd[1585]: time="2026-03-04T01:11:39.130759875Z" level=info msg="StopPodSandbox for \"763c598760efe60a0a293a2a41a241ccc146db5a098f0881e56a741db0c200b5\"" Mar 4 01:11:39.134171 containerd[1585]: time="2026-03-04T01:11:39.133265061Z" level=info msg="StopPodSandbox for \"97892f09e7ba7ab5b4860976504c2e7e9b5794038817f5adf8b3e3ec1500860c\"" Mar 4 01:11:39.134171 containerd[1585]: time="2026-03-04T01:11:39.134152133Z" level=info msg="Ensure that sandbox 763c598760efe60a0a293a2a41a241ccc146db5a098f0881e56a741db0c200b5 in task-service has been cleanup successfully" Mar 4 01:11:39.134248 containerd[1585]: time="2026-03-04T01:11:39.134169907Z" level=info msg="Ensure that sandbox 97892f09e7ba7ab5b4860976504c2e7e9b5794038817f5adf8b3e3ec1500860c in task-service has been cleanup successfully" Mar 4 01:11:39.134922 containerd[1585]: time="2026-03-04T01:11:39.134894144Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lxbhh,Uid:f044584a-0762-48e5-86e5-bdd14b596aa5,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2591c2a2390db6eb45f2b6b48dc8096e502d8e7c4e44d1edcaca267beda6d180\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:11:39.135843 kubelet[2680]: E0304 01:11:39.135816 2680 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2591c2a2390db6eb45f2b6b48dc8096e502d8e7c4e44d1edcaca267beda6d180\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:11:39.135951 kubelet[2680]: E0304 01:11:39.135934 2680 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2591c2a2390db6eb45f2b6b48dc8096e502d8e7c4e44d1edcaca267beda6d180\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-lxbhh" Mar 4 01:11:39.136058 kubelet[2680]: E0304 01:11:39.136042 2680 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2591c2a2390db6eb45f2b6b48dc8096e502d8e7c4e44d1edcaca267beda6d180\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-lxbhh" Mar 4 01:11:39.136242 kubelet[2680]: E0304 01:11:39.136219 2680 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-lxbhh_kube-system(f044584a-0762-48e5-86e5-bdd14b596aa5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-lxbhh_kube-system(f044584a-0762-48e5-86e5-bdd14b596aa5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2591c2a2390db6eb45f2b6b48dc8096e502d8e7c4e44d1edcaca267beda6d180\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-lxbhh" 
podUID="f044584a-0762-48e5-86e5-bdd14b596aa5" Mar 4 01:11:39.147738 kubelet[2680]: I0304 01:11:39.146708 2680 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="139f239d0143e298ee4d7ce9e4e561fb3bfed461450675f5c42a0608b3d15680" Mar 4 01:11:39.154765 kubelet[2680]: E0304 01:11:39.154637 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:11:39.160632 kubelet[2680]: I0304 01:11:39.160240 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-kkd5t" podStartSLOduration=3.161983581 podStartE2EDuration="14.160225265s" podCreationTimestamp="2026-03-04 01:11:25 +0000 UTC" firstStartedPulling="2026-03-04 01:11:25.97453337 +0000 UTC m=+17.592864842" lastFinishedPulling="2026-03-04 01:11:36.972775054 +0000 UTC m=+28.591106526" observedRunningTime="2026-03-04 01:11:39.159734091 +0000 UTC m=+30.778065563" watchObservedRunningTime="2026-03-04 01:11:39.160225265 +0000 UTC m=+30.778556738" Mar 4 01:11:39.163032 containerd[1585]: time="2026-03-04T01:11:39.162926826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7998p,Uid:c8183cfe-8bfc-4067-aea8-5244cdfa9694,Namespace:kube-system,Attempt:0,}" Mar 4 01:11:39.163170 containerd[1585]: time="2026-03-04T01:11:39.163070646Z" level=info msg="StopPodSandbox for \"139f239d0143e298ee4d7ce9e4e561fb3bfed461450675f5c42a0608b3d15680\"" Mar 4 01:11:39.163379 containerd[1585]: time="2026-03-04T01:11:39.163328496Z" level=info msg="Ensure that sandbox 139f239d0143e298ee4d7ce9e4e561fb3bfed461450675f5c42a0608b3d15680 in task-service has been cleanup successfully" Mar 4 01:11:39.163600 containerd[1585]: time="2026-03-04T01:11:39.162961463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-nm8vg,Uid:f48d5d14-c523-4a8c-a5fc-905a4ef84d9b,Namespace:calico-system,Attempt:0,}" Mar 4 01:11:39.376971 containerd[1585]: 2026-03-04 01:11:39.241 [INFO][3904] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="763c598760efe60a0a293a2a41a241ccc146db5a098f0881e56a741db0c200b5" Mar 4 01:11:39.376971 containerd[1585]: 2026-03-04 01:11:39.242 [INFO][3904] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="763c598760efe60a0a293a2a41a241ccc146db5a098f0881e56a741db0c200b5" iface="eth0" netns="/var/run/netns/cni-07993e0d-776f-8219-4302-32a3f77e712d" Mar 4 01:11:39.376971 containerd[1585]: 2026-03-04 01:11:39.243 [INFO][3904] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="763c598760efe60a0a293a2a41a241ccc146db5a098f0881e56a741db0c200b5" iface="eth0" netns="/var/run/netns/cni-07993e0d-776f-8219-4302-32a3f77e712d" Mar 4 01:11:39.376971 containerd[1585]: 2026-03-04 01:11:39.245 [INFO][3904] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="763c598760efe60a0a293a2a41a241ccc146db5a098f0881e56a741db0c200b5" iface="eth0" netns="/var/run/netns/cni-07993e0d-776f-8219-4302-32a3f77e712d" Mar 4 01:11:39.376971 containerd[1585]: 2026-03-04 01:11:39.245 [INFO][3904] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="763c598760efe60a0a293a2a41a241ccc146db5a098f0881e56a741db0c200b5" Mar 4 01:11:39.376971 containerd[1585]: 2026-03-04 01:11:39.245 [INFO][3904] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="763c598760efe60a0a293a2a41a241ccc146db5a098f0881e56a741db0c200b5" Mar 4 01:11:39.376971 containerd[1585]: 2026-03-04 01:11:39.340 [INFO][3976] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="763c598760efe60a0a293a2a41a241ccc146db5a098f0881e56a741db0c200b5" HandleID="k8s-pod-network.763c598760efe60a0a293a2a41a241ccc146db5a098f0881e56a741db0c200b5" Workload="localhost-k8s-whisker--64c8545f95--vl8pd-eth0" Mar 4 01:11:39.376971 containerd[1585]: 2026-03-04 01:11:39.345 [INFO][3976] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:11:39.376971 containerd[1585]: 2026-03-04 01:11:39.345 [INFO][3976] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:11:39.376971 containerd[1585]: 2026-03-04 01:11:39.360 [WARNING][3976] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="763c598760efe60a0a293a2a41a241ccc146db5a098f0881e56a741db0c200b5" HandleID="k8s-pod-network.763c598760efe60a0a293a2a41a241ccc146db5a098f0881e56a741db0c200b5" Workload="localhost-k8s-whisker--64c8545f95--vl8pd-eth0" Mar 4 01:11:39.376971 containerd[1585]: 2026-03-04 01:11:39.360 [INFO][3976] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="763c598760efe60a0a293a2a41a241ccc146db5a098f0881e56a741db0c200b5" HandleID="k8s-pod-network.763c598760efe60a0a293a2a41a241ccc146db5a098f0881e56a741db0c200b5" Workload="localhost-k8s-whisker--64c8545f95--vl8pd-eth0" Mar 4 01:11:39.376971 containerd[1585]: 2026-03-04 01:11:39.362 [INFO][3976] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:11:39.376971 containerd[1585]: 2026-03-04 01:11:39.372 [INFO][3904] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="763c598760efe60a0a293a2a41a241ccc146db5a098f0881e56a741db0c200b5" Mar 4 01:11:39.378361 containerd[1585]: time="2026-03-04T01:11:39.377634815Z" level=info msg="TearDown network for sandbox \"763c598760efe60a0a293a2a41a241ccc146db5a098f0881e56a741db0c200b5\" successfully" Mar 4 01:11:39.378361 containerd[1585]: time="2026-03-04T01:11:39.377669670Z" level=info msg="StopPodSandbox for \"763c598760efe60a0a293a2a41a241ccc146db5a098f0881e56a741db0c200b5\" returns successfully" Mar 4 01:11:39.452151 kubelet[2680]: I0304 01:11:39.451924 2680 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fzx6\" (UniqueName: \"kubernetes.io/projected/c5a645f6-4149-4d70-9617-5ae452f094ab-kube-api-access-2fzx6\") pod \"c5a645f6-4149-4d70-9617-5ae452f094ab\" (UID: \"c5a645f6-4149-4d70-9617-5ae452f094ab\") " Mar 4 01:11:39.452151 kubelet[2680]: I0304 01:11:39.452068 2680 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5a645f6-4149-4d70-9617-5ae452f094ab-whisker-ca-bundle\") pod \"c5a645f6-4149-4d70-9617-5ae452f094ab\" (UID: \"c5a645f6-4149-4d70-9617-5ae452f094ab\") " Mar 4 01:11:39.452546 kubelet[2680]: I0304 01:11:39.452217 2680 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/c5a645f6-4149-4d70-9617-5ae452f094ab-nginx-config\") pod \"c5a645f6-4149-4d70-9617-5ae452f094ab\" (UID: \"c5a645f6-4149-4d70-9617-5ae452f094ab\") " Mar 4 01:11:39.452546 kubelet[2680]: I0304 01:11:39.452245 2680 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c5a645f6-4149-4d70-9617-5ae452f094ab-whisker-backend-key-pair\") pod \"c5a645f6-4149-4d70-9617-5ae452f094ab\" (UID: \"c5a645f6-4149-4d70-9617-5ae452f094ab\") " Mar 4 01:11:39.453584 kubelet[2680]: I0304 01:11:39.453467 2680 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5a645f6-4149-4d70-9617-5ae452f094ab-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "c5a645f6-4149-4d70-9617-5ae452f094ab" (UID: "c5a645f6-4149-4d70-9617-5ae452f094ab"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 4 01:11:39.453819 kubelet[2680]: I0304 01:11:39.453714 2680 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5a645f6-4149-4d70-9617-5ae452f094ab-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "c5a645f6-4149-4d70-9617-5ae452f094ab" (UID: "c5a645f6-4149-4d70-9617-5ae452f094ab"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 4 01:11:39.458852 kubelet[2680]: I0304 01:11:39.458743 2680 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5a645f6-4149-4d70-9617-5ae452f094ab-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "c5a645f6-4149-4d70-9617-5ae452f094ab" (UID: "c5a645f6-4149-4d70-9617-5ae452f094ab"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 4 01:11:39.459183 kubelet[2680]: I0304 01:11:39.459150 2680 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5a645f6-4149-4d70-9617-5ae452f094ab-kube-api-access-2fzx6" (OuterVolumeSpecName: "kube-api-access-2fzx6") pod "c5a645f6-4149-4d70-9617-5ae452f094ab" (UID: "c5a645f6-4149-4d70-9617-5ae452f094ab"). InnerVolumeSpecName "kube-api-access-2fzx6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 4 01:11:39.520177 containerd[1585]: 2026-03-04 01:11:39.345 [INFO][3902] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="97892f09e7ba7ab5b4860976504c2e7e9b5794038817f5adf8b3e3ec1500860c" Mar 4 01:11:39.520177 containerd[1585]: 2026-03-04 01:11:39.345 [INFO][3902] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="97892f09e7ba7ab5b4860976504c2e7e9b5794038817f5adf8b3e3ec1500860c" iface="eth0" netns="/var/run/netns/cni-d8642993-7e37-e9be-4a8c-f6d997f48b65" Mar 4 01:11:39.520177 containerd[1585]: 2026-03-04 01:11:39.345 [INFO][3902] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="97892f09e7ba7ab5b4860976504c2e7e9b5794038817f5adf8b3e3ec1500860c" iface="eth0" netns="/var/run/netns/cni-d8642993-7e37-e9be-4a8c-f6d997f48b65" Mar 4 01:11:39.520177 containerd[1585]: 2026-03-04 01:11:39.345 [INFO][3902] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="97892f09e7ba7ab5b4860976504c2e7e9b5794038817f5adf8b3e3ec1500860c" iface="eth0" netns="/var/run/netns/cni-d8642993-7e37-e9be-4a8c-f6d997f48b65" Mar 4 01:11:39.520177 containerd[1585]: 2026-03-04 01:11:39.346 [INFO][3902] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="97892f09e7ba7ab5b4860976504c2e7e9b5794038817f5adf8b3e3ec1500860c" Mar 4 01:11:39.520177 containerd[1585]: 2026-03-04 01:11:39.346 [INFO][3902] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="97892f09e7ba7ab5b4860976504c2e7e9b5794038817f5adf8b3e3ec1500860c" Mar 4 01:11:39.520177 containerd[1585]: 2026-03-04 01:11:39.402 [INFO][3998] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="97892f09e7ba7ab5b4860976504c2e7e9b5794038817f5adf8b3e3ec1500860c" HandleID="k8s-pod-network.97892f09e7ba7ab5b4860976504c2e7e9b5794038817f5adf8b3e3ec1500860c" Workload="localhost-k8s-calico--apiserver--657b845487--f9ph9-eth0" Mar 4 01:11:39.520177 containerd[1585]: 2026-03-04 01:11:39.403 [INFO][3998] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:11:39.520177 containerd[1585]: 2026-03-04 01:11:39.403 [INFO][3998] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:11:39.520177 containerd[1585]: 2026-03-04 01:11:39.424 [WARNING][3998] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="97892f09e7ba7ab5b4860976504c2e7e9b5794038817f5adf8b3e3ec1500860c" HandleID="k8s-pod-network.97892f09e7ba7ab5b4860976504c2e7e9b5794038817f5adf8b3e3ec1500860c" Workload="localhost-k8s-calico--apiserver--657b845487--f9ph9-eth0" Mar 4 01:11:39.520177 containerd[1585]: 2026-03-04 01:11:39.424 [INFO][3998] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="97892f09e7ba7ab5b4860976504c2e7e9b5794038817f5adf8b3e3ec1500860c" HandleID="k8s-pod-network.97892f09e7ba7ab5b4860976504c2e7e9b5794038817f5adf8b3e3ec1500860c" Workload="localhost-k8s-calico--apiserver--657b845487--f9ph9-eth0" Mar 4 01:11:39.520177 containerd[1585]: 2026-03-04 01:11:39.508 [INFO][3998] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:11:39.520177 containerd[1585]: 2026-03-04 01:11:39.515 [INFO][3902] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="97892f09e7ba7ab5b4860976504c2e7e9b5794038817f5adf8b3e3ec1500860c" Mar 4 01:11:39.520767 containerd[1585]: time="2026-03-04T01:11:39.520463002Z" level=info msg="TearDown network for sandbox \"97892f09e7ba7ab5b4860976504c2e7e9b5794038817f5adf8b3e3ec1500860c\" successfully" Mar 4 01:11:39.520767 containerd[1585]: time="2026-03-04T01:11:39.520502647Z" level=info msg="StopPodSandbox for \"97892f09e7ba7ab5b4860976504c2e7e9b5794038817f5adf8b3e3ec1500860c\" returns successfully" Mar 4 01:11:39.522655 containerd[1585]: time="2026-03-04T01:11:39.522630080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-657b845487-f9ph9,Uid:2af07391-9b64-40af-835a-1d50f76831e1,Namespace:calico-system,Attempt:1,}" Mar 4 01:11:39.533139 containerd[1585]: 2026-03-04 01:11:39.326 [INFO][3931] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="139f239d0143e298ee4d7ce9e4e561fb3bfed461450675f5c42a0608b3d15680" Mar 4 01:11:39.533139 containerd[1585]: 2026-03-04 01:11:39.326 [INFO][3931] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="139f239d0143e298ee4d7ce9e4e561fb3bfed461450675f5c42a0608b3d15680" iface="eth0" netns="/var/run/netns/cni-081f96bf-6bce-5f3e-388d-fe35bd6408f2" Mar 4 01:11:39.533139 containerd[1585]: 2026-03-04 01:11:39.327 [INFO][3931] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="139f239d0143e298ee4d7ce9e4e561fb3bfed461450675f5c42a0608b3d15680" iface="eth0" netns="/var/run/netns/cni-081f96bf-6bce-5f3e-388d-fe35bd6408f2" Mar 4 01:11:39.533139 containerd[1585]: 2026-03-04 01:11:39.328 [INFO][3931] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="139f239d0143e298ee4d7ce9e4e561fb3bfed461450675f5c42a0608b3d15680" iface="eth0" netns="/var/run/netns/cni-081f96bf-6bce-5f3e-388d-fe35bd6408f2" Mar 4 01:11:39.533139 containerd[1585]: 2026-03-04 01:11:39.328 [INFO][3931] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="139f239d0143e298ee4d7ce9e4e561fb3bfed461450675f5c42a0608b3d15680" Mar 4 01:11:39.533139 containerd[1585]: 2026-03-04 01:11:39.328 [INFO][3931] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="139f239d0143e298ee4d7ce9e4e561fb3bfed461450675f5c42a0608b3d15680" Mar 4 01:11:39.533139 containerd[1585]: 2026-03-04 01:11:39.440 [INFO][3991] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="139f239d0143e298ee4d7ce9e4e561fb3bfed461450675f5c42a0608b3d15680" HandleID="k8s-pod-network.139f239d0143e298ee4d7ce9e4e561fb3bfed461450675f5c42a0608b3d15680" Workload="localhost-k8s-calico--apiserver--657b845487--rwtgg-eth0" Mar 4 01:11:39.533139 containerd[1585]: 2026-03-04 01:11:39.440 [INFO][3991] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:11:39.533139 containerd[1585]: 2026-03-04 01:11:39.508 [INFO][3991] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:11:39.533139 containerd[1585]: 2026-03-04 01:11:39.519 [WARNING][3991] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="139f239d0143e298ee4d7ce9e4e561fb3bfed461450675f5c42a0608b3d15680" HandleID="k8s-pod-network.139f239d0143e298ee4d7ce9e4e561fb3bfed461450675f5c42a0608b3d15680" Workload="localhost-k8s-calico--apiserver--657b845487--rwtgg-eth0" Mar 4 01:11:39.533139 containerd[1585]: 2026-03-04 01:11:39.519 [INFO][3991] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="139f239d0143e298ee4d7ce9e4e561fb3bfed461450675f5c42a0608b3d15680" HandleID="k8s-pod-network.139f239d0143e298ee4d7ce9e4e561fb3bfed461450675f5c42a0608b3d15680" Workload="localhost-k8s-calico--apiserver--657b845487--rwtgg-eth0" Mar 4 01:11:39.533139 containerd[1585]: 2026-03-04 01:11:39.526 [INFO][3991] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:11:39.533139 containerd[1585]: 2026-03-04 01:11:39.530 [INFO][3931] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="139f239d0143e298ee4d7ce9e4e561fb3bfed461450675f5c42a0608b3d15680" Mar 4 01:11:39.533875 containerd[1585]: time="2026-03-04T01:11:39.533820242Z" level=info msg="TearDown network for sandbox \"139f239d0143e298ee4d7ce9e4e561fb3bfed461450675f5c42a0608b3d15680\" successfully" Mar 4 01:11:39.533875 containerd[1585]: time="2026-03-04T01:11:39.533867750Z" level=info msg="StopPodSandbox for \"139f239d0143e298ee4d7ce9e4e561fb3bfed461450675f5c42a0608b3d15680\" returns successfully" Mar 4 01:11:39.535057 containerd[1585]: time="2026-03-04T01:11:39.535026930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-657b845487-rwtgg,Uid:03c35491-5bc5-4312-a5b8-da3b8fa8bdbc,Namespace:calico-system,Attempt:1,}" Mar 4 01:11:39.553962 kubelet[2680]: I0304 01:11:39.553467 2680 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/c5a645f6-4149-4d70-9617-5ae452f094ab-nginx-config\") on node \"localhost\" DevicePath \"\"" Mar 4 01:11:39.553962 kubelet[2680]: I0304 01:11:39.553544 2680 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c5a645f6-4149-4d70-9617-5ae452f094ab-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Mar 4 01:11:39.553962 kubelet[2680]: I0304 01:11:39.553570 2680 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2fzx6\" (UniqueName: \"kubernetes.io/projected/c5a645f6-4149-4d70-9617-5ae452f094ab-kube-api-access-2fzx6\") on node \"localhost\" DevicePath \"\"" Mar 4 01:11:39.553962 kubelet[2680]: I0304 01:11:39.553588 2680 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5a645f6-4149-4d70-9617-5ae452f094ab-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Mar 4 01:11:39.607274 systemd-networkd[1249]: cali03b58da7289: Link UP Mar 4 01:11:39.607599 systemd-networkd[1249]: cali03b58da7289: Gained carrier Mar 4 01:11:39.633196 containerd[1585]: time="2026-03-04T01:11:39.632854630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4rchx,Uid:d7d28883-d73e-4c89-9e3a-693929b745d0,Namespace:calico-system,Attempt:0,}" Mar 4 01:11:39.636015 containerd[1585]: 2026-03-04 01:11:39.319 [ERROR][3933] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 4 01:11:39.636015 containerd[1585]: 2026-03-04 01:11:39.349 [INFO][3933] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--5b85766d88--nm8vg-eth0 goldmane-5b85766d88- calico-system f48d5d14-c523-4a8c-a5fc-905a4ef84d9b 877 0 2026-03-04 01:11:25 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-5b85766d88-nm8vg eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali03b58da7289 [] [] }} ContainerID="bb82f90e9cbf1b1daecb8b586bc5ef724a8363495496cae8f8f280b219041479" Namespace="calico-system" Pod="goldmane-5b85766d88-nm8vg" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--nm8vg-" Mar 4 01:11:39.636015 containerd[1585]: 2026-03-04 01:11:39.349 [INFO][3933] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="bb82f90e9cbf1b1daecb8b586bc5ef724a8363495496cae8f8f280b219041479" Namespace="calico-system" Pod="goldmane-5b85766d88-nm8vg" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--nm8vg-eth0" Mar 4 01:11:39.636015 containerd[1585]: 2026-03-04 01:11:39.416 [INFO][4008] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bb82f90e9cbf1b1daecb8b586bc5ef724a8363495496cae8f8f280b219041479" HandleID="k8s-pod-network.bb82f90e9cbf1b1daecb8b586bc5ef724a8363495496cae8f8f280b219041479" Workload="localhost-k8s-goldmane--5b85766d88--nm8vg-eth0" Mar 4 01:11:39.636015 containerd[1585]: 2026-03-04 01:11:39.513 [INFO][4008] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="bb82f90e9cbf1b1daecb8b586bc5ef724a8363495496cae8f8f280b219041479" HandleID="k8s-pod-network.bb82f90e9cbf1b1daecb8b586bc5ef724a8363495496cae8f8f280b219041479" Workload="localhost-k8s-goldmane--5b85766d88--nm8vg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00010e2c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-5b85766d88-nm8vg", "timestamp":"2026-03-04 01:11:39.416202943 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0005f8dc0)} Mar 4 01:11:39.636015 containerd[1585]: 2026-03-04 01:11:39.513 [INFO][4008] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:11:39.636015 containerd[1585]: 2026-03-04 01:11:39.526 [INFO][4008] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:11:39.636015 containerd[1585]: 2026-03-04 01:11:39.527 [INFO][4008] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 4 01:11:39.636015 containerd[1585]: 2026-03-04 01:11:39.532 [INFO][4008] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.bb82f90e9cbf1b1daecb8b586bc5ef724a8363495496cae8f8f280b219041479" host="localhost" Mar 4 01:11:39.636015 containerd[1585]: 2026-03-04 01:11:39.539 [INFO][4008] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 4 01:11:39.636015 containerd[1585]: 2026-03-04 01:11:39.548 [INFO][4008] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 4 01:11:39.636015 containerd[1585]: 2026-03-04 01:11:39.552 [INFO][4008] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 4 01:11:39.636015 containerd[1585]: 2026-03-04 01:11:39.556 [INFO][4008] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 4 01:11:39.636015 containerd[1585]: 2026-03-04 01:11:39.556 [INFO][4008] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bb82f90e9cbf1b1daecb8b586bc5ef724a8363495496cae8f8f280b219041479" host="localhost" Mar 4 01:11:39.636015 containerd[1585]: 2026-03-04 01:11:39.560 [INFO][4008] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.bb82f90e9cbf1b1daecb8b586bc5ef724a8363495496cae8f8f280b219041479 Mar 4 01:11:39.636015 containerd[1585]: 2026-03-04 01:11:39.571 [INFO][4008] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bb82f90e9cbf1b1daecb8b586bc5ef724a8363495496cae8f8f280b219041479" host="localhost" Mar 4 01:11:39.636015 containerd[1585]: 2026-03-04 01:11:39.577 [INFO][4008] 
ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.bb82f90e9cbf1b1daecb8b586bc5ef724a8363495496cae8f8f280b219041479" host="localhost" Mar 4 01:11:39.636015 containerd[1585]: 2026-03-04 01:11:39.578 [INFO][4008] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.bb82f90e9cbf1b1daecb8b586bc5ef724a8363495496cae8f8f280b219041479" host="localhost" Mar 4 01:11:39.636015 containerd[1585]: 2026-03-04 01:11:39.578 [INFO][4008] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:11:39.636015 containerd[1585]: 2026-03-04 01:11:39.578 [INFO][4008] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="bb82f90e9cbf1b1daecb8b586bc5ef724a8363495496cae8f8f280b219041479" HandleID="k8s-pod-network.bb82f90e9cbf1b1daecb8b586bc5ef724a8363495496cae8f8f280b219041479" Workload="localhost-k8s-goldmane--5b85766d88--nm8vg-eth0" Mar 4 01:11:39.636835 containerd[1585]: 2026-03-04 01:11:39.588 [INFO][3933] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bb82f90e9cbf1b1daecb8b586bc5ef724a8363495496cae8f8f280b219041479" Namespace="calico-system" Pod="goldmane-5b85766d88-nm8vg" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--nm8vg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--nm8vg-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"f48d5d14-c523-4a8c-a5fc-905a4ef84d9b", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 11, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-5b85766d88-nm8vg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali03b58da7289", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:11:39.636835 containerd[1585]: 2026-03-04 01:11:39.588 [INFO][3933] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="bb82f90e9cbf1b1daecb8b586bc5ef724a8363495496cae8f8f280b219041479" Namespace="calico-system" Pod="goldmane-5b85766d88-nm8vg" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--nm8vg-eth0" Mar 4 01:11:39.636835 containerd[1585]: 2026-03-04 01:11:39.588 [INFO][3933] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali03b58da7289 ContainerID="bb82f90e9cbf1b1daecb8b586bc5ef724a8363495496cae8f8f280b219041479" Namespace="calico-system" Pod="goldmane-5b85766d88-nm8vg" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--nm8vg-eth0" Mar 4 01:11:39.636835 containerd[1585]: 2026-03-04 01:11:39.614 [INFO][3933] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="bb82f90e9cbf1b1daecb8b586bc5ef724a8363495496cae8f8f280b219041479" Namespace="calico-system" Pod="goldmane-5b85766d88-nm8vg" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--nm8vg-eth0" Mar 4 01:11:39.636835 containerd[1585]: 2026-03-04 01:11:39.614 [INFO][3933] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bb82f90e9cbf1b1daecb8b586bc5ef724a8363495496cae8f8f280b219041479" Namespace="calico-system" Pod="goldmane-5b85766d88-nm8vg" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--nm8vg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--nm8vg-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"f48d5d14-c523-4a8c-a5fc-905a4ef84d9b", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 11, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bb82f90e9cbf1b1daecb8b586bc5ef724a8363495496cae8f8f280b219041479", Pod:"goldmane-5b85766d88-nm8vg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali03b58da7289", MAC:"c2:74:0e:f1:ee:bc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:11:39.636835 containerd[1585]: 2026-03-04 01:11:39.630 [INFO][3933] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bb82f90e9cbf1b1daecb8b586bc5ef724a8363495496cae8f8f280b219041479" Namespace="calico-system" Pod="goldmane-5b85766d88-nm8vg" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--nm8vg-eth0" Mar 4 01:11:39.703203 containerd[1585]: time="2026-03-04T01:11:39.703073905Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:11:39.703621 containerd[1585]: time="2026-03-04T01:11:39.703525316Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:11:39.703816 containerd[1585]: time="2026-03-04T01:11:39.703738182Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:11:39.704482 containerd[1585]: time="2026-03-04T01:11:39.704451140Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:11:39.712191 systemd-networkd[1249]: cali43b8f6a1afb: Link UP Mar 4 01:11:39.719456 systemd-networkd[1249]: cali43b8f6a1afb: Gained carrier Mar 4 01:11:39.747067 systemd-resolved[1476]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 4 01:11:39.766600 containerd[1585]: 2026-03-04 01:11:39.318 [ERROR][3944] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 4 01:11:39.766600 containerd[1585]: 2026-03-04 01:11:39.351 [INFO][3944] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--7998p-eth0 coredns-674b8bbfcf- kube-system c8183cfe-8bfc-4067-aea8-5244cdfa9694 878 0 2026-03-04 01:11:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-7998p eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali43b8f6a1afb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a1cfaa148a7552d4a6f2bc704b487de2dcc5e8e2c814b8b83a31783a9cebaf4d" Namespace="kube-system" Pod="coredns-674b8bbfcf-7998p" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7998p-" Mar 4 01:11:39.766600 containerd[1585]: 2026-03-04 01:11:39.351 [INFO][3944] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a1cfaa148a7552d4a6f2bc704b487de2dcc5e8e2c814b8b83a31783a9cebaf4d" Namespace="kube-system" Pod="coredns-674b8bbfcf-7998p" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7998p-eth0" Mar 4 01:11:39.766600 containerd[1585]: 2026-03-04 01:11:39.448 [INFO][4007] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a1cfaa148a7552d4a6f2bc704b487de2dcc5e8e2c814b8b83a31783a9cebaf4d" HandleID="k8s-pod-network.a1cfaa148a7552d4a6f2bc704b487de2dcc5e8e2c814b8b83a31783a9cebaf4d" Workload="localhost-k8s-coredns--674b8bbfcf--7998p-eth0" Mar 4 01:11:39.766600 containerd[1585]: 2026-03-04 01:11:39.513 [INFO][4007] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="a1cfaa148a7552d4a6f2bc704b487de2dcc5e8e2c814b8b83a31783a9cebaf4d" HandleID="k8s-pod-network.a1cfaa148a7552d4a6f2bc704b487de2dcc5e8e2c814b8b83a31783a9cebaf4d" Workload="localhost-k8s-coredns--674b8bbfcf--7998p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003d8270), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-7998p", "timestamp":"2026-03-04 01:11:39.448379231 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000196580)} Mar 4 01:11:39.766600 containerd[1585]: 2026-03-04 01:11:39.514 [INFO][4007] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:11:39.766600 containerd[1585]: 2026-03-04 01:11:39.578 [INFO][4007] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
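The podStartSLOduration record at 01:11:39.160 above is internally consistent: end-to-end startup is observedRunningTime minus podCreationTimestamp, 01:11:39.160225265 - 01:11:25 = 14.160225265 s; the image-pull window is lastFinishedPulling minus firstStartedPulling, m=+28.591106526 - m=+17.592864842 = 10.998241684 s; and the SLO duration is their difference, 3.161983581 s, i.e. startup time excluding image pulling. A quick check in Go (the decomposition is inferred from these numbers matching, not quoted from kubelet source):

    package main

    import "fmt"

    func main() {
        // Offsets copied verbatim from the podStartSLOduration record.
        e2e := 39.160225265 - 25.0          // observedRunningTime - podCreationTimestamp, within minute 01:11
        pull := 28.591106526 - 17.592864842 // lastFinishedPulling - firstStartedPulling (m=+ offsets)
        fmt.Printf("e2e=%.9fs pull=%.9fs slo=%.9fs\n", e2e, pull, e2e-pull)
        // Prints: e2e=14.160225265s pull=10.998241684s slo=3.161983581s
    }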
Mar 4 01:11:39.766600 containerd[1585]: 2026-03-04 01:11:39.578 [INFO][4007] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 4 01:11:39.766600 containerd[1585]: 2026-03-04 01:11:39.634 [INFO][4007] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.a1cfaa148a7552d4a6f2bc704b487de2dcc5e8e2c814b8b83a31783a9cebaf4d" host="localhost" Mar 4 01:11:39.766600 containerd[1585]: 2026-03-04 01:11:39.649 [INFO][4007] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 4 01:11:39.766600 containerd[1585]: 2026-03-04 01:11:39.659 [INFO][4007] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 4 01:11:39.766600 containerd[1585]: 2026-03-04 01:11:39.662 [INFO][4007] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 4 01:11:39.766600 containerd[1585]: 2026-03-04 01:11:39.665 [INFO][4007] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 4 01:11:39.766600 containerd[1585]: 2026-03-04 01:11:39.666 [INFO][4007] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a1cfaa148a7552d4a6f2bc704b487de2dcc5e8e2c814b8b83a31783a9cebaf4d" host="localhost" Mar 4 01:11:39.766600 containerd[1585]: 2026-03-04 01:11:39.668 [INFO][4007] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.a1cfaa148a7552d4a6f2bc704b487de2dcc5e8e2c814b8b83a31783a9cebaf4d Mar 4 01:11:39.766600 containerd[1585]: 2026-03-04 01:11:39.675 [INFO][4007] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a1cfaa148a7552d4a6f2bc704b487de2dcc5e8e2c814b8b83a31783a9cebaf4d" host="localhost" Mar 4 01:11:39.766600 containerd[1585]: 2026-03-04 01:11:39.687 [INFO][4007] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.a1cfaa148a7552d4a6f2bc704b487de2dcc5e8e2c814b8b83a31783a9cebaf4d" host="localhost" Mar 4 01:11:39.766600 containerd[1585]: 2026-03-04 01:11:39.687 [INFO][4007] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.a1cfaa148a7552d4a6f2bc704b487de2dcc5e8e2c814b8b83a31783a9cebaf4d" host="localhost" Mar 4 01:11:39.766600 containerd[1585]: 2026-03-04 01:11:39.687 [INFO][4007] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
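Both successful ADDs so far walk the same IPAM path: confirm this host's affinity for block 192.168.88.128/26, load the block, and assign the next free address from it, goldmane getting 192.168.88.129 and coredns-674b8bbfcf-7998p getting 192.168.88.130, all serialized by the host-wide lock. The sketch below shows next-free-address selection over such a block; it is a toy stand-in for ipam.go's block assignment (real Calico persists allocations and handles in its datastore, not an in-memory set):

    package main

    import (
        "fmt"
        "net/netip"
    )

    // nextFree returns the first unallocated address after the block's base
    // address. Toy model only: no handles, no datastore, no affinity checks.
    func nextFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
        for a := block.Addr().Next(); block.Contains(a); a = a.Next() {
            if !used[a] {
                return a, true
            }
        }
        return netip.Addr{}, false
    }

    func main() {
        block := netip.MustParsePrefix("192.168.88.128/26")
        used := map[netip.Addr]bool{}
        for _, pod := range []string{"goldmane-5b85766d88-nm8vg", "coredns-674b8bbfcf-7998p"} {
            a, ok := nextFree(block, used)
            if !ok {
                panic("block exhausted")
            }
            used[a] = true
            fmt.Println(pod, "->", a) // .129, then .130, matching the log
        }
    }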
Mar 4 01:11:39.766600 containerd[1585]: 2026-03-04 01:11:39.687 [INFO][4007] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="a1cfaa148a7552d4a6f2bc704b487de2dcc5e8e2c814b8b83a31783a9cebaf4d" HandleID="k8s-pod-network.a1cfaa148a7552d4a6f2bc704b487de2dcc5e8e2c814b8b83a31783a9cebaf4d" Workload="localhost-k8s-coredns--674b8bbfcf--7998p-eth0" Mar 4 01:11:39.767272 containerd[1585]: 2026-03-04 01:11:39.691 [INFO][3944] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a1cfaa148a7552d4a6f2bc704b487de2dcc5e8e2c814b8b83a31783a9cebaf4d" Namespace="kube-system" Pod="coredns-674b8bbfcf-7998p" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7998p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--7998p-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c8183cfe-8bfc-4067-aea8-5244cdfa9694", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 11, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-7998p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali43b8f6a1afb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:11:39.767272 containerd[1585]: 2026-03-04 01:11:39.692 [INFO][3944] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="a1cfaa148a7552d4a6f2bc704b487de2dcc5e8e2c814b8b83a31783a9cebaf4d" Namespace="kube-system" Pod="coredns-674b8bbfcf-7998p" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7998p-eth0" Mar 4 01:11:39.767272 containerd[1585]: 2026-03-04 01:11:39.692 [INFO][3944] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali43b8f6a1afb ContainerID="a1cfaa148a7552d4a6f2bc704b487de2dcc5e8e2c814b8b83a31783a9cebaf4d" Namespace="kube-system" Pod="coredns-674b8bbfcf-7998p" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7998p-eth0" Mar 4 01:11:39.767272 containerd[1585]: 2026-03-04 01:11:39.718 [INFO][3944] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a1cfaa148a7552d4a6f2bc704b487de2dcc5e8e2c814b8b83a31783a9cebaf4d" Namespace="kube-system" Pod="coredns-674b8bbfcf-7998p" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7998p-eth0" Mar 4 01:11:39.767272 
containerd[1585]: 2026-03-04 01:11:39.727 [INFO][3944] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a1cfaa148a7552d4a6f2bc704b487de2dcc5e8e2c814b8b83a31783a9cebaf4d" Namespace="kube-system" Pod="coredns-674b8bbfcf-7998p" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7998p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--7998p-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c8183cfe-8bfc-4067-aea8-5244cdfa9694", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 11, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a1cfaa148a7552d4a6f2bc704b487de2dcc5e8e2c814b8b83a31783a9cebaf4d", Pod:"coredns-674b8bbfcf-7998p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali43b8f6a1afb", MAC:"ca:aa:9f:6b:6a:5b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:11:39.767272 containerd[1585]: 2026-03-04 01:11:39.749 [INFO][3944] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a1cfaa148a7552d4a6f2bc704b487de2dcc5e8e2c814b8b83a31783a9cebaf4d" Namespace="kube-system" Pod="coredns-674b8bbfcf-7998p" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7998p-eth0" Mar 4 01:11:39.801275 containerd[1585]: time="2026-03-04T01:11:39.800724776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-nm8vg,Uid:f48d5d14-c523-4a8c-a5fc-905a4ef84d9b,Namespace:calico-system,Attempt:0,} returns sandbox id \"bb82f90e9cbf1b1daecb8b586bc5ef724a8363495496cae8f8f280b219041479\"" Mar 4 01:11:39.803361 containerd[1585]: time="2026-03-04T01:11:39.803337103Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 4 01:11:39.840897 containerd[1585]: time="2026-03-04T01:11:39.840057216Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:11:39.840897 containerd[1585]: time="2026-03-04T01:11:39.840158875Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:11:39.840897 containerd[1585]: time="2026-03-04T01:11:39.840169205Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:11:39.840897 containerd[1585]: time="2026-03-04T01:11:39.840350993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:11:39.847939 systemd-networkd[1249]: cali6ca0aba22ee: Link UP Mar 4 01:11:39.850207 systemd-networkd[1249]: cali6ca0aba22ee: Gained carrier Mar 4 01:11:39.878054 containerd[1585]: 2026-03-04 01:11:39.601 [ERROR][4030] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 4 01:11:39.878054 containerd[1585]: 2026-03-04 01:11:39.625 [INFO][4030] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--657b845487--f9ph9-eth0 calico-apiserver-657b845487- calico-system 2af07391-9b64-40af-835a-1d50f76831e1 897 0 2026-03-04 01:11:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:657b845487 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-657b845487-f9ph9 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali6ca0aba22ee [] [] }} ContainerID="e28da180a28ab37eeee607cb992edd28d29aa2cee418dd9847808406d6b8c391" Namespace="calico-system" Pod="calico-apiserver-657b845487-f9ph9" WorkloadEndpoint="localhost-k8s-calico--apiserver--657b845487--f9ph9-" Mar 4 01:11:39.878054 containerd[1585]: 2026-03-04 01:11:39.626 [INFO][4030] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e28da180a28ab37eeee607cb992edd28d29aa2cee418dd9847808406d6b8c391" Namespace="calico-system" Pod="calico-apiserver-657b845487-f9ph9" WorkloadEndpoint="localhost-k8s-calico--apiserver--657b845487--f9ph9-eth0" Mar 4 01:11:39.878054 containerd[1585]: 2026-03-04 01:11:39.727 [INFO][4072] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e28da180a28ab37eeee607cb992edd28d29aa2cee418dd9847808406d6b8c391" HandleID="k8s-pod-network.e28da180a28ab37eeee607cb992edd28d29aa2cee418dd9847808406d6b8c391" Workload="localhost-k8s-calico--apiserver--657b845487--f9ph9-eth0" Mar 4 01:11:39.878054 containerd[1585]: 2026-03-04 01:11:39.758 [INFO][4072] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="e28da180a28ab37eeee607cb992edd28d29aa2cee418dd9847808406d6b8c391" HandleID="k8s-pod-network.e28da180a28ab37eeee607cb992edd28d29aa2cee418dd9847808406d6b8c391" Workload="localhost-k8s-calico--apiserver--657b845487--f9ph9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004ff70), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-657b845487-f9ph9", "timestamp":"2026-03-04 01:11:39.727049913 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002162c0)} Mar 4 01:11:39.878054 containerd[1585]: 2026-03-04 01:11:39.758 [INFO][4072] 
ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:11:39.878054 containerd[1585]: 2026-03-04 01:11:39.759 [INFO][4072] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:11:39.878054 containerd[1585]: 2026-03-04 01:11:39.759 [INFO][4072] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 4 01:11:39.878054 containerd[1585]: 2026-03-04 01:11:39.762 [INFO][4072] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.e28da180a28ab37eeee607cb992edd28d29aa2cee418dd9847808406d6b8c391" host="localhost" Mar 4 01:11:39.878054 containerd[1585]: 2026-03-04 01:11:39.769 [INFO][4072] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 4 01:11:39.878054 containerd[1585]: 2026-03-04 01:11:39.777 [INFO][4072] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 4 01:11:39.878054 containerd[1585]: 2026-03-04 01:11:39.781 [INFO][4072] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 4 01:11:39.878054 containerd[1585]: 2026-03-04 01:11:39.797 [INFO][4072] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 4 01:11:39.878054 containerd[1585]: 2026-03-04 01:11:39.797 [INFO][4072] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e28da180a28ab37eeee607cb992edd28d29aa2cee418dd9847808406d6b8c391" host="localhost" Mar 4 01:11:39.878054 containerd[1585]: 2026-03-04 01:11:39.802 [INFO][4072] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.e28da180a28ab37eeee607cb992edd28d29aa2cee418dd9847808406d6b8c391 Mar 4 01:11:39.878054 containerd[1585]: 2026-03-04 01:11:39.812 [INFO][4072] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e28da180a28ab37eeee607cb992edd28d29aa2cee418dd9847808406d6b8c391" host="localhost" Mar 4 01:11:39.878054 containerd[1585]: 2026-03-04 01:11:39.818 [INFO][4072] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.e28da180a28ab37eeee607cb992edd28d29aa2cee418dd9847808406d6b8c391" host="localhost" Mar 4 01:11:39.878054 containerd[1585]: 2026-03-04 01:11:39.818 [INFO][4072] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.e28da180a28ab37eeee607cb992edd28d29aa2cee418dd9847808406d6b8c391" host="localhost" Mar 4 01:11:39.878054 containerd[1585]: 2026-03-04 01:11:39.818 [INFO][4072] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
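For reference, the entries above trace one complete Calico IPAM transaction: acquire the host-wide lock, confirm this host's affinity for the 192.168.88.128/26 block, claim an address under a per-container handle, and release the lock. Below is a minimal Go sketch of issuing the same request through libcalico-go, using only field names visible in the AutoAssignArgs dump above; the import paths and exact signatures vary between Calico releases, so treat this as illustrative, not the cni-plugin's literal code.

// Sketch only: mirrors the AutoAssignArgs logged by ipam_plugin.go above.
// Import paths and signatures are assumptions -- they differ across
// Calico releases -- and this is not the cni-plugin's actual code.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/projectcalico/calico/libcalico-go/lib/apiconfig"
	"github.com/projectcalico/calico/libcalico-go/lib/clientv3"
	"github.com/projectcalico/calico/libcalico-go/lib/ipam"
)

func main() {
	// Load datastore config from the environment, as calico components do.
	cfg, err := apiconfig.LoadClientConfigFromEnvironment()
	if err != nil {
		log.Fatal(err)
	}
	c, err := clientv3.New(*cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Handle format as seen in the log: "k8s-pod-network.<container-id>".
	handle := "k8s-pod-network.example"
	v4, _, err := c.IPAM().AutoAssign(context.Background(), ipam.AutoAssignArgs{
		Num4:     1, // "Auto-assign 1 ipv4, 0 ipv6 addrs" in the log
		Num6:     0,
		HandleID: &handle,
		Hostname: "localhost",
		Attrs: map[string]string{ // same attributes the plugin records
			"namespace": "calico-system",
			"node":      "localhost",
		},
		// The dump above also shows IntendedUse:"Workload"; its Go type
		// differs between releases, so it is omitted here.
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("assigned:", v4) // e.g. 192.168.88.131/26, as logged above
}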
Mar 4 01:11:39.878054 containerd[1585]: 2026-03-04 01:11:39.818 [INFO][4072] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="e28da180a28ab37eeee607cb992edd28d29aa2cee418dd9847808406d6b8c391" HandleID="k8s-pod-network.e28da180a28ab37eeee607cb992edd28d29aa2cee418dd9847808406d6b8c391" Workload="localhost-k8s-calico--apiserver--657b845487--f9ph9-eth0" Mar 4 01:11:39.879189 containerd[1585]: 2026-03-04 01:11:39.823 [INFO][4030] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e28da180a28ab37eeee607cb992edd28d29aa2cee418dd9847808406d6b8c391" Namespace="calico-system" Pod="calico-apiserver-657b845487-f9ph9" WorkloadEndpoint="localhost-k8s-calico--apiserver--657b845487--f9ph9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--657b845487--f9ph9-eth0", GenerateName:"calico-apiserver-657b845487-", Namespace:"calico-system", SelfLink:"", UID:"2af07391-9b64-40af-835a-1d50f76831e1", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 11, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"657b845487", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-657b845487-f9ph9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali6ca0aba22ee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:11:39.879189 containerd[1585]: 2026-03-04 01:11:39.823 [INFO][4030] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="e28da180a28ab37eeee607cb992edd28d29aa2cee418dd9847808406d6b8c391" Namespace="calico-system" Pod="calico-apiserver-657b845487-f9ph9" WorkloadEndpoint="localhost-k8s-calico--apiserver--657b845487--f9ph9-eth0" Mar 4 01:11:39.879189 containerd[1585]: 2026-03-04 01:11:39.824 [INFO][4030] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6ca0aba22ee ContainerID="e28da180a28ab37eeee607cb992edd28d29aa2cee418dd9847808406d6b8c391" Namespace="calico-system" Pod="calico-apiserver-657b845487-f9ph9" WorkloadEndpoint="localhost-k8s-calico--apiserver--657b845487--f9ph9-eth0" Mar 4 01:11:39.879189 containerd[1585]: 2026-03-04 01:11:39.851 [INFO][4030] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e28da180a28ab37eeee607cb992edd28d29aa2cee418dd9847808406d6b8c391" Namespace="calico-system" Pod="calico-apiserver-657b845487-f9ph9" WorkloadEndpoint="localhost-k8s-calico--apiserver--657b845487--f9ph9-eth0" Mar 4 01:11:39.879189 containerd[1585]: 2026-03-04 01:11:39.851 [INFO][4030] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="e28da180a28ab37eeee607cb992edd28d29aa2cee418dd9847808406d6b8c391" Namespace="calico-system" Pod="calico-apiserver-657b845487-f9ph9" WorkloadEndpoint="localhost-k8s-calico--apiserver--657b845487--f9ph9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--657b845487--f9ph9-eth0", GenerateName:"calico-apiserver-657b845487-", Namespace:"calico-system", SelfLink:"", UID:"2af07391-9b64-40af-835a-1d50f76831e1", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 11, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"657b845487", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e28da180a28ab37eeee607cb992edd28d29aa2cee418dd9847808406d6b8c391", Pod:"calico-apiserver-657b845487-f9ph9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali6ca0aba22ee", MAC:"da:21:97:bb:14:e4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:11:39.879189 containerd[1585]: 2026-03-04 01:11:39.869 [INFO][4030] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e28da180a28ab37eeee607cb992edd28d29aa2cee418dd9847808406d6b8c391" Namespace="calico-system" Pod="calico-apiserver-657b845487-f9ph9" WorkloadEndpoint="localhost-k8s-calico--apiserver--657b845487--f9ph9-eth0" Mar 4 01:11:39.882830 systemd-resolved[1476]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 4 01:11:39.932226 systemd-networkd[1249]: cali94718c5ab39: Link UP Mar 4 01:11:39.932527 systemd-networkd[1249]: cali94718c5ab39: Gained carrier Mar 4 01:11:39.956743 containerd[1585]: 2026-03-04 01:11:39.601 [ERROR][4038] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 4 01:11:39.956743 containerd[1585]: 2026-03-04 01:11:39.638 [INFO][4038] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--657b845487--rwtgg-eth0 calico-apiserver-657b845487- calico-system 03c35491-5bc5-4312-a5b8-da3b8fa8bdbc 896 0 2026-03-04 01:11:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:657b845487 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-657b845487-rwtgg eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali94718c5ab39 [] [] }} ContainerID="baa37561fbb02c515db0abcbdd442790c30fde66dec16f63b4a4355ed2d561a1" 
Namespace="calico-system" Pod="calico-apiserver-657b845487-rwtgg" WorkloadEndpoint="localhost-k8s-calico--apiserver--657b845487--rwtgg-" Mar 4 01:11:39.956743 containerd[1585]: 2026-03-04 01:11:39.645 [INFO][4038] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="baa37561fbb02c515db0abcbdd442790c30fde66dec16f63b4a4355ed2d561a1" Namespace="calico-system" Pod="calico-apiserver-657b845487-rwtgg" WorkloadEndpoint="localhost-k8s-calico--apiserver--657b845487--rwtgg-eth0" Mar 4 01:11:39.956743 containerd[1585]: 2026-03-04 01:11:39.785 [INFO][4080] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="baa37561fbb02c515db0abcbdd442790c30fde66dec16f63b4a4355ed2d561a1" HandleID="k8s-pod-network.baa37561fbb02c515db0abcbdd442790c30fde66dec16f63b4a4355ed2d561a1" Workload="localhost-k8s-calico--apiserver--657b845487--rwtgg-eth0" Mar 4 01:11:39.956743 containerd[1585]: 2026-03-04 01:11:39.806 [INFO][4080] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="baa37561fbb02c515db0abcbdd442790c30fde66dec16f63b4a4355ed2d561a1" HandleID="k8s-pod-network.baa37561fbb02c515db0abcbdd442790c30fde66dec16f63b4a4355ed2d561a1" Workload="localhost-k8s-calico--apiserver--657b845487--rwtgg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ef5f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-657b845487-rwtgg", "timestamp":"2026-03-04 01:11:39.785654868 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001458c0)} Mar 4 01:11:39.956743 containerd[1585]: 2026-03-04 01:11:39.807 [INFO][4080] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:11:39.956743 containerd[1585]: 2026-03-04 01:11:39.820 [INFO][4080] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 4 01:11:39.956743 containerd[1585]: 2026-03-04 01:11:39.820 [INFO][4080] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 4 01:11:39.956743 containerd[1585]: 2026-03-04 01:11:39.864 [INFO][4080] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.baa37561fbb02c515db0abcbdd442790c30fde66dec16f63b4a4355ed2d561a1" host="localhost" Mar 4 01:11:39.956743 containerd[1585]: 2026-03-04 01:11:39.872 [INFO][4080] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 4 01:11:39.956743 containerd[1585]: 2026-03-04 01:11:39.881 [INFO][4080] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 4 01:11:39.956743 containerd[1585]: 2026-03-04 01:11:39.885 [INFO][4080] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 4 01:11:39.956743 containerd[1585]: 2026-03-04 01:11:39.889 [INFO][4080] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 4 01:11:39.956743 containerd[1585]: 2026-03-04 01:11:39.889 [INFO][4080] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.baa37561fbb02c515db0abcbdd442790c30fde66dec16f63b4a4355ed2d561a1" host="localhost" Mar 4 01:11:39.956743 containerd[1585]: 2026-03-04 01:11:39.900 [INFO][4080] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.baa37561fbb02c515db0abcbdd442790c30fde66dec16f63b4a4355ed2d561a1 Mar 4 01:11:39.956743 containerd[1585]: 2026-03-04 01:11:39.907 [INFO][4080] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.baa37561fbb02c515db0abcbdd442790c30fde66dec16f63b4a4355ed2d561a1" host="localhost" Mar 4 01:11:39.956743 containerd[1585]: 2026-03-04 01:11:39.916 [INFO][4080] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.baa37561fbb02c515db0abcbdd442790c30fde66dec16f63b4a4355ed2d561a1" host="localhost" Mar 4 01:11:39.956743 containerd[1585]: 2026-03-04 01:11:39.916 [INFO][4080] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.baa37561fbb02c515db0abcbdd442790c30fde66dec16f63b4a4355ed2d561a1" host="localhost" Mar 4 01:11:39.956743 containerd[1585]: 2026-03-04 01:11:39.916 [INFO][4080] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 4 01:11:39.956743 containerd[1585]: 2026-03-04 01:11:39.916 [INFO][4080] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="baa37561fbb02c515db0abcbdd442790c30fde66dec16f63b4a4355ed2d561a1" HandleID="k8s-pod-network.baa37561fbb02c515db0abcbdd442790c30fde66dec16f63b4a4355ed2d561a1" Workload="localhost-k8s-calico--apiserver--657b845487--rwtgg-eth0" Mar 4 01:11:39.957385 containerd[1585]: 2026-03-04 01:11:39.925 [INFO][4038] cni-plugin/k8s.go 418: Populated endpoint ContainerID="baa37561fbb02c515db0abcbdd442790c30fde66dec16f63b4a4355ed2d561a1" Namespace="calico-system" Pod="calico-apiserver-657b845487-rwtgg" WorkloadEndpoint="localhost-k8s-calico--apiserver--657b845487--rwtgg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--657b845487--rwtgg-eth0", GenerateName:"calico-apiserver-657b845487-", Namespace:"calico-system", SelfLink:"", UID:"03c35491-5bc5-4312-a5b8-da3b8fa8bdbc", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 11, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"657b845487", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-657b845487-rwtgg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali94718c5ab39", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:11:39.957385 containerd[1585]: 2026-03-04 01:11:39.925 [INFO][4038] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="baa37561fbb02c515db0abcbdd442790c30fde66dec16f63b4a4355ed2d561a1" Namespace="calico-system" Pod="calico-apiserver-657b845487-rwtgg" WorkloadEndpoint="localhost-k8s-calico--apiserver--657b845487--rwtgg-eth0" Mar 4 01:11:39.957385 containerd[1585]: 2026-03-04 01:11:39.925 [INFO][4038] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali94718c5ab39 ContainerID="baa37561fbb02c515db0abcbdd442790c30fde66dec16f63b4a4355ed2d561a1" Namespace="calico-system" Pod="calico-apiserver-657b845487-rwtgg" WorkloadEndpoint="localhost-k8s-calico--apiserver--657b845487--rwtgg-eth0" Mar 4 01:11:39.957385 containerd[1585]: 2026-03-04 01:11:39.932 [INFO][4038] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="baa37561fbb02c515db0abcbdd442790c30fde66dec16f63b4a4355ed2d561a1" Namespace="calico-system" Pod="calico-apiserver-657b845487-rwtgg" WorkloadEndpoint="localhost-k8s-calico--apiserver--657b845487--rwtgg-eth0" Mar 4 01:11:39.957385 containerd[1585]: 2026-03-04 01:11:39.934 [INFO][4038] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="baa37561fbb02c515db0abcbdd442790c30fde66dec16f63b4a4355ed2d561a1" Namespace="calico-system" Pod="calico-apiserver-657b845487-rwtgg" WorkloadEndpoint="localhost-k8s-calico--apiserver--657b845487--rwtgg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--657b845487--rwtgg-eth0", GenerateName:"calico-apiserver-657b845487-", Namespace:"calico-system", SelfLink:"", UID:"03c35491-5bc5-4312-a5b8-da3b8fa8bdbc", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 11, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"657b845487", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"baa37561fbb02c515db0abcbdd442790c30fde66dec16f63b4a4355ed2d561a1", Pod:"calico-apiserver-657b845487-rwtgg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali94718c5ab39", MAC:"76:7f:ef:67:15:b4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:11:39.957385 containerd[1585]: 2026-03-04 01:11:39.951 [INFO][4038] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="baa37561fbb02c515db0abcbdd442790c30fde66dec16f63b4a4355ed2d561a1" Namespace="calico-system" Pod="calico-apiserver-657b845487-rwtgg" WorkloadEndpoint="localhost-k8s-calico--apiserver--657b845487--rwtgg-eth0" Mar 4 01:11:39.960262 containerd[1585]: time="2026-03-04T01:11:39.958725834Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:11:39.960262 containerd[1585]: time="2026-03-04T01:11:39.958866675Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:11:39.960262 containerd[1585]: time="2026-03-04T01:11:39.958883557Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:11:39.960262 containerd[1585]: time="2026-03-04T01:11:39.959152788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:11:40.017490 systemd[1]: run-netns-cni\x2dd8642993\x2d7e37\x2de9be\x2d4a8c\x2df6d997f48b65.mount: Deactivated successfully. Mar 4 01:11:40.017697 systemd[1]: run-netns-cni\x2d081f96bf\x2d6bce\x2d5f3e\x2d388d\x2dfe35bd6408f2.mount: Deactivated successfully. Mar 4 01:11:40.017842 systemd[1]: run-netns-cni\x2d07993e0d\x2d776f\x2d8219\x2d4302\x2d32a3f77e712d.mount: Deactivated successfully. Mar 4 01:11:40.019609 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2591c2a2390db6eb45f2b6b48dc8096e502d8e7c4e44d1edcaca267beda6d180-shm.mount: Deactivated successfully. 
Mar 4 01:11:40.019821 systemd[1]: var-lib-kubelet-pods-c5a645f6\x2d4149\x2d4d70\x2d9617\x2d5ae452f094ab-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2fzx6.mount: Deactivated successfully. Mar 4 01:11:40.021279 systemd[1]: var-lib-kubelet-pods-c5a645f6\x2d4149\x2d4d70\x2d9617\x2d5ae452f094ab-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 4 01:11:40.027541 containerd[1585]: time="2026-03-04T01:11:40.026684872Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:11:40.027541 containerd[1585]: time="2026-03-04T01:11:40.026757959Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:11:40.027541 containerd[1585]: time="2026-03-04T01:11:40.026794717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:11:40.027541 containerd[1585]: time="2026-03-04T01:11:40.026947953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:11:40.033300 containerd[1585]: time="2026-03-04T01:11:40.031653451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7998p,Uid:c8183cfe-8bfc-4067-aea8-5244cdfa9694,Namespace:kube-system,Attempt:0,} returns sandbox id \"a1cfaa148a7552d4a6f2bc704b487de2dcc5e8e2c814b8b83a31783a9cebaf4d\"" Mar 4 01:11:40.035623 kubelet[2680]: E0304 01:11:40.035573 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:11:40.036914 systemd-resolved[1476]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 4 01:11:40.056827 containerd[1585]: time="2026-03-04T01:11:40.056510274Z" level=info msg="CreateContainer within sandbox \"a1cfaa148a7552d4a6f2bc704b487de2dcc5e8e2c814b8b83a31783a9cebaf4d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 4 01:11:40.077913 systemd-networkd[1249]: cali7d7991bd510: Link UP Mar 4 01:11:40.083035 systemd-networkd[1249]: cali7d7991bd510: Gained carrier Mar 4 01:11:40.094615 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4083147701.mount: Deactivated successfully. 
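The kubelet dns.go:153 "Nameserver limits exceeded" error repeated through this section is a warning, not a failure: glibc-style resolvers only honor the first three nameserver entries in resolv.conf, so kubelet trims the node's list when generating pod resolv.conf files and reports the survivors (1.1.1.1 1.0.0.1 8.8.8.8 here, implying the host file listed at least one more). A rough Go sketch of the same check follows, assuming a plain resolv.conf parser rather than kubelet's actual implementation.

// Rough sketch of the nameserver-limit check behind the dns.go log above:
// count "nameserver" lines in resolv.conf and report when more than the
// glibc limit of 3 are present. Not kubelet's actual code.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc MAXNS; extra entries are silently ignored

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("Nameserver limits exceeded: applied %v, omitted %v\n",
			servers[:maxNameservers], servers[maxNameservers:])
	}
}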
Mar 4 01:11:40.111265 systemd-resolved[1476]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 4 01:11:40.112057 containerd[1585]: time="2026-03-04T01:11:40.111876819Z" level=info msg="CreateContainer within sandbox \"a1cfaa148a7552d4a6f2bc704b487de2dcc5e8e2c814b8b83a31783a9cebaf4d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"391d695a4e11b32a9b39f2c60b9d0661dda99c3782e299c4b0d788740b66befc\"" Mar 4 01:11:40.116912 containerd[1585]: 2026-03-04 01:11:39.732 [ERROR][4081] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 4 01:11:40.116912 containerd[1585]: 2026-03-04 01:11:39.754 [INFO][4081] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--4rchx-eth0 csi-node-driver- calico-system d7d28883-d73e-4c89-9e3a-693929b745d0 712 0 2026-03-04 01:11:25 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-4rchx eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali7d7991bd510 [] [] }} ContainerID="8700c2f953304e03b7cced1e45b0b36d837efc2d51c153150d3026ebd67a085b" Namespace="calico-system" Pod="csi-node-driver-4rchx" WorkloadEndpoint="localhost-k8s-csi--node--driver--4rchx-" Mar 4 01:11:40.116912 containerd[1585]: 2026-03-04 01:11:39.755 [INFO][4081] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8700c2f953304e03b7cced1e45b0b36d837efc2d51c153150d3026ebd67a085b" Namespace="calico-system" Pod="csi-node-driver-4rchx" WorkloadEndpoint="localhost-k8s-csi--node--driver--4rchx-eth0" Mar 4 01:11:40.116912 containerd[1585]: 2026-03-04 01:11:39.839 [INFO][4140] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8700c2f953304e03b7cced1e45b0b36d837efc2d51c153150d3026ebd67a085b" HandleID="k8s-pod-network.8700c2f953304e03b7cced1e45b0b36d837efc2d51c153150d3026ebd67a085b" Workload="localhost-k8s-csi--node--driver--4rchx-eth0" Mar 4 01:11:40.116912 containerd[1585]: 2026-03-04 01:11:39.849 [INFO][4140] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="8700c2f953304e03b7cced1e45b0b36d837efc2d51c153150d3026ebd67a085b" HandleID="k8s-pod-network.8700c2f953304e03b7cced1e45b0b36d837efc2d51c153150d3026ebd67a085b" Workload="localhost-k8s-csi--node--driver--4rchx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00050eea0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-4rchx", "timestamp":"2026-03-04 01:11:39.839769129 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002162c0)} Mar 4 01:11:40.116912 containerd[1585]: 2026-03-04 01:11:39.849 [INFO][4140] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:11:40.116912 containerd[1585]: 2026-03-04 01:11:39.917 [INFO][4140] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 4 01:11:40.116912 containerd[1585]: 2026-03-04 01:11:39.917 [INFO][4140] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 4 01:11:40.116912 containerd[1585]: 2026-03-04 01:11:39.964 [INFO][4140] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.8700c2f953304e03b7cced1e45b0b36d837efc2d51c153150d3026ebd67a085b" host="localhost" Mar 4 01:11:40.116912 containerd[1585]: 2026-03-04 01:11:39.977 [INFO][4140] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 4 01:11:40.116912 containerd[1585]: 2026-03-04 01:11:39.988 [INFO][4140] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 4 01:11:40.116912 containerd[1585]: 2026-03-04 01:11:39.995 [INFO][4140] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 4 01:11:40.116912 containerd[1585]: 2026-03-04 01:11:40.003 [INFO][4140] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 4 01:11:40.116912 containerd[1585]: 2026-03-04 01:11:40.005 [INFO][4140] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8700c2f953304e03b7cced1e45b0b36d837efc2d51c153150d3026ebd67a085b" host="localhost" Mar 4 01:11:40.116912 containerd[1585]: 2026-03-04 01:11:40.008 [INFO][4140] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.8700c2f953304e03b7cced1e45b0b36d837efc2d51c153150d3026ebd67a085b Mar 4 01:11:40.116912 containerd[1585]: 2026-03-04 01:11:40.020 [INFO][4140] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8700c2f953304e03b7cced1e45b0b36d837efc2d51c153150d3026ebd67a085b" host="localhost" Mar 4 01:11:40.116912 containerd[1585]: 2026-03-04 01:11:40.040 [INFO][4140] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.8700c2f953304e03b7cced1e45b0b36d837efc2d51c153150d3026ebd67a085b" host="localhost" Mar 4 01:11:40.116912 containerd[1585]: 2026-03-04 01:11:40.041 [INFO][4140] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.8700c2f953304e03b7cced1e45b0b36d837efc2d51c153150d3026ebd67a085b" host="localhost" Mar 4 01:11:40.116912 containerd[1585]: 2026-03-04 01:11:40.041 [INFO][4140] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 4 01:11:40.116912 containerd[1585]: 2026-03-04 01:11:40.041 [INFO][4140] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="8700c2f953304e03b7cced1e45b0b36d837efc2d51c153150d3026ebd67a085b" HandleID="k8s-pod-network.8700c2f953304e03b7cced1e45b0b36d837efc2d51c153150d3026ebd67a085b" Workload="localhost-k8s-csi--node--driver--4rchx-eth0" Mar 4 01:11:40.117569 containerd[1585]: 2026-03-04 01:11:40.060 [INFO][4081] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8700c2f953304e03b7cced1e45b0b36d837efc2d51c153150d3026ebd67a085b" Namespace="calico-system" Pod="csi-node-driver-4rchx" WorkloadEndpoint="localhost-k8s-csi--node--driver--4rchx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--4rchx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d7d28883-d73e-4c89-9e3a-693929b745d0", ResourceVersion:"712", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 11, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-4rchx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7d7991bd510", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:11:40.117569 containerd[1585]: 2026-03-04 01:11:40.064 [INFO][4081] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="8700c2f953304e03b7cced1e45b0b36d837efc2d51c153150d3026ebd67a085b" Namespace="calico-system" Pod="csi-node-driver-4rchx" WorkloadEndpoint="localhost-k8s-csi--node--driver--4rchx-eth0" Mar 4 01:11:40.117569 containerd[1585]: 2026-03-04 01:11:40.064 [INFO][4081] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7d7991bd510 ContainerID="8700c2f953304e03b7cced1e45b0b36d837efc2d51c153150d3026ebd67a085b" Namespace="calico-system" Pod="csi-node-driver-4rchx" WorkloadEndpoint="localhost-k8s-csi--node--driver--4rchx-eth0" Mar 4 01:11:40.117569 containerd[1585]: 2026-03-04 01:11:40.084 [INFO][4081] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8700c2f953304e03b7cced1e45b0b36d837efc2d51c153150d3026ebd67a085b" Namespace="calico-system" Pod="csi-node-driver-4rchx" WorkloadEndpoint="localhost-k8s-csi--node--driver--4rchx-eth0" Mar 4 01:11:40.117569 containerd[1585]: 2026-03-04 01:11:40.084 [INFO][4081] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8700c2f953304e03b7cced1e45b0b36d837efc2d51c153150d3026ebd67a085b" Namespace="calico-system" Pod="csi-node-driver-4rchx" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--4rchx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--4rchx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d7d28883-d73e-4c89-9e3a-693929b745d0", ResourceVersion:"712", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 11, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8700c2f953304e03b7cced1e45b0b36d837efc2d51c153150d3026ebd67a085b", Pod:"csi-node-driver-4rchx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7d7991bd510", MAC:"56:56:66:8d:41:02", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:11:40.117569 containerd[1585]: 2026-03-04 01:11:40.107 [INFO][4081] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8700c2f953304e03b7cced1e45b0b36d837efc2d51c153150d3026ebd67a085b" Namespace="calico-system" Pod="csi-node-driver-4rchx" WorkloadEndpoint="localhost-k8s-csi--node--driver--4rchx-eth0" Mar 4 01:11:40.119882 containerd[1585]: time="2026-03-04T01:11:40.119742683Z" level=info msg="StartContainer for \"391d695a4e11b32a9b39f2c60b9d0661dda99c3782e299c4b0d788740b66befc\"" Mar 4 01:11:40.129833 containerd[1585]: time="2026-03-04T01:11:40.129751797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-657b845487-f9ph9,Uid:2af07391-9b64-40af-835a-1d50f76831e1,Namespace:calico-system,Attempt:1,} returns sandbox id \"e28da180a28ab37eeee607cb992edd28d29aa2cee418dd9847808406d6b8c391\"" Mar 4 01:11:40.175581 kubelet[2680]: E0304 01:11:40.175459 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:11:40.181719 containerd[1585]: time="2026-03-04T01:11:40.181648253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lxbhh,Uid:f044584a-0762-48e5-86e5-bdd14b596aa5,Namespace:kube-system,Attempt:0,}" Mar 4 01:11:40.214745 containerd[1585]: time="2026-03-04T01:11:40.211310699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5645dcc89f-zmqfq,Uid:02ad6776-d910-4946-a34e-593710285244,Namespace:calico-system,Attempt:0,}" Mar 4 01:11:40.260592 containerd[1585]: time="2026-03-04T01:11:40.253214339Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:11:40.260592 containerd[1585]: time="2026-03-04T01:11:40.253293686Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:11:40.260592 containerd[1585]: time="2026-03-04T01:11:40.253314134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:11:40.260592 containerd[1585]: time="2026-03-04T01:11:40.253588635Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:11:40.396537 containerd[1585]: time="2026-03-04T01:11:40.384525507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-657b845487-rwtgg,Uid:03c35491-5bc5-4312-a5b8-da3b8fa8bdbc,Namespace:calico-system,Attempt:1,} returns sandbox id \"baa37561fbb02c515db0abcbdd442790c30fde66dec16f63b4a4355ed2d561a1\"" Mar 4 01:11:40.450352 containerd[1585]: time="2026-03-04T01:11:40.449897270Z" level=info msg="StartContainer for \"391d695a4e11b32a9b39f2c60b9d0661dda99c3782e299c4b0d788740b66befc\" returns successfully" Mar 4 01:11:40.480281 kubelet[2680]: I0304 01:11:40.480130 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/c7f256b2-92d4-4175-8d39-716937c904f0-nginx-config\") pod \"whisker-8df47bb96-l27qz\" (UID: \"c7f256b2-92d4-4175-8d39-716937c904f0\") " pod="calico-system/whisker-8df47bb96-l27qz" Mar 4 01:11:40.480281 kubelet[2680]: I0304 01:11:40.480194 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7f256b2-92d4-4175-8d39-716937c904f0-whisker-ca-bundle\") pod \"whisker-8df47bb96-l27qz\" (UID: \"c7f256b2-92d4-4175-8d39-716937c904f0\") " pod="calico-system/whisker-8df47bb96-l27qz" Mar 4 01:11:40.480281 kubelet[2680]: I0304 01:11:40.480225 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knbr4\" (UniqueName: \"kubernetes.io/projected/c7f256b2-92d4-4175-8d39-716937c904f0-kube-api-access-knbr4\") pod \"whisker-8df47bb96-l27qz\" (UID: \"c7f256b2-92d4-4175-8d39-716937c904f0\") " pod="calico-system/whisker-8df47bb96-l27qz" Mar 4 01:11:40.480281 kubelet[2680]: I0304 01:11:40.480242 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c7f256b2-92d4-4175-8d39-716937c904f0-whisker-backend-key-pair\") pod \"whisker-8df47bb96-l27qz\" (UID: \"c7f256b2-92d4-4175-8d39-716937c904f0\") " pod="calico-system/whisker-8df47bb96-l27qz" Mar 4 01:11:40.512210 systemd-resolved[1476]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 4 01:11:40.667911 kubelet[2680]: I0304 01:11:40.667513 2680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5a645f6-4149-4d70-9617-5ae452f094ab" path="/var/lib/kubelet/pods/c5a645f6-4149-4d70-9617-5ae452f094ab/volumes" Mar 4 01:11:40.720744 containerd[1585]: time="2026-03-04T01:11:40.719940477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8df47bb96-l27qz,Uid:c7f256b2-92d4-4175-8d39-716937c904f0,Namespace:calico-system,Attempt:0,}" Mar 4 01:11:40.786745 containerd[1585]: time="2026-03-04T01:11:40.786538504Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4rchx,Uid:d7d28883-d73e-4c89-9e3a-693929b745d0,Namespace:calico-system,Attempt:0,} returns sandbox id \"8700c2f953304e03b7cced1e45b0b36d837efc2d51c153150d3026ebd67a085b\"" Mar 4 01:11:41.035195 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1907889344.mount: Deactivated successfully. Mar 4 01:11:41.059748 systemd-networkd[1249]: cali760d15834aa: Link UP Mar 4 01:11:41.061228 systemd-networkd[1249]: cali760d15834aa: Gained carrier Mar 4 01:11:41.086556 systemd-networkd[1249]: cali94718c5ab39: Gained IPv6LL Mar 4 01:11:41.112164 containerd[1585]: 2026-03-04 01:11:40.536 [ERROR][4463] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 4 01:11:41.112164 containerd[1585]: 2026-03-04 01:11:40.563 [INFO][4463] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5645dcc89f--zmqfq-eth0 calico-kube-controllers-5645dcc89f- calico-system 02ad6776-d910-4946-a34e-593710285244 875 0 2026-03-04 01:11:25 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5645dcc89f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5645dcc89f-zmqfq eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali760d15834aa [] [] }} ContainerID="879c42238b49ebb7739b9e1d9e94717815c0a55993fed859451c0361975e6d54" Namespace="calico-system" Pod="calico-kube-controllers-5645dcc89f-zmqfq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5645dcc89f--zmqfq-" Mar 4 01:11:41.112164 containerd[1585]: 2026-03-04 01:11:40.563 [INFO][4463] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="879c42238b49ebb7739b9e1d9e94717815c0a55993fed859451c0361975e6d54" Namespace="calico-system" Pod="calico-kube-controllers-5645dcc89f-zmqfq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5645dcc89f--zmqfq-eth0" Mar 4 01:11:41.112164 containerd[1585]: 2026-03-04 01:11:40.800 [INFO][4535] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="879c42238b49ebb7739b9e1d9e94717815c0a55993fed859451c0361975e6d54" HandleID="k8s-pod-network.879c42238b49ebb7739b9e1d9e94717815c0a55993fed859451c0361975e6d54" Workload="localhost-k8s-calico--kube--controllers--5645dcc89f--zmqfq-eth0" Mar 4 01:11:41.112164 containerd[1585]: 2026-03-04 01:11:40.839 [INFO][4535] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="879c42238b49ebb7739b9e1d9e94717815c0a55993fed859451c0361975e6d54" HandleID="k8s-pod-network.879c42238b49ebb7739b9e1d9e94717815c0a55993fed859451c0361975e6d54" Workload="localhost-k8s-calico--kube--controllers--5645dcc89f--zmqfq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003abf10), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5645dcc89f-zmqfq", "timestamp":"2026-03-04 01:11:40.800774343 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002e14a0)} 
Mar 4 01:11:41.112164 containerd[1585]: 2026-03-04 01:11:40.839 [INFO][4535] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:11:41.112164 containerd[1585]: 2026-03-04 01:11:40.839 [INFO][4535] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:11:41.112164 containerd[1585]: 2026-03-04 01:11:40.839 [INFO][4535] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 4 01:11:41.112164 containerd[1585]: 2026-03-04 01:11:40.852 [INFO][4535] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.879c42238b49ebb7739b9e1d9e94717815c0a55993fed859451c0361975e6d54" host="localhost" Mar 4 01:11:41.112164 containerd[1585]: 2026-03-04 01:11:40.878 [INFO][4535] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 4 01:11:41.112164 containerd[1585]: 2026-03-04 01:11:40.906 [INFO][4535] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 4 01:11:41.112164 containerd[1585]: 2026-03-04 01:11:40.924 [INFO][4535] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 4 01:11:41.112164 containerd[1585]: 2026-03-04 01:11:40.935 [INFO][4535] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 4 01:11:41.112164 containerd[1585]: 2026-03-04 01:11:40.935 [INFO][4535] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.879c42238b49ebb7739b9e1d9e94717815c0a55993fed859451c0361975e6d54" host="localhost" Mar 4 01:11:41.112164 containerd[1585]: 2026-03-04 01:11:40.947 [INFO][4535] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.879c42238b49ebb7739b9e1d9e94717815c0a55993fed859451c0361975e6d54 Mar 4 01:11:41.112164 containerd[1585]: 2026-03-04 01:11:40.977 [INFO][4535] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.879c42238b49ebb7739b9e1d9e94717815c0a55993fed859451c0361975e6d54" host="localhost" Mar 4 01:11:41.112164 containerd[1585]: 2026-03-04 01:11:40.999 [INFO][4535] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.879c42238b49ebb7739b9e1d9e94717815c0a55993fed859451c0361975e6d54" host="localhost" Mar 4 01:11:41.112164 containerd[1585]: 2026-03-04 01:11:40.999 [INFO][4535] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.879c42238b49ebb7739b9e1d9e94717815c0a55993fed859451c0361975e6d54" host="localhost" Mar 4 01:11:41.112164 containerd[1585]: 2026-03-04 01:11:40.999 [INFO][4535] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
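Every one of these IPAM walks lands on the same block because Calico carves the pod CIDR into host-affine /26 blocks by default: 192.168.88.128/26 spans .128 through .191 (64 addresses), which comfortably covers the .130-.135 assignments made in this section. A quick check of that range with Go's net/netip:

// Verify the bounds of the /26 block that every pod in this section
// draws from, matching the cidr=192.168.88.128/26 lines above.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	p := netip.MustParsePrefix("192.168.88.128/26")
	fmt.Println(p.Addr(), "-", lastAddr(p), "contains .131:",
		p.Contains(netip.MustParseAddr("192.168.88.131")))
	// 192.168.88.128 - 192.168.88.191 contains .131: true
}

// lastAddr returns the highest address in a /26 by setting the 6 host bits.
func lastAddr(p netip.Prefix) netip.Addr {
	a := p.Addr().As4()
	a[3] |= 0x3f // 64-address block: host bits are the low 6 bits
	return netip.AddrFrom4(a)
}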
Mar 4 01:11:41.112164 containerd[1585]: 2026-03-04 01:11:40.999 [INFO][4535] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="879c42238b49ebb7739b9e1d9e94717815c0a55993fed859451c0361975e6d54" HandleID="k8s-pod-network.879c42238b49ebb7739b9e1d9e94717815c0a55993fed859451c0361975e6d54" Workload="localhost-k8s-calico--kube--controllers--5645dcc89f--zmqfq-eth0" Mar 4 01:11:41.114499 containerd[1585]: 2026-03-04 01:11:41.044 [INFO][4463] cni-plugin/k8s.go 418: Populated endpoint ContainerID="879c42238b49ebb7739b9e1d9e94717815c0a55993fed859451c0361975e6d54" Namespace="calico-system" Pod="calico-kube-controllers-5645dcc89f-zmqfq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5645dcc89f--zmqfq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5645dcc89f--zmqfq-eth0", GenerateName:"calico-kube-controllers-5645dcc89f-", Namespace:"calico-system", SelfLink:"", UID:"02ad6776-d910-4946-a34e-593710285244", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 11, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5645dcc89f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5645dcc89f-zmqfq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali760d15834aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:11:41.114499 containerd[1585]: 2026-03-04 01:11:41.051 [INFO][4463] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="879c42238b49ebb7739b9e1d9e94717815c0a55993fed859451c0361975e6d54" Namespace="calico-system" Pod="calico-kube-controllers-5645dcc89f-zmqfq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5645dcc89f--zmqfq-eth0" Mar 4 01:11:41.114499 containerd[1585]: 2026-03-04 01:11:41.051 [INFO][4463] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali760d15834aa ContainerID="879c42238b49ebb7739b9e1d9e94717815c0a55993fed859451c0361975e6d54" Namespace="calico-system" Pod="calico-kube-controllers-5645dcc89f-zmqfq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5645dcc89f--zmqfq-eth0" Mar 4 01:11:41.114499 containerd[1585]: 2026-03-04 01:11:41.063 [INFO][4463] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="879c42238b49ebb7739b9e1d9e94717815c0a55993fed859451c0361975e6d54" Namespace="calico-system" Pod="calico-kube-controllers-5645dcc89f-zmqfq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5645dcc89f--zmqfq-eth0" Mar 4 01:11:41.114499 containerd[1585]: 2026-03-04 01:11:41.064 [INFO][4463] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="879c42238b49ebb7739b9e1d9e94717815c0a55993fed859451c0361975e6d54" Namespace="calico-system" Pod="calico-kube-controllers-5645dcc89f-zmqfq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5645dcc89f--zmqfq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5645dcc89f--zmqfq-eth0", GenerateName:"calico-kube-controllers-5645dcc89f-", Namespace:"calico-system", SelfLink:"", UID:"02ad6776-d910-4946-a34e-593710285244", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 11, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5645dcc89f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"879c42238b49ebb7739b9e1d9e94717815c0a55993fed859451c0361975e6d54", Pod:"calico-kube-controllers-5645dcc89f-zmqfq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali760d15834aa", MAC:"02:a4:ca:23:5a:2a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:11:41.114499 containerd[1585]: 2026-03-04 01:11:41.089 [INFO][4463] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="879c42238b49ebb7739b9e1d9e94717815c0a55993fed859451c0361975e6d54" Namespace="calico-system" Pod="calico-kube-controllers-5645dcc89f-zmqfq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5645dcc89f--zmqfq-eth0" Mar 4 01:11:41.150433 kernel: calico-node[4458]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 4 01:11:41.187351 containerd[1585]: time="2026-03-04T01:11:41.187020200Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:11:41.187351 containerd[1585]: time="2026-03-04T01:11:41.187141436Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:11:41.187351 containerd[1585]: time="2026-03-04T01:11:41.187157717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:11:41.189825 containerd[1585]: time="2026-03-04T01:11:41.187327542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:11:41.202179 kubelet[2680]: E0304 01:11:41.202041 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:11:41.221726 kubelet[2680]: I0304 01:11:41.219972 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-7998p" podStartSLOduration=28.219954314 podStartE2EDuration="28.219954314s" podCreationTimestamp="2026-03-04 01:11:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 01:11:41.218778163 +0000 UTC m=+32.837109635" watchObservedRunningTime="2026-03-04 01:11:41.219954314 +0000 UTC m=+32.838285786" Mar 4 01:11:41.261552 systemd-networkd[1249]: calid7b5e2c7fc4: Link UP Mar 4 01:11:41.267550 systemd-networkd[1249]: calid7b5e2c7fc4: Gained carrier Mar 4 01:11:41.280152 systemd-networkd[1249]: cali6ca0aba22ee: Gained IPv6LL Mar 4 01:11:41.336541 containerd[1585]: 2026-03-04 01:11:40.578 [ERROR][4468] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 4 01:11:41.336541 containerd[1585]: 2026-03-04 01:11:40.622 [INFO][4468] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--lxbhh-eth0 coredns-674b8bbfcf- kube-system f044584a-0762-48e5-86e5-bdd14b596aa5 876 0 2026-03-04 01:11:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-lxbhh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid7b5e2c7fc4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="60a85bbf81510e01bbda69a0de160f3ffdc9f37e7b558a116cb0c685f0d29bf2" Namespace="kube-system" Pod="coredns-674b8bbfcf-lxbhh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lxbhh-" Mar 4 01:11:41.336541 containerd[1585]: 2026-03-04 01:11:40.622 [INFO][4468] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="60a85bbf81510e01bbda69a0de160f3ffdc9f37e7b558a116cb0c685f0d29bf2" Namespace="kube-system" Pod="coredns-674b8bbfcf-lxbhh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lxbhh-eth0" Mar 4 01:11:41.336541 containerd[1585]: 2026-03-04 01:11:41.085 [INFO][4546] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="60a85bbf81510e01bbda69a0de160f3ffdc9f37e7b558a116cb0c685f0d29bf2" HandleID="k8s-pod-network.60a85bbf81510e01bbda69a0de160f3ffdc9f37e7b558a116cb0c685f0d29bf2" Workload="localhost-k8s-coredns--674b8bbfcf--lxbhh-eth0" Mar 4 01:11:41.336541 containerd[1585]: 2026-03-04 01:11:41.108 [INFO][4546] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="60a85bbf81510e01bbda69a0de160f3ffdc9f37e7b558a116cb0c685f0d29bf2" HandleID="k8s-pod-network.60a85bbf81510e01bbda69a0de160f3ffdc9f37e7b558a116cb0c685f0d29bf2" Workload="localhost-k8s-coredns--674b8bbfcf--lxbhh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139570), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-lxbhh", "timestamp":"2026-03-04 01:11:41.085349665 +0000 UTC"}, 
Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0005582c0)} Mar 4 01:11:41.336541 containerd[1585]: 2026-03-04 01:11:41.109 [INFO][4546] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:11:41.336541 containerd[1585]: 2026-03-04 01:11:41.109 [INFO][4546] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:11:41.336541 containerd[1585]: 2026-03-04 01:11:41.109 [INFO][4546] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 4 01:11:41.336541 containerd[1585]: 2026-03-04 01:11:41.114 [INFO][4546] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.60a85bbf81510e01bbda69a0de160f3ffdc9f37e7b558a116cb0c685f0d29bf2" host="localhost" Mar 4 01:11:41.336541 containerd[1585]: 2026-03-04 01:11:41.125 [INFO][4546] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 4 01:11:41.336541 containerd[1585]: 2026-03-04 01:11:41.135 [INFO][4546] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 4 01:11:41.336541 containerd[1585]: 2026-03-04 01:11:41.138 [INFO][4546] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 4 01:11:41.336541 containerd[1585]: 2026-03-04 01:11:41.143 [INFO][4546] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 4 01:11:41.336541 containerd[1585]: 2026-03-04 01:11:41.144 [INFO][4546] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.60a85bbf81510e01bbda69a0de160f3ffdc9f37e7b558a116cb0c685f0d29bf2" host="localhost" Mar 4 01:11:41.336541 containerd[1585]: 2026-03-04 01:11:41.152 [INFO][4546] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.60a85bbf81510e01bbda69a0de160f3ffdc9f37e7b558a116cb0c685f0d29bf2 Mar 4 01:11:41.336541 containerd[1585]: 2026-03-04 01:11:41.157 [INFO][4546] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.60a85bbf81510e01bbda69a0de160f3ffdc9f37e7b558a116cb0c685f0d29bf2" host="localhost" Mar 4 01:11:41.336541 containerd[1585]: 2026-03-04 01:11:41.172 [INFO][4546] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.60a85bbf81510e01bbda69a0de160f3ffdc9f37e7b558a116cb0c685f0d29bf2" host="localhost" Mar 4 01:11:41.336541 containerd[1585]: 2026-03-04 01:11:41.172 [INFO][4546] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.60a85bbf81510e01bbda69a0de160f3ffdc9f37e7b558a116cb0c685f0d29bf2" host="localhost" Mar 4 01:11:41.336541 containerd[1585]: 2026-03-04 01:11:41.172 [INFO][4546] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 4 01:11:41.336541 containerd[1585]: 2026-03-04 01:11:41.172 [INFO][4546] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="60a85bbf81510e01bbda69a0de160f3ffdc9f37e7b558a116cb0c685f0d29bf2" HandleID="k8s-pod-network.60a85bbf81510e01bbda69a0de160f3ffdc9f37e7b558a116cb0c685f0d29bf2" Workload="localhost-k8s-coredns--674b8bbfcf--lxbhh-eth0" Mar 4 01:11:41.337485 containerd[1585]: 2026-03-04 01:11:41.215 [INFO][4468] cni-plugin/k8s.go 418: Populated endpoint ContainerID="60a85bbf81510e01bbda69a0de160f3ffdc9f37e7b558a116cb0c685f0d29bf2" Namespace="kube-system" Pod="coredns-674b8bbfcf-lxbhh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lxbhh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--lxbhh-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f044584a-0762-48e5-86e5-bdd14b596aa5", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 11, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-lxbhh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid7b5e2c7fc4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:11:41.337485 containerd[1585]: 2026-03-04 01:11:41.216 [INFO][4468] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="60a85bbf81510e01bbda69a0de160f3ffdc9f37e7b558a116cb0c685f0d29bf2" Namespace="kube-system" Pod="coredns-674b8bbfcf-lxbhh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lxbhh-eth0" Mar 4 01:11:41.337485 containerd[1585]: 2026-03-04 01:11:41.216 [INFO][4468] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid7b5e2c7fc4 ContainerID="60a85bbf81510e01bbda69a0de160f3ffdc9f37e7b558a116cb0c685f0d29bf2" Namespace="kube-system" Pod="coredns-674b8bbfcf-lxbhh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lxbhh-eth0" Mar 4 01:11:41.337485 containerd[1585]: 2026-03-04 01:11:41.272 [INFO][4468] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="60a85bbf81510e01bbda69a0de160f3ffdc9f37e7b558a116cb0c685f0d29bf2" Namespace="kube-system" Pod="coredns-674b8bbfcf-lxbhh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lxbhh-eth0" Mar 4 01:11:41.337485 
containerd[1585]: 2026-03-04 01:11:41.277 [INFO][4468] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="60a85bbf81510e01bbda69a0de160f3ffdc9f37e7b558a116cb0c685f0d29bf2" Namespace="kube-system" Pod="coredns-674b8bbfcf-lxbhh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lxbhh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--lxbhh-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f044584a-0762-48e5-86e5-bdd14b596aa5", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 11, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"60a85bbf81510e01bbda69a0de160f3ffdc9f37e7b558a116cb0c685f0d29bf2", Pod:"coredns-674b8bbfcf-lxbhh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid7b5e2c7fc4", MAC:"46:bc:d0:32:c7:31", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:11:41.337485 containerd[1585]: 2026-03-04 01:11:41.318 [INFO][4468] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="60a85bbf81510e01bbda69a0de160f3ffdc9f37e7b558a116cb0c685f0d29bf2" Namespace="kube-system" Pod="coredns-674b8bbfcf-lxbhh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lxbhh-eth0" Mar 4 01:11:41.384371 systemd[1]: run-containerd-runc-k8s.io-879c42238b49ebb7739b9e1d9e94717815c0a55993fed859451c0361975e6d54-runc.He4KDS.mount: Deactivated successfully. 
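The containerd[1585] stanza above traces one Calico CNI ADD end to end: acquire the host-wide IPAM lock, confirm this host's affinity for the 192.168.88.128/26 block, claim the next free address (192.168.88.135), write the block back to persist the claim, release the lock, then populate the WorkloadEndpoint with the host-side veth name and MAC. Below is a minimal Go sketch of that block-affinity assignment step, assuming a simplified in-memory block; the allocationBlock type, autoAssign function, and hostIPAMLock are illustrative stand-ins, not Calico's actual ipam package.

// Sketch of the block-affinity IPAM flow the entries above trace:
// take the host-wide lock, scan the affine /26 for a free address,
// record the claim under the CNI handle ID, release the lock.
package main

import (
	"fmt"
	"net"
	"sync"
)

// allocationBlock is an illustrative stand-in for a Calico IPAM
// block: a /26 CIDR affine to one host plus its claimed addresses.
type allocationBlock struct {
	cidr      net.IPNet
	allocated map[string]string // IP -> handle ID
}

var hostIPAMLock sync.Mutex // stands in for the host-wide IPAM lock

// autoAssign mirrors "Auto-assign 1 ipv4 ... for host 'localhost'":
// the first unclaimed address in the block is recorded under the
// handle ("Writing block in order to claim IPs").
func autoAssign(block *allocationBlock, handle string) (net.IP, error) {
	hostIPAMLock.Lock()         // "Acquired host-wide IPAM lock."
	defer hostIPAMLock.Unlock() // "Released host-wide IPAM lock."

	for ip := block.cidr.IP.Mask(block.cidr.Mask); block.cidr.Contains(ip); ip = nextIP(ip) {
		if _, taken := block.allocated[ip.String()]; !taken {
			block.allocated[ip.String()] = handle
			return ip, nil
		}
	}
	return nil, fmt.Errorf("block %s exhausted", block.cidr.String())
}

// nextIP returns ip + 1, carrying across octets.
func nextIP(ip net.IP) net.IP {
	out := make(net.IP, len(ip))
	copy(out, ip)
	for i := len(out) - 1; i >= 0; i-- {
		out[i]++
		if out[i] != 0 {
			break
		}
	}
	return out
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.88.128/26")
	block := &allocationBlock{cidr: *cidr, allocated: map[string]string{}}
	ip, _ := autoAssign(block, "k8s-pod-network.60a85bbf81510e01bbda69a0de160f3ffdc9f37e7b558a116cb0c685f0d29bf2")
	fmt.Println("assigned", ip)
}

Serializing every assignment behind that single host-wide lock is why, in the whisker pod's request just below, [4612] logs "About to acquire" at 01:11:41.156 but only "Acquired" at 01:11:41.172, the instant the coredns request [4546] releases it.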
Mar 4 01:11:41.409251 systemd-networkd[1249]: cali7d7991bd510: Gained IPv6LL Mar 4 01:11:41.409589 systemd-networkd[1249]: cali43b8f6a1afb: Gained IPv6LL Mar 4 01:11:41.446234 systemd-resolved[1476]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 4 01:11:41.460181 systemd-networkd[1249]: calidf18eaa1b6e: Link UP Mar 4 01:11:41.462342 systemd-networkd[1249]: calidf18eaa1b6e: Gained carrier Mar 4 01:11:41.471573 systemd-networkd[1249]: cali03b58da7289: Gained IPv6LL Mar 4 01:11:41.519209 containerd[1585]: 2026-03-04 01:11:41.051 [INFO][4565] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--8df47bb96--l27qz-eth0 whisker-8df47bb96- calico-system c7f256b2-92d4-4175-8d39-716937c904f0 937 0 2026-03-04 01:11:40 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:8df47bb96 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-8df47bb96-l27qz eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calidf18eaa1b6e [] [] }} ContainerID="71006963f66e1b1512f1ca44534e8ffb23ef00ed7815d37aca70ce540436d20c" Namespace="calico-system" Pod="whisker-8df47bb96-l27qz" WorkloadEndpoint="localhost-k8s-whisker--8df47bb96--l27qz-" Mar 4 01:11:41.519209 containerd[1585]: 2026-03-04 01:11:41.051 [INFO][4565] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="71006963f66e1b1512f1ca44534e8ffb23ef00ed7815d37aca70ce540436d20c" Namespace="calico-system" Pod="whisker-8df47bb96-l27qz" WorkloadEndpoint="localhost-k8s-whisker--8df47bb96--l27qz-eth0" Mar 4 01:11:41.519209 containerd[1585]: 2026-03-04 01:11:41.140 [INFO][4612] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="71006963f66e1b1512f1ca44534e8ffb23ef00ed7815d37aca70ce540436d20c" HandleID="k8s-pod-network.71006963f66e1b1512f1ca44534e8ffb23ef00ed7815d37aca70ce540436d20c" Workload="localhost-k8s-whisker--8df47bb96--l27qz-eth0" Mar 4 01:11:41.519209 containerd[1585]: 2026-03-04 01:11:41.156 [INFO][4612] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="71006963f66e1b1512f1ca44534e8ffb23ef00ed7815d37aca70ce540436d20c" HandleID="k8s-pod-network.71006963f66e1b1512f1ca44534e8ffb23ef00ed7815d37aca70ce540436d20c" Workload="localhost-k8s-whisker--8df47bb96--l27qz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ee3d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-8df47bb96-l27qz", "timestamp":"2026-03-04 01:11:41.140690508 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000350580)} Mar 4 01:11:41.519209 containerd[1585]: 2026-03-04 01:11:41.156 [INFO][4612] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:11:41.519209 containerd[1585]: 2026-03-04 01:11:41.172 [INFO][4612] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 4 01:11:41.519209 containerd[1585]: 2026-03-04 01:11:41.172 [INFO][4612] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 4 01:11:41.519209 containerd[1585]: 2026-03-04 01:11:41.231 [INFO][4612] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.71006963f66e1b1512f1ca44534e8ffb23ef00ed7815d37aca70ce540436d20c" host="localhost" Mar 4 01:11:41.519209 containerd[1585]: 2026-03-04 01:11:41.296 [INFO][4612] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 4 01:11:41.519209 containerd[1585]: 2026-03-04 01:11:41.372 [INFO][4612] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 4 01:11:41.519209 containerd[1585]: 2026-03-04 01:11:41.400 [INFO][4612] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 4 01:11:41.519209 containerd[1585]: 2026-03-04 01:11:41.414 [INFO][4612] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 4 01:11:41.519209 containerd[1585]: 2026-03-04 01:11:41.415 [INFO][4612] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.71006963f66e1b1512f1ca44534e8ffb23ef00ed7815d37aca70ce540436d20c" host="localhost" Mar 4 01:11:41.519209 containerd[1585]: 2026-03-04 01:11:41.419 [INFO][4612] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.71006963f66e1b1512f1ca44534e8ffb23ef00ed7815d37aca70ce540436d20c Mar 4 01:11:41.519209 containerd[1585]: 2026-03-04 01:11:41.427 [INFO][4612] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.71006963f66e1b1512f1ca44534e8ffb23ef00ed7815d37aca70ce540436d20c" host="localhost" Mar 4 01:11:41.519209 containerd[1585]: 2026-03-04 01:11:41.445 [INFO][4612] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.71006963f66e1b1512f1ca44534e8ffb23ef00ed7815d37aca70ce540436d20c" host="localhost" Mar 4 01:11:41.519209 containerd[1585]: 2026-03-04 01:11:41.445 [INFO][4612] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.71006963f66e1b1512f1ca44534e8ffb23ef00ed7815d37aca70ce540436d20c" host="localhost" Mar 4 01:11:41.519209 containerd[1585]: 2026-03-04 01:11:41.445 [INFO][4612] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 4 01:11:41.519209 containerd[1585]: 2026-03-04 01:11:41.445 [INFO][4612] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="71006963f66e1b1512f1ca44534e8ffb23ef00ed7815d37aca70ce540436d20c" HandleID="k8s-pod-network.71006963f66e1b1512f1ca44534e8ffb23ef00ed7815d37aca70ce540436d20c" Workload="localhost-k8s-whisker--8df47bb96--l27qz-eth0" Mar 4 01:11:41.526346 containerd[1585]: 2026-03-04 01:11:41.450 [INFO][4565] cni-plugin/k8s.go 418: Populated endpoint ContainerID="71006963f66e1b1512f1ca44534e8ffb23ef00ed7815d37aca70ce540436d20c" Namespace="calico-system" Pod="whisker-8df47bb96-l27qz" WorkloadEndpoint="localhost-k8s-whisker--8df47bb96--l27qz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--8df47bb96--l27qz-eth0", GenerateName:"whisker-8df47bb96-", Namespace:"calico-system", SelfLink:"", UID:"c7f256b2-92d4-4175-8d39-716937c904f0", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 11, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8df47bb96", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-8df47bb96-l27qz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calidf18eaa1b6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:11:41.526346 containerd[1585]: 2026-03-04 01:11:41.450 [INFO][4565] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="71006963f66e1b1512f1ca44534e8ffb23ef00ed7815d37aca70ce540436d20c" Namespace="calico-system" Pod="whisker-8df47bb96-l27qz" WorkloadEndpoint="localhost-k8s-whisker--8df47bb96--l27qz-eth0" Mar 4 01:11:41.526346 containerd[1585]: 2026-03-04 01:11:41.450 [INFO][4565] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidf18eaa1b6e ContainerID="71006963f66e1b1512f1ca44534e8ffb23ef00ed7815d37aca70ce540436d20c" Namespace="calico-system" Pod="whisker-8df47bb96-l27qz" WorkloadEndpoint="localhost-k8s-whisker--8df47bb96--l27qz-eth0" Mar 4 01:11:41.526346 containerd[1585]: 2026-03-04 01:11:41.462 [INFO][4565] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="71006963f66e1b1512f1ca44534e8ffb23ef00ed7815d37aca70ce540436d20c" Namespace="calico-system" Pod="whisker-8df47bb96-l27qz" WorkloadEndpoint="localhost-k8s-whisker--8df47bb96--l27qz-eth0" Mar 4 01:11:41.526346 containerd[1585]: 2026-03-04 01:11:41.463 [INFO][4565] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="71006963f66e1b1512f1ca44534e8ffb23ef00ed7815d37aca70ce540436d20c" Namespace="calico-system" Pod="whisker-8df47bb96-l27qz" WorkloadEndpoint="localhost-k8s-whisker--8df47bb96--l27qz-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--8df47bb96--l27qz-eth0", GenerateName:"whisker-8df47bb96-", Namespace:"calico-system", SelfLink:"", UID:"c7f256b2-92d4-4175-8d39-716937c904f0", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 11, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8df47bb96", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"71006963f66e1b1512f1ca44534e8ffb23ef00ed7815d37aca70ce540436d20c", Pod:"whisker-8df47bb96-l27qz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calidf18eaa1b6e", MAC:"ee:a8:d9:40:b9:5b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:11:41.526346 containerd[1585]: 2026-03-04 01:11:41.482 [INFO][4565] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="71006963f66e1b1512f1ca44534e8ffb23ef00ed7815d37aca70ce540436d20c" Namespace="calico-system" Pod="whisker-8df47bb96-l27qz" WorkloadEndpoint="localhost-k8s-whisker--8df47bb96--l27qz-eth0" Mar 4 01:11:41.543715 containerd[1585]: time="2026-03-04T01:11:41.540720272Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:11:41.543715 containerd[1585]: time="2026-03-04T01:11:41.540798338Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:11:41.543715 containerd[1585]: time="2026-03-04T01:11:41.540820519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:11:41.543715 containerd[1585]: time="2026-03-04T01:11:41.540954619Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:11:41.583316 containerd[1585]: time="2026-03-04T01:11:41.582935201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5645dcc89f-zmqfq,Uid:02ad6776-d910-4946-a34e-593710285244,Namespace:calico-system,Attempt:0,} returns sandbox id \"879c42238b49ebb7739b9e1d9e94717815c0a55993fed859451c0361975e6d54\"" Mar 4 01:11:41.640781 systemd-resolved[1476]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 4 01:11:41.668559 containerd[1585]: time="2026-03-04T01:11:41.668314335Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:11:41.668559 containerd[1585]: time="2026-03-04T01:11:41.668410685Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:11:41.669182 containerd[1585]: time="2026-03-04T01:11:41.668486286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:11:41.669182 containerd[1585]: time="2026-03-04T01:11:41.668658937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:11:41.776163 containerd[1585]: time="2026-03-04T01:11:41.774476550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lxbhh,Uid:f044584a-0762-48e5-86e5-bdd14b596aa5,Namespace:kube-system,Attempt:0,} returns sandbox id \"60a85bbf81510e01bbda69a0de160f3ffdc9f37e7b558a116cb0c685f0d29bf2\"" Mar 4 01:11:41.776346 kubelet[2680]: E0304 01:11:41.775447 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:11:41.792382 containerd[1585]: time="2026-03-04T01:11:41.790536832Z" level=info msg="CreateContainer within sandbox \"60a85bbf81510e01bbda69a0de160f3ffdc9f37e7b558a116cb0c685f0d29bf2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 4 01:11:41.821290 systemd-resolved[1476]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 4 01:11:41.842921 containerd[1585]: time="2026-03-04T01:11:41.842837155Z" level=info msg="CreateContainer within sandbox \"60a85bbf81510e01bbda69a0de160f3ffdc9f37e7b558a116cb0c685f0d29bf2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dc1779cb487984ef731f4cbe5bd000457917cda712224389e4ea57dfc4058e1a\"" Mar 4 01:11:41.844254 containerd[1585]: time="2026-03-04T01:11:41.844191593Z" level=info msg="StartContainer for \"dc1779cb487984ef731f4cbe5bd000457917cda712224389e4ea57dfc4058e1a\"" Mar 4 01:11:41.857557 systemd-resolved[1476]: Under memory pressure, flushing caches. Mar 4 01:11:41.858344 systemd-journald[1165]: Under memory pressure, flushing caches. Mar 4 01:11:41.857603 systemd-resolved[1476]: Flushed all caches. 
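The kubelet dns.go:153 errors that recur through this window come from kubelet capping a pod's resolv.conf at three nameservers (the classic glibc limit); the applied line "1.1.1.1 1.0.0.1 8.8.8.8" is simply the first three entries of a longer host list. A sketch of that truncation follows; the constant mirrors kubelet's limit, and the fourth server in the example is hypothetical, since the log does not say which entries were dropped.

// Sketch of the nameserver cap behind the recurring
// "Nameserver limits exceeded" kubelet errors above.
package main

import "fmt"

const maxDNSNameservers = 3 // kubelet keeps the first three entries

// applyNameserverLimit truncates the host's resolver list the way
// kubelet does before writing the pod resolv.conf; the function
// name is illustrative, not kubelet's internal API.
func applyNameserverLimit(servers []string) (kept, omitted []string) {
	if len(servers) <= maxDNSNameservers {
		return servers, nil
	}
	return servers[:maxDNSNameservers], servers[maxDNSNameservers:]
}

func main() {
	kept, omitted := applyNameserverLimit(
		[]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"}) // 4th is hypothetical
	fmt.Println("applied nameserver line:", kept) // matches the logged line
	fmt.Println("omitted:", omitted)
}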
Mar 4 01:11:41.912297 containerd[1585]: time="2026-03-04T01:11:41.910576070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8df47bb96-l27qz,Uid:c7f256b2-92d4-4175-8d39-716937c904f0,Namespace:calico-system,Attempt:0,} returns sandbox id \"71006963f66e1b1512f1ca44534e8ffb23ef00ed7815d37aca70ce540436d20c\"" Mar 4 01:11:41.987729 containerd[1585]: time="2026-03-04T01:11:41.987571442Z" level=info msg="StartContainer for \"dc1779cb487984ef731f4cbe5bd000457917cda712224389e4ea57dfc4058e1a\" returns successfully" Mar 4 01:11:42.240288 kubelet[2680]: E0304 01:11:42.234470 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:11:42.241403 kubelet[2680]: E0304 01:11:42.240918 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:11:42.261051 systemd-networkd[1249]: vxlan.calico: Link UP Mar 4 01:11:42.262374 systemd-networkd[1249]: vxlan.calico: Gained carrier Mar 4 01:11:42.274788 kubelet[2680]: I0304 01:11:42.274739 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-lxbhh" podStartSLOduration=29.274722013 podStartE2EDuration="29.274722013s" podCreationTimestamp="2026-03-04 01:11:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 01:11:42.270482875 +0000 UTC m=+33.888814367" watchObservedRunningTime="2026-03-04 01:11:42.274722013 +0000 UTC m=+33.893053486" Mar 4 01:11:42.302983 systemd-networkd[1249]: calid7b5e2c7fc4: Gained IPv6LL Mar 4 01:11:42.304695 systemd-networkd[1249]: cali760d15834aa: Gained IPv6LL Mar 4 01:11:42.546074 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1704415081.mount: Deactivated successfully. 
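The pod_startup_latency_tracker entries report two figures per pod: podStartE2EDuration, which is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration, which additionally subtracts time spent pulling images (for the coredns pods both pull timestamps are the zero value, so the two figures coincide). The arithmetic can be checked against the calico-apiserver-657b845487-rwtgg entry further below: 21.274319743s end to end minus a 4.433344789s pull window gives the reported 16.840974954s. A sketch of that computation, using the timestamps from that entry; the variable names are illustrative, not kubelet's internal struct fields.

// Sketch of the podStartSLOduration arithmetic, checked against
// the calico-apiserver-657b845487-rwtgg latency entry below.
package main

import (
	"fmt"
	"time"
)

func main() {
	parse := func(s string) time.Time {
		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}

	created := parse("2026-03-04 01:11:24 +0000 UTC")                // podCreationTimestamp
	pullStart := parse("2026-03-04 01:11:40.403727928 +0000 UTC")    // firstStartedPulling
	pullEnd := parse("2026-03-04 01:11:44.837072717 +0000 UTC")      // lastFinishedPulling
	observed := parse("2026-03-04 01:11:45.274319743 +0000 UTC")     // watchObservedRunningTime

	e2e := observed.Sub(created)        // 21.274319743s
	slo := e2e - pullEnd.Sub(pullStart) // excludes the image-pull window
	fmt.Println("podStartE2EDuration:", e2e)
	fmt.Println("podStartSLOduration:", slo) // 16.840974954s, as logged
}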
Mar 4 01:11:42.878313 systemd-networkd[1249]: calidf18eaa1b6e: Gained IPv6LL Mar 4 01:11:43.045422 containerd[1585]: time="2026-03-04T01:11:43.045362777Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:11:43.046453 containerd[1585]: time="2026-03-04T01:11:43.046389050Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Mar 4 01:11:43.047836 containerd[1585]: time="2026-03-04T01:11:43.047716412Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:11:43.050687 containerd[1585]: time="2026-03-04T01:11:43.050644432Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:11:43.051323 containerd[1585]: time="2026-03-04T01:11:43.051289521Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 3.247833346s" Mar 4 01:11:43.051366 containerd[1585]: time="2026-03-04T01:11:43.051328162Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Mar 4 01:11:43.052617 containerd[1585]: time="2026-03-04T01:11:43.052587781Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 4 01:11:43.056623 containerd[1585]: time="2026-03-04T01:11:43.056556110Z" level=info msg="CreateContainer within sandbox \"bb82f90e9cbf1b1daecb8b586bc5ef724a8363495496cae8f8f280b219041479\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 4 01:11:43.074351 containerd[1585]: time="2026-03-04T01:11:43.074268591Z" level=info msg="CreateContainer within sandbox \"bb82f90e9cbf1b1daecb8b586bc5ef724a8363495496cae8f8f280b219041479\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"9a6c8e16162e2e0bd16899e7bf41249cf24ea1c5640eab105587907b8fa04e29\"" Mar 4 01:11:43.075156 containerd[1585]: time="2026-03-04T01:11:43.074973040Z" level=info msg="StartContainer for \"9a6c8e16162e2e0bd16899e7bf41249cf24ea1c5640eab105587907b8fa04e29\"" Mar 4 01:11:43.182786 containerd[1585]: time="2026-03-04T01:11:43.182506470Z" level=info msg="StartContainer for \"9a6c8e16162e2e0bd16899e7bf41249cf24ea1c5640eab105587907b8fa04e29\" returns successfully" Mar 4 01:11:43.241073 kubelet[2680]: E0304 01:11:43.240889 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:11:43.518445 systemd-networkd[1249]: vxlan.calico: Gained IPv6LL Mar 4 01:11:44.243973 kubelet[2680]: I0304 01:11:44.243323 2680 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 4 01:11:44.700702 containerd[1585]: time="2026-03-04T01:11:44.700566326Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:11:44.701417 
containerd[1585]: time="2026-03-04T01:11:44.701337969Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Mar 4 01:11:44.702808 containerd[1585]: time="2026-03-04T01:11:44.702769892Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:11:44.706879 containerd[1585]: time="2026-03-04T01:11:44.706813888Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:11:44.708234 containerd[1585]: time="2026-03-04T01:11:44.708182475Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 1.655565771s" Mar 4 01:11:44.708234 containerd[1585]: time="2026-03-04T01:11:44.708232848Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 4 01:11:44.714223 containerd[1585]: time="2026-03-04T01:11:44.713534887Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 4 01:11:44.719684 containerd[1585]: time="2026-03-04T01:11:44.719504535Z" level=info msg="CreateContainer within sandbox \"e28da180a28ab37eeee607cb992edd28d29aa2cee418dd9847808406d6b8c391\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 4 01:11:44.742506 containerd[1585]: time="2026-03-04T01:11:44.742379538Z" level=info msg="CreateContainer within sandbox \"e28da180a28ab37eeee607cb992edd28d29aa2cee418dd9847808406d6b8c391\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6f092a7fd4f96e7db5279cb40db2977638b24adcb942ef6373bc49576f1082da\"" Mar 4 01:11:44.743426 containerd[1585]: time="2026-03-04T01:11:44.743381891Z" level=info msg="StartContainer for \"6f092a7fd4f96e7db5279cb40db2977638b24adcb942ef6373bc49576f1082da\"" Mar 4 01:11:44.831720 containerd[1585]: time="2026-03-04T01:11:44.830813617Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:11:44.832295 containerd[1585]: time="2026-03-04T01:11:44.832207307Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 4 01:11:44.835892 containerd[1585]: time="2026-03-04T01:11:44.835826290Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 121.769359ms" Mar 4 01:11:44.835892 containerd[1585]: time="2026-03-04T01:11:44.835875932Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 4 01:11:44.837610 containerd[1585]: time="2026-03-04T01:11:44.837363784Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 4 01:11:44.844541 containerd[1585]: time="2026-03-04T01:11:44.843868207Z" level=info msg="CreateContainer within sandbox \"baa37561fbb02c515db0abcbdd442790c30fde66dec16f63b4a4355ed2d561a1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 4 01:11:44.870200 containerd[1585]: time="2026-03-04T01:11:44.870059649Z" level=info msg="StartContainer for \"6f092a7fd4f96e7db5279cb40db2977638b24adcb942ef6373bc49576f1082da\" returns successfully" Mar 4 01:11:44.871324 containerd[1585]: time="2026-03-04T01:11:44.870172213Z" level=info msg="CreateContainer within sandbox \"baa37561fbb02c515db0abcbdd442790c30fde66dec16f63b4a4355ed2d561a1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"bea9c0ef832d3c2320056de5a00d000d3f48586941d5da206be816f98803ef25\"" Mar 4 01:11:44.871324 containerd[1585]: time="2026-03-04T01:11:44.871241381Z" level=info msg="StartContainer for \"bea9c0ef832d3c2320056de5a00d000d3f48586941d5da206be816f98803ef25\"" Mar 4 01:11:45.000632 containerd[1585]: time="2026-03-04T01:11:45.000063663Z" level=info msg="StartContainer for \"bea9c0ef832d3c2320056de5a00d000d3f48586941d5da206be816f98803ef25\" returns successfully" Mar 4 01:11:45.277139 kubelet[2680]: I0304 01:11:45.274338 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-657b845487-rwtgg" podStartSLOduration=16.840974954 podStartE2EDuration="21.274319743s" podCreationTimestamp="2026-03-04 01:11:24 +0000 UTC" firstStartedPulling="2026-03-04 01:11:40.403727928 +0000 UTC m=+32.022059400" lastFinishedPulling="2026-03-04 01:11:44.837072717 +0000 UTC m=+36.455404189" observedRunningTime="2026-03-04 01:11:45.272638931 +0000 UTC m=+36.890970413" watchObservedRunningTime="2026-03-04 01:11:45.274319743 +0000 UTC m=+36.892651215" Mar 4 01:11:45.277139 kubelet[2680]: I0304 01:11:45.274443 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-nm8vg" podStartSLOduration=17.024866508 podStartE2EDuration="20.274438915s" podCreationTimestamp="2026-03-04 01:11:25 +0000 UTC" firstStartedPulling="2026-03-04 01:11:39.802885602 +0000 UTC m=+31.421217074" lastFinishedPulling="2026-03-04 01:11:43.052457999 +0000 UTC m=+34.670789481" observedRunningTime="2026-03-04 01:11:43.255180662 +0000 UTC m=+34.873512174" watchObservedRunningTime="2026-03-04 01:11:45.274438915 +0000 UTC m=+36.892770407" Mar 4 01:11:45.549211 containerd[1585]: time="2026-03-04T01:11:45.546475933Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:11:45.551053 containerd[1585]: time="2026-03-04T01:11:45.550947077Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Mar 4 01:11:45.556148 containerd[1585]: time="2026-03-04T01:11:45.554916555Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:11:45.563990 containerd[1585]: time="2026-03-04T01:11:45.563886113Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:11:45.567102 containerd[1585]: time="2026-03-04T01:11:45.566302366Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 728.898216ms" Mar 4 01:11:45.567102 containerd[1585]: time="2026-03-04T01:11:45.566355024Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Mar 4 01:11:45.569372 containerd[1585]: time="2026-03-04T01:11:45.569300775Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 4 01:11:45.576256 containerd[1585]: time="2026-03-04T01:11:45.576139091Z" level=info msg="CreateContainer within sandbox \"8700c2f953304e03b7cced1e45b0b36d837efc2d51c153150d3026ebd67a085b\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 4 01:11:45.608344 containerd[1585]: time="2026-03-04T01:11:45.608290021Z" level=info msg="CreateContainer within sandbox \"8700c2f953304e03b7cced1e45b0b36d837efc2d51c153150d3026ebd67a085b\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"bee982f89e99a248367bbd1970e685080527f078695825af3881fb3ec38bfa7b\"" Mar 4 01:11:45.611136 containerd[1585]: time="2026-03-04T01:11:45.609453954Z" level=info msg="StartContainer for \"bee982f89e99a248367bbd1970e685080527f078695825af3881fb3ec38bfa7b\"" Mar 4 01:11:45.747588 containerd[1585]: time="2026-03-04T01:11:45.747492734Z" level=info msg="StartContainer for \"bee982f89e99a248367bbd1970e685080527f078695825af3881fb3ec38bfa7b\" returns successfully" Mar 4 01:11:46.260828 kubelet[2680]: I0304 01:11:46.260307 2680 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 4 01:11:46.260828 kubelet[2680]: I0304 01:11:46.260334 2680 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 4 01:11:47.444142 containerd[1585]: time="2026-03-04T01:11:47.442824366Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:11:47.444795 containerd[1585]: time="2026-03-04T01:11:47.444552030Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Mar 4 01:11:47.446481 containerd[1585]: time="2026-03-04T01:11:47.446423906Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:11:47.466222 containerd[1585]: time="2026-03-04T01:11:47.466149739Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:11:47.468506 containerd[1585]: time="2026-03-04T01:11:47.468429779Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 1.8990568s" Mar 4 01:11:47.468506 containerd[1585]: time="2026-03-04T01:11:47.468474813Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Mar 4 01:11:47.470711 containerd[1585]: time="2026-03-04T01:11:47.470625511Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 4 01:11:47.489583 containerd[1585]: time="2026-03-04T01:11:47.489539879Z" level=info msg="CreateContainer within sandbox \"879c42238b49ebb7739b9e1d9e94717815c0a55993fed859451c0361975e6d54\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 4 01:11:47.510202 containerd[1585]: time="2026-03-04T01:11:47.510143904Z" level=info msg="CreateContainer within sandbox \"879c42238b49ebb7739b9e1d9e94717815c0a55993fed859451c0361975e6d54\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"b79faa4b4a4c83c2ead029340e7179fdd0355067df81aac9d6398b1e1ba11def\"" Mar 4 01:11:47.511146 containerd[1585]: time="2026-03-04T01:11:47.510990374Z" level=info msg="StartContainer for \"b79faa4b4a4c83c2ead029340e7179fdd0355067df81aac9d6398b1e1ba11def\"" Mar 4 01:11:47.666604 containerd[1585]: time="2026-03-04T01:11:47.666498051Z" level=info msg="StartContainer for \"b79faa4b4a4c83c2ead029340e7179fdd0355067df81aac9d6398b1e1ba11def\" returns successfully" Mar 4 01:11:48.104619 containerd[1585]: time="2026-03-04T01:11:48.104528342Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:11:48.108272 containerd[1585]: time="2026-03-04T01:11:48.108178817Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Mar 4 01:11:48.109201 containerd[1585]: time="2026-03-04T01:11:48.109145354Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:11:48.112730 containerd[1585]: time="2026-03-04T01:11:48.112663178Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:11:48.113893 containerd[1585]: time="2026-03-04T01:11:48.113812455Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 643.114821ms" Mar 4 01:11:48.113893 containerd[1585]: time="2026-03-04T01:11:48.113876715Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Mar 4 01:11:48.115932 containerd[1585]: time="2026-03-04T01:11:48.115804892Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 4 01:11:48.120620 containerd[1585]: time="2026-03-04T01:11:48.120582311Z" level=info msg="CreateContainer within sandbox \"71006963f66e1b1512f1ca44534e8ffb23ef00ed7815d37aca70ce540436d20c\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 4 01:11:48.141609 containerd[1585]: time="2026-03-04T01:11:48.141478956Z" level=info msg="CreateContainer within sandbox 
\"71006963f66e1b1512f1ca44534e8ffb23ef00ed7815d37aca70ce540436d20c\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"46f86a93b20e8cdc145a026db5a03e9b86199d9708b8822913d861425f59550a\"" Mar 4 01:11:48.144515 containerd[1585]: time="2026-03-04T01:11:48.144248363Z" level=info msg="StartContainer for \"46f86a93b20e8cdc145a026db5a03e9b86199d9708b8822913d861425f59550a\"" Mar 4 01:11:48.267329 containerd[1585]: time="2026-03-04T01:11:48.267253658Z" level=info msg="StartContainer for \"46f86a93b20e8cdc145a026db5a03e9b86199d9708b8822913d861425f59550a\" returns successfully" Mar 4 01:11:48.295879 kubelet[2680]: I0304 01:11:48.295641 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-657b845487-f9ph9" podStartSLOduration=19.716689356 podStartE2EDuration="24.295623579s" podCreationTimestamp="2026-03-04 01:11:24 +0000 UTC" firstStartedPulling="2026-03-04 01:11:40.133467278 +0000 UTC m=+31.751798751" lastFinishedPulling="2026-03-04 01:11:44.712401501 +0000 UTC m=+36.330732974" observedRunningTime="2026-03-04 01:11:45.303659408 +0000 UTC m=+36.921990930" watchObservedRunningTime="2026-03-04 01:11:48.295623579 +0000 UTC m=+39.913955061" Mar 4 01:11:48.296643 kubelet[2680]: I0304 01:11:48.295938 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5645dcc89f-zmqfq" podStartSLOduration=17.412255443 podStartE2EDuration="23.295930591s" podCreationTimestamp="2026-03-04 01:11:25 +0000 UTC" firstStartedPulling="2026-03-04 01:11:41.585853218 +0000 UTC m=+33.204184690" lastFinishedPulling="2026-03-04 01:11:47.469528325 +0000 UTC m=+39.087859838" observedRunningTime="2026-03-04 01:11:48.294514047 +0000 UTC m=+39.912845519" watchObservedRunningTime="2026-03-04 01:11:48.295930591 +0000 UTC m=+39.914262063" Mar 4 01:11:48.485062 systemd[1]: run-containerd-runc-k8s.io-b79faa4b4a4c83c2ead029340e7179fdd0355067df81aac9d6398b1e1ba11def-runc.XMd6nt.mount: Deactivated successfully. 
Mar 4 01:11:48.880935 containerd[1585]: time="2026-03-04T01:11:48.880866413Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:11:48.882010 containerd[1585]: time="2026-03-04T01:11:48.881949049Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Mar 4 01:11:48.883264 containerd[1585]: time="2026-03-04T01:11:48.883202177Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:11:48.887455 containerd[1585]: time="2026-03-04T01:11:48.887365650Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:11:48.888508 containerd[1585]: time="2026-03-04T01:11:48.888420109Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 772.558031ms" Mar 4 01:11:48.888508 containerd[1585]: time="2026-03-04T01:11:48.888465203Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Mar 4 01:11:48.889768 containerd[1585]: time="2026-03-04T01:11:48.889671738Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 4 01:11:48.894445 containerd[1585]: time="2026-03-04T01:11:48.893980577Z" level=info msg="CreateContainer within sandbox \"8700c2f953304e03b7cced1e45b0b36d837efc2d51c153150d3026ebd67a085b\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 4 01:11:48.915297 containerd[1585]: time="2026-03-04T01:11:48.914846232Z" level=info msg="CreateContainer within sandbox \"8700c2f953304e03b7cced1e45b0b36d837efc2d51c153150d3026ebd67a085b\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"d33076692df6c9a164391a56b4e3b01827af76bd5c165436d6e5d1abe14ac92b\"" Mar 4 01:11:48.916517 containerd[1585]: time="2026-03-04T01:11:48.916005944Z" level=info msg="StartContainer for \"d33076692df6c9a164391a56b4e3b01827af76bd5c165436d6e5d1abe14ac92b\"" Mar 4 01:11:49.034177 containerd[1585]: time="2026-03-04T01:11:49.033994842Z" level=info msg="StartContainer for \"d33076692df6c9a164391a56b4e3b01827af76bd5c165436d6e5d1abe14ac92b\" returns successfully" Mar 4 01:11:49.055701 kubelet[2680]: I0304 01:11:49.055673 2680 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 4 01:11:49.057764 kubelet[2680]: I0304 01:11:49.057444 2680 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 4 01:11:49.282235 kubelet[2680]: I0304 01:11:49.282133 2680 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 4 01:11:50.039462 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount435648048.mount: Deactivated successfully. Mar 4 01:11:50.081416 containerd[1585]: time="2026-03-04T01:11:50.081207290Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:11:50.084750 containerd[1585]: time="2026-03-04T01:11:50.084655469Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Mar 4 01:11:50.086639 containerd[1585]: time="2026-03-04T01:11:50.086585685Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:11:50.093833 containerd[1585]: time="2026-03-04T01:11:50.093760005Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:11:50.096365 containerd[1585]: time="2026-03-04T01:11:50.095581972Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 1.205846735s" Mar 4 01:11:50.096365 containerd[1585]: time="2026-03-04T01:11:50.095634380Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Mar 4 01:11:50.115521 containerd[1585]: time="2026-03-04T01:11:50.114928502Z" level=info msg="CreateContainer within sandbox \"71006963f66e1b1512f1ca44534e8ffb23ef00ed7815d37aca70ce540436d20c\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 4 01:11:50.144824 containerd[1585]: time="2026-03-04T01:11:50.144722355Z" level=info msg="CreateContainer within sandbox \"71006963f66e1b1512f1ca44534e8ffb23ef00ed7815d37aca70ce540436d20c\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"54fb5244f070da67647ed84f6ee7cf516d7fda8dcdad4635430c55f722c98a1b\"" Mar 4 01:11:50.146349 containerd[1585]: time="2026-03-04T01:11:50.146264333Z" level=info msg="StartContainer for \"54fb5244f070da67647ed84f6ee7cf516d7fda8dcdad4635430c55f722c98a1b\"" Mar 4 01:11:50.295177 containerd[1585]: time="2026-03-04T01:11:50.294547763Z" level=info msg="StartContainer for \"54fb5244f070da67647ed84f6ee7cf516d7fda8dcdad4635430c55f722c98a1b\" returns successfully" Mar 4 01:11:51.343236 kubelet[2680]: I0304 01:11:51.340200 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-4rchx" podStartSLOduration=18.248767429 podStartE2EDuration="26.340174857s" podCreationTimestamp="2026-03-04 01:11:25 +0000 UTC" firstStartedPulling="2026-03-04 01:11:40.798119786 +0000 UTC m=+32.416451258" lastFinishedPulling="2026-03-04 01:11:48.889527214 +0000 UTC m=+40.507858686" observedRunningTime="2026-03-04 01:11:49.304261569 +0000 UTC m=+40.922593061" watchObservedRunningTime="2026-03-04 01:11:51.340174857 +0000 UTC m=+42.958506389" Mar 4 01:11:51.477907 kubelet[2680]: I0304 01:11:51.477868 2680 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 4 
01:11:51.588594 kubelet[2680]: I0304 01:11:51.586580 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-8df47bb96-l27qz" podStartSLOduration=3.405456627 podStartE2EDuration="11.586561773s" podCreationTimestamp="2026-03-04 01:11:40 +0000 UTC" firstStartedPulling="2026-03-04 01:11:41.916541565 +0000 UTC m=+33.534873048" lastFinishedPulling="2026-03-04 01:11:50.097646722 +0000 UTC m=+41.715978194" observedRunningTime="2026-03-04 01:11:51.343822379 +0000 UTC m=+42.962153852" watchObservedRunningTime="2026-03-04 01:11:51.586561773 +0000 UTC m=+43.204893245" Mar 4 01:12:01.536586 kubelet[2680]: I0304 01:12:01.535262 2680 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 4 01:12:04.775686 systemd[1]: Started sshd@7-10.0.0.98:22-10.0.0.1:40980.service - OpenSSH per-connection server daemon (10.0.0.1:40980). Mar 4 01:12:04.876133 sshd[5444]: Accepted publickey for core from 10.0.0.1 port 40980 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U Mar 4 01:12:04.879982 sshd[5444]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:12:04.896228 systemd-logind[1558]: New session 8 of user core. Mar 4 01:12:04.901935 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 4 01:12:05.483656 sshd[5444]: pam_unix(sshd:session): session closed for user core Mar 4 01:12:05.489956 systemd[1]: sshd@7-10.0.0.98:22-10.0.0.1:40980.service: Deactivated successfully. Mar 4 01:12:05.496722 systemd-logind[1558]: Session 8 logged out. Waiting for processes to exit. Mar 4 01:12:05.497818 systemd[1]: session-8.scope: Deactivated successfully. Mar 4 01:12:05.500219 systemd-logind[1558]: Removed session 8. Mar 4 01:12:06.478348 kubelet[2680]: I0304 01:12:06.477642 2680 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 4 01:12:08.630762 containerd[1585]: time="2026-03-04T01:12:08.630677894Z" level=info msg="StopPodSandbox for \"763c598760efe60a0a293a2a41a241ccc146db5a098f0881e56a741db0c200b5\"" Mar 4 01:12:08.830254 containerd[1585]: 2026-03-04 01:12:08.727 [WARNING][5475] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="763c598760efe60a0a293a2a41a241ccc146db5a098f0881e56a741db0c200b5" WorkloadEndpoint="localhost-k8s-whisker--64c8545f95--vl8pd-eth0" Mar 4 01:12:08.830254 containerd[1585]: 2026-03-04 01:12:08.727 [INFO][5475] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="763c598760efe60a0a293a2a41a241ccc146db5a098f0881e56a741db0c200b5" Mar 4 01:12:08.830254 containerd[1585]: 2026-03-04 01:12:08.727 [INFO][5475] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="763c598760efe60a0a293a2a41a241ccc146db5a098f0881e56a741db0c200b5" iface="eth0" netns="" Mar 4 01:12:08.830254 containerd[1585]: 2026-03-04 01:12:08.727 [INFO][5475] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="763c598760efe60a0a293a2a41a241ccc146db5a098f0881e56a741db0c200b5" Mar 4 01:12:08.830254 containerd[1585]: 2026-03-04 01:12:08.727 [INFO][5475] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="763c598760efe60a0a293a2a41a241ccc146db5a098f0881e56a741db0c200b5" Mar 4 01:12:08.830254 containerd[1585]: 2026-03-04 01:12:08.793 [INFO][5484] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="763c598760efe60a0a293a2a41a241ccc146db5a098f0881e56a741db0c200b5" HandleID="k8s-pod-network.763c598760efe60a0a293a2a41a241ccc146db5a098f0881e56a741db0c200b5" Workload="localhost-k8s-whisker--64c8545f95--vl8pd-eth0" Mar 4 01:12:08.830254 containerd[1585]: 2026-03-04 01:12:08.794 [INFO][5484] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:12:08.830254 containerd[1585]: 2026-03-04 01:12:08.794 [INFO][5484] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:12:08.830254 containerd[1585]: 2026-03-04 01:12:08.810 [WARNING][5484] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="763c598760efe60a0a293a2a41a241ccc146db5a098f0881e56a741db0c200b5" HandleID="k8s-pod-network.763c598760efe60a0a293a2a41a241ccc146db5a098f0881e56a741db0c200b5" Workload="localhost-k8s-whisker--64c8545f95--vl8pd-eth0" Mar 4 01:12:08.830254 containerd[1585]: 2026-03-04 01:12:08.811 [INFO][5484] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="763c598760efe60a0a293a2a41a241ccc146db5a098f0881e56a741db0c200b5" HandleID="k8s-pod-network.763c598760efe60a0a293a2a41a241ccc146db5a098f0881e56a741db0c200b5" Workload="localhost-k8s-whisker--64c8545f95--vl8pd-eth0" Mar 4 01:12:08.830254 containerd[1585]: 2026-03-04 01:12:08.819 [INFO][5484] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:12:08.830254 containerd[1585]: 2026-03-04 01:12:08.824 [INFO][5475] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="763c598760efe60a0a293a2a41a241ccc146db5a098f0881e56a741db0c200b5" Mar 4 01:12:08.850607 containerd[1585]: time="2026-03-04T01:12:08.850484321Z" level=info msg="TearDown network for sandbox \"763c598760efe60a0a293a2a41a241ccc146db5a098f0881e56a741db0c200b5\" successfully" Mar 4 01:12:08.850607 containerd[1585]: time="2026-03-04T01:12:08.850571523Z" level=info msg="StopPodSandbox for \"763c598760efe60a0a293a2a41a241ccc146db5a098f0881e56a741db0c200b5\" returns successfully" Mar 4 01:12:08.914234 containerd[1585]: time="2026-03-04T01:12:08.913967226Z" level=info msg="RemovePodSandbox for \"763c598760efe60a0a293a2a41a241ccc146db5a098f0881e56a741db0c200b5\"" Mar 4 01:12:08.917227 containerd[1585]: time="2026-03-04T01:12:08.917051758Z" level=info msg="Forcibly stopping sandbox \"763c598760efe60a0a293a2a41a241ccc146db5a098f0881e56a741db0c200b5\"" Mar 4 01:12:09.052639 containerd[1585]: 2026-03-04 01:12:08.980 [WARNING][5501] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="763c598760efe60a0a293a2a41a241ccc146db5a098f0881e56a741db0c200b5" WorkloadEndpoint="localhost-k8s-whisker--64c8545f95--vl8pd-eth0" Mar 4 01:12:09.052639 containerd[1585]: 2026-03-04 01:12:08.980 [INFO][5501] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="763c598760efe60a0a293a2a41a241ccc146db5a098f0881e56a741db0c200b5" Mar 4 01:12:09.052639 containerd[1585]: 2026-03-04 01:12:08.980 [INFO][5501] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="763c598760efe60a0a293a2a41a241ccc146db5a098f0881e56a741db0c200b5" iface="eth0" netns="" Mar 4 01:12:09.052639 containerd[1585]: 2026-03-04 01:12:08.980 [INFO][5501] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="763c598760efe60a0a293a2a41a241ccc146db5a098f0881e56a741db0c200b5" Mar 4 01:12:09.052639 containerd[1585]: 2026-03-04 01:12:08.980 [INFO][5501] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="763c598760efe60a0a293a2a41a241ccc146db5a098f0881e56a741db0c200b5" Mar 4 01:12:09.052639 containerd[1585]: 2026-03-04 01:12:09.025 [INFO][5509] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="763c598760efe60a0a293a2a41a241ccc146db5a098f0881e56a741db0c200b5" HandleID="k8s-pod-network.763c598760efe60a0a293a2a41a241ccc146db5a098f0881e56a741db0c200b5" Workload="localhost-k8s-whisker--64c8545f95--vl8pd-eth0" Mar 4 01:12:09.052639 containerd[1585]: 2026-03-04 01:12:09.025 [INFO][5509] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:12:09.052639 containerd[1585]: 2026-03-04 01:12:09.025 [INFO][5509] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:12:09.052639 containerd[1585]: 2026-03-04 01:12:09.036 [WARNING][5509] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="763c598760efe60a0a293a2a41a241ccc146db5a098f0881e56a741db0c200b5" HandleID="k8s-pod-network.763c598760efe60a0a293a2a41a241ccc146db5a098f0881e56a741db0c200b5" Workload="localhost-k8s-whisker--64c8545f95--vl8pd-eth0" Mar 4 01:12:09.052639 containerd[1585]: 2026-03-04 01:12:09.036 [INFO][5509] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="763c598760efe60a0a293a2a41a241ccc146db5a098f0881e56a741db0c200b5" HandleID="k8s-pod-network.763c598760efe60a0a293a2a41a241ccc146db5a098f0881e56a741db0c200b5" Workload="localhost-k8s-whisker--64c8545f95--vl8pd-eth0" Mar 4 01:12:09.052639 containerd[1585]: 2026-03-04 01:12:09.043 [INFO][5509] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:12:09.052639 containerd[1585]: 2026-03-04 01:12:09.047 [INFO][5501] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="763c598760efe60a0a293a2a41a241ccc146db5a098f0881e56a741db0c200b5" Mar 4 01:12:09.053356 containerd[1585]: time="2026-03-04T01:12:09.052695086Z" level=info msg="TearDown network for sandbox \"763c598760efe60a0a293a2a41a241ccc146db5a098f0881e56a741db0c200b5\" successfully" Mar 4 01:12:09.082785 containerd[1585]: time="2026-03-04T01:12:09.082636936Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"763c598760efe60a0a293a2a41a241ccc146db5a098f0881e56a741db0c200b5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 4 01:12:09.082944 containerd[1585]: time="2026-03-04T01:12:09.082820998Z" level=info msg="RemovePodSandbox \"763c598760efe60a0a293a2a41a241ccc146db5a098f0881e56a741db0c200b5\" returns successfully" Mar 4 01:12:09.092505 containerd[1585]: time="2026-03-04T01:12:09.092245970Z" level=info msg="StopPodSandbox for \"97892f09e7ba7ab5b4860976504c2e7e9b5794038817f5adf8b3e3ec1500860c\"" Mar 4 01:12:09.245331 containerd[1585]: 2026-03-04 01:12:09.168 [WARNING][5527] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="97892f09e7ba7ab5b4860976504c2e7e9b5794038817f5adf8b3e3ec1500860c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--657b845487--f9ph9-eth0", GenerateName:"calico-apiserver-657b845487-", Namespace:"calico-system", SelfLink:"", UID:"2af07391-9b64-40af-835a-1d50f76831e1", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 11, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"657b845487", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e28da180a28ab37eeee607cb992edd28d29aa2cee418dd9847808406d6b8c391", Pod:"calico-apiserver-657b845487-f9ph9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali6ca0aba22ee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:12:09.245331 containerd[1585]: 2026-03-04 01:12:09.169 [INFO][5527] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="97892f09e7ba7ab5b4860976504c2e7e9b5794038817f5adf8b3e3ec1500860c" Mar 4 01:12:09.245331 containerd[1585]: 2026-03-04 01:12:09.169 [INFO][5527] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="97892f09e7ba7ab5b4860976504c2e7e9b5794038817f5adf8b3e3ec1500860c" iface="eth0" netns="" Mar 4 01:12:09.245331 containerd[1585]: 2026-03-04 01:12:09.169 [INFO][5527] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="97892f09e7ba7ab5b4860976504c2e7e9b5794038817f5adf8b3e3ec1500860c" Mar 4 01:12:09.245331 containerd[1585]: 2026-03-04 01:12:09.169 [INFO][5527] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="97892f09e7ba7ab5b4860976504c2e7e9b5794038817f5adf8b3e3ec1500860c" Mar 4 01:12:09.245331 containerd[1585]: 2026-03-04 01:12:09.213 [INFO][5536] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="97892f09e7ba7ab5b4860976504c2e7e9b5794038817f5adf8b3e3ec1500860c" HandleID="k8s-pod-network.97892f09e7ba7ab5b4860976504c2e7e9b5794038817f5adf8b3e3ec1500860c" Workload="localhost-k8s-calico--apiserver--657b845487--f9ph9-eth0" Mar 4 01:12:09.245331 containerd[1585]: 2026-03-04 01:12:09.214 [INFO][5536] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:12:09.245331 containerd[1585]: 2026-03-04 01:12:09.214 [INFO][5536] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:12:09.245331 containerd[1585]: 2026-03-04 01:12:09.227 [WARNING][5536] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="97892f09e7ba7ab5b4860976504c2e7e9b5794038817f5adf8b3e3ec1500860c" HandleID="k8s-pod-network.97892f09e7ba7ab5b4860976504c2e7e9b5794038817f5adf8b3e3ec1500860c" Workload="localhost-k8s-calico--apiserver--657b845487--f9ph9-eth0" Mar 4 01:12:09.245331 containerd[1585]: 2026-03-04 01:12:09.230 [INFO][5536] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="97892f09e7ba7ab5b4860976504c2e7e9b5794038817f5adf8b3e3ec1500860c" HandleID="k8s-pod-network.97892f09e7ba7ab5b4860976504c2e7e9b5794038817f5adf8b3e3ec1500860c" Workload="localhost-k8s-calico--apiserver--657b845487--f9ph9-eth0" Mar 4 01:12:09.245331 containerd[1585]: 2026-03-04 01:12:09.234 [INFO][5536] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:12:09.245331 containerd[1585]: 2026-03-04 01:12:09.240 [INFO][5527] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="97892f09e7ba7ab5b4860976504c2e7e9b5794038817f5adf8b3e3ec1500860c" Mar 4 01:12:09.245331 containerd[1585]: time="2026-03-04T01:12:09.244931857Z" level=info msg="TearDown network for sandbox \"97892f09e7ba7ab5b4860976504c2e7e9b5794038817f5adf8b3e3ec1500860c\" successfully" Mar 4 01:12:09.245331 containerd[1585]: time="2026-03-04T01:12:09.244970488Z" level=info msg="StopPodSandbox for \"97892f09e7ba7ab5b4860976504c2e7e9b5794038817f5adf8b3e3ec1500860c\" returns successfully" Mar 4 01:12:09.247288 containerd[1585]: time="2026-03-04T01:12:09.246560801Z" level=info msg="RemovePodSandbox for \"97892f09e7ba7ab5b4860976504c2e7e9b5794038817f5adf8b3e3ec1500860c\"" Mar 4 01:12:09.247288 containerd[1585]: time="2026-03-04T01:12:09.246659335Z" level=info msg="Forcibly stopping sandbox \"97892f09e7ba7ab5b4860976504c2e7e9b5794038817f5adf8b3e3ec1500860c\"" Mar 4 01:12:09.423130 containerd[1585]: 2026-03-04 01:12:09.333 [WARNING][5553] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="97892f09e7ba7ab5b4860976504c2e7e9b5794038817f5adf8b3e3ec1500860c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--657b845487--f9ph9-eth0", GenerateName:"calico-apiserver-657b845487-", Namespace:"calico-system", SelfLink:"", UID:"2af07391-9b64-40af-835a-1d50f76831e1", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 11, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"657b845487", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e28da180a28ab37eeee607cb992edd28d29aa2cee418dd9847808406d6b8c391", Pod:"calico-apiserver-657b845487-f9ph9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali6ca0aba22ee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:12:09.423130 containerd[1585]: 2026-03-04 01:12:09.333 [INFO][5553] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="97892f09e7ba7ab5b4860976504c2e7e9b5794038817f5adf8b3e3ec1500860c" Mar 4 01:12:09.423130 containerd[1585]: 2026-03-04 01:12:09.333 [INFO][5553] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="97892f09e7ba7ab5b4860976504c2e7e9b5794038817f5adf8b3e3ec1500860c" iface="eth0" netns="" Mar 4 01:12:09.423130 containerd[1585]: 2026-03-04 01:12:09.333 [INFO][5553] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="97892f09e7ba7ab5b4860976504c2e7e9b5794038817f5adf8b3e3ec1500860c" Mar 4 01:12:09.423130 containerd[1585]: 2026-03-04 01:12:09.333 [INFO][5553] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="97892f09e7ba7ab5b4860976504c2e7e9b5794038817f5adf8b3e3ec1500860c" Mar 4 01:12:09.423130 containerd[1585]: 2026-03-04 01:12:09.388 [INFO][5561] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="97892f09e7ba7ab5b4860976504c2e7e9b5794038817f5adf8b3e3ec1500860c" HandleID="k8s-pod-network.97892f09e7ba7ab5b4860976504c2e7e9b5794038817f5adf8b3e3ec1500860c" Workload="localhost-k8s-calico--apiserver--657b845487--f9ph9-eth0" Mar 4 01:12:09.423130 containerd[1585]: 2026-03-04 01:12:09.389 [INFO][5561] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:12:09.423130 containerd[1585]: 2026-03-04 01:12:09.389 [INFO][5561] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:12:09.423130 containerd[1585]: 2026-03-04 01:12:09.406 [WARNING][5561] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="97892f09e7ba7ab5b4860976504c2e7e9b5794038817f5adf8b3e3ec1500860c" HandleID="k8s-pod-network.97892f09e7ba7ab5b4860976504c2e7e9b5794038817f5adf8b3e3ec1500860c" Workload="localhost-k8s-calico--apiserver--657b845487--f9ph9-eth0" Mar 4 01:12:09.423130 containerd[1585]: 2026-03-04 01:12:09.407 [INFO][5561] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="97892f09e7ba7ab5b4860976504c2e7e9b5794038817f5adf8b3e3ec1500860c" HandleID="k8s-pod-network.97892f09e7ba7ab5b4860976504c2e7e9b5794038817f5adf8b3e3ec1500860c" Workload="localhost-k8s-calico--apiserver--657b845487--f9ph9-eth0" Mar 4 01:12:09.423130 containerd[1585]: 2026-03-04 01:12:09.412 [INFO][5561] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:12:09.423130 containerd[1585]: 2026-03-04 01:12:09.417 [INFO][5553] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="97892f09e7ba7ab5b4860976504c2e7e9b5794038817f5adf8b3e3ec1500860c" Mar 4 01:12:09.427544 containerd[1585]: time="2026-03-04T01:12:09.423925655Z" level=info msg="TearDown network for sandbox \"97892f09e7ba7ab5b4860976504c2e7e9b5794038817f5adf8b3e3ec1500860c\" successfully" Mar 4 01:12:09.434314 containerd[1585]: time="2026-03-04T01:12:09.434046346Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"97892f09e7ba7ab5b4860976504c2e7e9b5794038817f5adf8b3e3ec1500860c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 4 01:12:09.434314 containerd[1585]: time="2026-03-04T01:12:09.434215461Z" level=info msg="RemovePodSandbox \"97892f09e7ba7ab5b4860976504c2e7e9b5794038817f5adf8b3e3ec1500860c\" returns successfully" Mar 4 01:12:09.434785 containerd[1585]: time="2026-03-04T01:12:09.434608220Z" level=info msg="StopPodSandbox for \"139f239d0143e298ee4d7ce9e4e561fb3bfed461450675f5c42a0608b3d15680\"" Mar 4 01:12:09.608329 containerd[1585]: 2026-03-04 01:12:09.517 [WARNING][5577] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="139f239d0143e298ee4d7ce9e4e561fb3bfed461450675f5c42a0608b3d15680" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--657b845487--rwtgg-eth0", GenerateName:"calico-apiserver-657b845487-", Namespace:"calico-system", SelfLink:"", UID:"03c35491-5bc5-4312-a5b8-da3b8fa8bdbc", ResourceVersion:"1121", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 11, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"657b845487", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"baa37561fbb02c515db0abcbdd442790c30fde66dec16f63b4a4355ed2d561a1", Pod:"calico-apiserver-657b845487-rwtgg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali94718c5ab39", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:12:09.608329 containerd[1585]: 2026-03-04 01:12:09.517 [INFO][5577] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="139f239d0143e298ee4d7ce9e4e561fb3bfed461450675f5c42a0608b3d15680" Mar 4 01:12:09.608329 containerd[1585]: 2026-03-04 01:12:09.518 [INFO][5577] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="139f239d0143e298ee4d7ce9e4e561fb3bfed461450675f5c42a0608b3d15680" iface="eth0" netns="" Mar 4 01:12:09.608329 containerd[1585]: 2026-03-04 01:12:09.518 [INFO][5577] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="139f239d0143e298ee4d7ce9e4e561fb3bfed461450675f5c42a0608b3d15680" Mar 4 01:12:09.608329 containerd[1585]: 2026-03-04 01:12:09.518 [INFO][5577] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="139f239d0143e298ee4d7ce9e4e561fb3bfed461450675f5c42a0608b3d15680" Mar 4 01:12:09.608329 containerd[1585]: 2026-03-04 01:12:09.567 [INFO][5586] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="139f239d0143e298ee4d7ce9e4e561fb3bfed461450675f5c42a0608b3d15680" HandleID="k8s-pod-network.139f239d0143e298ee4d7ce9e4e561fb3bfed461450675f5c42a0608b3d15680" Workload="localhost-k8s-calico--apiserver--657b845487--rwtgg-eth0" Mar 4 01:12:09.608329 containerd[1585]: 2026-03-04 01:12:09.568 [INFO][5586] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:12:09.608329 containerd[1585]: 2026-03-04 01:12:09.568 [INFO][5586] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:12:09.608329 containerd[1585]: 2026-03-04 01:12:09.581 [WARNING][5586] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="139f239d0143e298ee4d7ce9e4e561fb3bfed461450675f5c42a0608b3d15680" HandleID="k8s-pod-network.139f239d0143e298ee4d7ce9e4e561fb3bfed461450675f5c42a0608b3d15680" Workload="localhost-k8s-calico--apiserver--657b845487--rwtgg-eth0" Mar 4 01:12:09.608329 containerd[1585]: 2026-03-04 01:12:09.581 [INFO][5586] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="139f239d0143e298ee4d7ce9e4e561fb3bfed461450675f5c42a0608b3d15680" HandleID="k8s-pod-network.139f239d0143e298ee4d7ce9e4e561fb3bfed461450675f5c42a0608b3d15680" Workload="localhost-k8s-calico--apiserver--657b845487--rwtgg-eth0" Mar 4 01:12:09.608329 containerd[1585]: 2026-03-04 01:12:09.585 [INFO][5586] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:12:09.608329 containerd[1585]: 2026-03-04 01:12:09.590 [INFO][5577] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="139f239d0143e298ee4d7ce9e4e561fb3bfed461450675f5c42a0608b3d15680" Mar 4 01:12:09.608329 containerd[1585]: time="2026-03-04T01:12:09.608312810Z" level=info msg="TearDown network for sandbox \"139f239d0143e298ee4d7ce9e4e561fb3bfed461450675f5c42a0608b3d15680\" successfully" Mar 4 01:12:09.609061 containerd[1585]: time="2026-03-04T01:12:09.608347014Z" level=info msg="StopPodSandbox for \"139f239d0143e298ee4d7ce9e4e561fb3bfed461450675f5c42a0608b3d15680\" returns successfully" Mar 4 01:12:09.609061 containerd[1585]: time="2026-03-04T01:12:09.608992453Z" level=info msg="RemovePodSandbox for \"139f239d0143e298ee4d7ce9e4e561fb3bfed461450675f5c42a0608b3d15680\"" Mar 4 01:12:09.609061 containerd[1585]: time="2026-03-04T01:12:09.609021196Z" level=info msg="Forcibly stopping sandbox \"139f239d0143e298ee4d7ce9e4e561fb3bfed461450675f5c42a0608b3d15680\"" Mar 4 01:12:09.788932 containerd[1585]: 2026-03-04 01:12:09.706 [WARNING][5603] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="139f239d0143e298ee4d7ce9e4e561fb3bfed461450675f5c42a0608b3d15680" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--657b845487--rwtgg-eth0", GenerateName:"calico-apiserver-657b845487-", Namespace:"calico-system", SelfLink:"", UID:"03c35491-5bc5-4312-a5b8-da3b8fa8bdbc", ResourceVersion:"1121", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 11, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"657b845487", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"baa37561fbb02c515db0abcbdd442790c30fde66dec16f63b4a4355ed2d561a1", Pod:"calico-apiserver-657b845487-rwtgg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali94718c5ab39", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:12:09.788932 containerd[1585]: 2026-03-04 01:12:09.706 [INFO][5603] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="139f239d0143e298ee4d7ce9e4e561fb3bfed461450675f5c42a0608b3d15680" Mar 4 01:12:09.788932 containerd[1585]: 2026-03-04 01:12:09.706 [INFO][5603] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="139f239d0143e298ee4d7ce9e4e561fb3bfed461450675f5c42a0608b3d15680" iface="eth0" netns="" Mar 4 01:12:09.788932 containerd[1585]: 2026-03-04 01:12:09.707 [INFO][5603] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="139f239d0143e298ee4d7ce9e4e561fb3bfed461450675f5c42a0608b3d15680" Mar 4 01:12:09.788932 containerd[1585]: 2026-03-04 01:12:09.707 [INFO][5603] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="139f239d0143e298ee4d7ce9e4e561fb3bfed461450675f5c42a0608b3d15680" Mar 4 01:12:09.788932 containerd[1585]: 2026-03-04 01:12:09.765 [INFO][5611] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="139f239d0143e298ee4d7ce9e4e561fb3bfed461450675f5c42a0608b3d15680" HandleID="k8s-pod-network.139f239d0143e298ee4d7ce9e4e561fb3bfed461450675f5c42a0608b3d15680" Workload="localhost-k8s-calico--apiserver--657b845487--rwtgg-eth0" Mar 4 01:12:09.788932 containerd[1585]: 2026-03-04 01:12:09.765 [INFO][5611] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:12:09.788932 containerd[1585]: 2026-03-04 01:12:09.765 [INFO][5611] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:12:09.788932 containerd[1585]: 2026-03-04 01:12:09.776 [WARNING][5611] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="139f239d0143e298ee4d7ce9e4e561fb3bfed461450675f5c42a0608b3d15680" HandleID="k8s-pod-network.139f239d0143e298ee4d7ce9e4e561fb3bfed461450675f5c42a0608b3d15680" Workload="localhost-k8s-calico--apiserver--657b845487--rwtgg-eth0" Mar 4 01:12:09.788932 containerd[1585]: 2026-03-04 01:12:09.776 [INFO][5611] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="139f239d0143e298ee4d7ce9e4e561fb3bfed461450675f5c42a0608b3d15680" HandleID="k8s-pod-network.139f239d0143e298ee4d7ce9e4e561fb3bfed461450675f5c42a0608b3d15680" Workload="localhost-k8s-calico--apiserver--657b845487--rwtgg-eth0" Mar 4 01:12:09.788932 containerd[1585]: 2026-03-04 01:12:09.780 [INFO][5611] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:12:09.788932 containerd[1585]: 2026-03-04 01:12:09.784 [INFO][5603] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="139f239d0143e298ee4d7ce9e4e561fb3bfed461450675f5c42a0608b3d15680" Mar 4 01:12:09.790173 containerd[1585]: time="2026-03-04T01:12:09.788981568Z" level=info msg="TearDown network for sandbox \"139f239d0143e298ee4d7ce9e4e561fb3bfed461450675f5c42a0608b3d15680\" successfully" Mar 4 01:12:09.797706 containerd[1585]: time="2026-03-04T01:12:09.796919908Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"139f239d0143e298ee4d7ce9e4e561fb3bfed461450675f5c42a0608b3d15680\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 4 01:12:09.797706 containerd[1585]: time="2026-03-04T01:12:09.797073894Z" level=info msg="RemovePodSandbox \"139f239d0143e298ee4d7ce9e4e561fb3bfed461450675f5c42a0608b3d15680\" returns successfully" Mar 4 01:12:10.509947 systemd[1]: Started sshd@8-10.0.0.98:22-10.0.0.1:45376.service - OpenSSH per-connection server daemon (10.0.0.1:45376). Mar 4 01:12:10.604878 sshd[5618]: Accepted publickey for core from 10.0.0.1 port 45376 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U Mar 4 01:12:10.608542 sshd[5618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:12:10.617647 systemd-logind[1558]: New session 9 of user core. Mar 4 01:12:10.623583 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 4 01:12:10.871461 sshd[5618]: pam_unix(sshd:session): session closed for user core Mar 4 01:12:10.877376 systemd[1]: sshd@8-10.0.0.98:22-10.0.0.1:45376.service: Deactivated successfully. Mar 4 01:12:10.883851 systemd-logind[1558]: Session 9 logged out. Waiting for processes to exit. Mar 4 01:12:10.884002 systemd[1]: session-9.scope: Deactivated successfully. Mar 4 01:12:10.886322 systemd-logind[1558]: Removed session 9. Mar 4 01:12:12.123289 kubelet[2680]: I0304 01:12:12.123177 2680 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 4 01:12:15.901374 systemd[1]: Started sshd@9-10.0.0.98:22-10.0.0.1:45380.service - OpenSSH per-connection server daemon (10.0.0.1:45380). Mar 4 01:12:15.960603 sshd[5664]: Accepted publickey for core from 10.0.0.1 port 45380 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U Mar 4 01:12:15.962899 sshd[5664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:12:15.969792 systemd-logind[1558]: New session 10 of user core. Mar 4 01:12:15.975796 systemd[1]: Started session-10.scope - Session 10 of User core. 
Mar 4 01:12:16.154965 sshd[5664]: pam_unix(sshd:session): session closed for user core
Mar 4 01:12:16.160324 systemd[1]: sshd@9-10.0.0.98:22-10.0.0.1:45380.service: Deactivated successfully.
Mar 4 01:12:16.164231 systemd[1]: session-10.scope: Deactivated successfully.
Mar 4 01:12:16.166071 systemd-logind[1558]: Session 10 logged out. Waiting for processes to exit.
Mar 4 01:12:16.168373 systemd-logind[1558]: Removed session 10.
Mar 4 01:12:19.359356 kubelet[2680]: E0304 01:12:19.357724 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:12:21.169607 systemd[1]: Started sshd@10-10.0.0.98:22-10.0.0.1:41818.service - OpenSSH per-connection server daemon (10.0.0.1:41818).
Mar 4 01:12:21.219469 sshd[5681]: Accepted publickey for core from 10.0.0.1 port 41818 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U
Mar 4 01:12:21.221883 sshd[5681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:12:21.228881 systemd-logind[1558]: New session 11 of user core.
Mar 4 01:12:21.238585 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 4 01:12:21.421381 sshd[5681]: pam_unix(sshd:session): session closed for user core
Mar 4 01:12:21.424867 systemd[1]: sshd@10-10.0.0.98:22-10.0.0.1:41818.service: Deactivated successfully.
Mar 4 01:12:21.428996 systemd[1]: session-11.scope: Deactivated successfully.
Mar 4 01:12:21.431176 systemd-logind[1558]: Session 11 logged out. Waiting for processes to exit.
Mar 4 01:12:21.432793 systemd-logind[1558]: Removed session 11.
Mar 4 01:12:26.434533 systemd[1]: Started sshd@11-10.0.0.98:22-10.0.0.1:41824.service - OpenSSH per-connection server daemon (10.0.0.1:41824).
Mar 4 01:12:26.496738 sshd[5738]: Accepted publickey for core from 10.0.0.1 port 41824 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U
Mar 4 01:12:26.499894 sshd[5738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:12:26.516145 systemd-logind[1558]: New session 12 of user core.
Mar 4 01:12:26.523565 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 4 01:12:26.718745 sshd[5738]: pam_unix(sshd:session): session closed for user core
Mar 4 01:12:26.723471 systemd[1]: sshd@11-10.0.0.98:22-10.0.0.1:41824.service: Deactivated successfully.
Mar 4 01:12:26.727258 systemd-logind[1558]: Session 12 logged out. Waiting for processes to exit.
Mar 4 01:12:26.727333 systemd[1]: session-12.scope: Deactivated successfully.
Mar 4 01:12:26.729859 systemd-logind[1558]: Removed session 12.
Mar 4 01:12:31.729444 systemd[1]: Started sshd@12-10.0.0.98:22-10.0.0.1:49364.service - OpenSSH per-connection server daemon (10.0.0.1:49364).
Mar 4 01:12:31.778994 sshd[5755]: Accepted publickey for core from 10.0.0.1 port 49364 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U
Mar 4 01:12:31.781181 sshd[5755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:12:31.786402 systemd-logind[1558]: New session 13 of user core.
Mar 4 01:12:31.796724 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 4 01:12:31.909045 systemd[1]: run-containerd-runc-k8s.io-9a6c8e16162e2e0bd16899e7bf41249cf24ea1c5640eab105587907b8fa04e29-runc.OUOikw.mount: Deactivated successfully.
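The recurring kubelet dns.go errors in the surrounding lines ("Nameserver limits exceeded") reflect a well-known constraint: the glibc resolver honors at most three nameserver entries, so kubelet truncates the list it writes into the pod's resolv.conf and reports the servers it kept (here 1.1.1.1, 1.0.0.1, and 8.8.8.8). A standalone sketch of that check follows, assuming /etc/resolv.conf as input; it is illustrative, not kubelet's actual code.

```go
// Parse a resolv.conf and reproduce the kind of warning kubelet logs when
// more than three nameservers are configured.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // classic glibc resolv.conf limit

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// Lines look like: "nameserver 1.1.1.1"
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		kept := servers[:maxNameservers]
		fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %s\n",
			strings.Join(kept, " "))
	}
}
```

On this host the fix would be trimming the node's resolv.conf (or the DNS settings kubelet inherits) to three upstream servers, after which the error stops recurring.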
Mar 4 01:12:31.947054 sshd[5755]: pam_unix(sshd:session): session closed for user core
Mar 4 01:12:31.951412 systemd[1]: sshd@12-10.0.0.98:22-10.0.0.1:49364.service: Deactivated successfully.
Mar 4 01:12:31.955157 systemd-logind[1558]: Session 13 logged out. Waiting for processes to exit.
Mar 4 01:12:31.955441 systemd[1]: session-13.scope: Deactivated successfully.
Mar 4 01:12:31.957909 systemd-logind[1558]: Removed session 13.
Mar 4 01:12:33.624381 kubelet[2680]: E0304 01:12:33.624310 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:12:35.624434 kubelet[2680]: E0304 01:12:35.624280 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:12:36.625526 kubelet[2680]: E0304 01:12:36.625452 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:12:36.962583 systemd[1]: Started sshd@13-10.0.0.98:22-10.0.0.1:49380.service - OpenSSH per-connection server daemon (10.0.0.1:49380).
Mar 4 01:12:37.019666 sshd[5816]: Accepted publickey for core from 10.0.0.1 port 49380 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U
Mar 4 01:12:37.021938 sshd[5816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:12:37.033937 systemd-logind[1558]: New session 14 of user core.
Mar 4 01:12:37.041762 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 4 01:12:37.225778 sshd[5816]: pam_unix(sshd:session): session closed for user core
Mar 4 01:12:37.232275 systemd[1]: sshd@13-10.0.0.98:22-10.0.0.1:49380.service: Deactivated successfully.
Mar 4 01:12:37.235543 systemd-logind[1558]: Session 14 logged out. Waiting for processes to exit.
Mar 4 01:12:37.236387 systemd[1]: session-14.scope: Deactivated successfully.
Mar 4 01:12:37.238218 systemd-logind[1558]: Removed session 14.
Mar 4 01:12:42.253913 systemd[1]: Started sshd@14-10.0.0.98:22-10.0.0.1:48666.service - OpenSSH per-connection server daemon (10.0.0.1:48666).
Mar 4 01:12:42.383968 sshd[5856]: Accepted publickey for core from 10.0.0.1 port 48666 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U
Mar 4 01:12:42.399435 sshd[5856]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:12:42.421456 systemd-logind[1558]: New session 15 of user core.
Mar 4 01:12:42.435884 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 4 01:12:42.921915 sshd[5856]: pam_unix(sshd:session): session closed for user core
Mar 4 01:12:42.930713 systemd[1]: sshd@14-10.0.0.98:22-10.0.0.1:48666.service: Deactivated successfully.
Mar 4 01:12:42.940190 systemd-logind[1558]: Session 15 logged out. Waiting for processes to exit.
Mar 4 01:12:42.940413 systemd[1]: session-15.scope: Deactivated successfully.
Mar 4 01:12:42.944837 systemd-logind[1558]: Removed session 15.
Mar 4 01:12:43.624807 kubelet[2680]: E0304 01:12:43.624675 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:12:47.931546 systemd[1]: Started sshd@15-10.0.0.98:22-10.0.0.1:48672.service - OpenSSH per-connection server daemon (10.0.0.1:48672).
Mar 4 01:12:47.981796 sshd[5891]: Accepted publickey for core from 10.0.0.1 port 48672 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U
Mar 4 01:12:47.984562 sshd[5891]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:12:47.992506 systemd-logind[1558]: New session 16 of user core.
Mar 4 01:12:47.998834 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 4 01:12:48.246350 sshd[5891]: pam_unix(sshd:session): session closed for user core
Mar 4 01:12:48.252597 systemd[1]: Started sshd@16-10.0.0.98:22-10.0.0.1:48688.service - OpenSSH per-connection server daemon (10.0.0.1:48688).
Mar 4 01:12:48.253856 systemd[1]: sshd@15-10.0.0.98:22-10.0.0.1:48672.service: Deactivated successfully.
Mar 4 01:12:48.258654 systemd-logind[1558]: Session 16 logged out. Waiting for processes to exit.
Mar 4 01:12:48.267422 systemd[1]: session-16.scope: Deactivated successfully.
Mar 4 01:12:48.269134 systemd-logind[1558]: Removed session 16.
Mar 4 01:12:48.336339 sshd[5906]: Accepted publickey for core from 10.0.0.1 port 48688 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U
Mar 4 01:12:48.338386 sshd[5906]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:12:48.346956 systemd-logind[1558]: New session 17 of user core.
Mar 4 01:12:48.348943 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 4 01:12:48.585016 sshd[5906]: pam_unix(sshd:session): session closed for user core
Mar 4 01:12:48.600190 systemd[1]: Started sshd@17-10.0.0.98:22-10.0.0.1:48700.service - OpenSSH per-connection server daemon (10.0.0.1:48700).
Mar 4 01:12:48.603328 systemd[1]: sshd@16-10.0.0.98:22-10.0.0.1:48688.service: Deactivated successfully.
Mar 4 01:12:48.617526 systemd[1]: session-17.scope: Deactivated successfully.
Mar 4 01:12:48.620768 systemd-logind[1558]: Session 17 logged out. Waiting for processes to exit.
Mar 4 01:12:48.623919 systemd-logind[1558]: Removed session 17.
Mar 4 01:12:48.658495 sshd[5940]: Accepted publickey for core from 10.0.0.1 port 48700 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U
Mar 4 01:12:48.661213 sshd[5940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:12:48.672004 systemd-logind[1558]: New session 18 of user core.
Mar 4 01:12:48.679674 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 4 01:12:48.833485 sshd[5940]: pam_unix(sshd:session): session closed for user core
Mar 4 01:12:48.837496 systemd[1]: sshd@17-10.0.0.98:22-10.0.0.1:48700.service: Deactivated successfully.
Mar 4 01:12:48.841354 systemd-logind[1558]: Session 18 logged out. Waiting for processes to exit.
Mar 4 01:12:48.841582 systemd[1]: session-18.scope: Deactivated successfully.
Mar 4 01:12:48.844054 systemd-logind[1558]: Removed session 18.
Mar 4 01:12:51.626289 kubelet[2680]: E0304 01:12:51.624357 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:12:53.860670 systemd[1]: Started sshd@18-10.0.0.98:22-10.0.0.1:55844.service - OpenSSH per-connection server daemon (10.0.0.1:55844).
Mar 4 01:12:53.915166 sshd[5981]: Accepted publickey for core from 10.0.0.1 port 55844 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U
Mar 4 01:12:53.918562 sshd[5981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:12:53.930384 systemd-logind[1558]: New session 19 of user core.
Mar 4 01:12:53.940767 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 4 01:12:54.185812 sshd[5981]: pam_unix(sshd:session): session closed for user core
Mar 4 01:12:54.204242 systemd[1]: sshd@18-10.0.0.98:22-10.0.0.1:55844.service: Deactivated successfully.
Mar 4 01:12:54.208793 systemd-logind[1558]: Session 19 logged out. Waiting for processes to exit.
Mar 4 01:12:54.211363 systemd[1]: session-19.scope: Deactivated successfully.
Mar 4 01:12:54.213834 systemd-logind[1558]: Removed session 19.
Mar 4 01:12:59.200717 systemd[1]: Started sshd@19-10.0.0.98:22-10.0.0.1:39496.service - OpenSSH per-connection server daemon (10.0.0.1:39496).
Mar 4 01:12:59.274552 sshd[5996]: Accepted publickey for core from 10.0.0.1 port 39496 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U
Mar 4 01:12:59.277315 sshd[5996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:12:59.284630 systemd-logind[1558]: New session 20 of user core.
Mar 4 01:12:59.295629 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 4 01:12:59.428228 sshd[5996]: pam_unix(sshd:session): session closed for user core
Mar 4 01:12:59.437499 systemd[1]: Started sshd@20-10.0.0.98:22-10.0.0.1:39504.service - OpenSSH per-connection server daemon (10.0.0.1:39504).
Mar 4 01:12:59.438129 systemd[1]: sshd@19-10.0.0.98:22-10.0.0.1:39496.service: Deactivated successfully.
Mar 4 01:12:59.440734 systemd[1]: session-20.scope: Deactivated successfully.
Mar 4 01:12:59.442822 systemd-logind[1558]: Session 20 logged out. Waiting for processes to exit.
Mar 4 01:12:59.444974 systemd-logind[1558]: Removed session 20.
Mar 4 01:12:59.475235 sshd[6010]: Accepted publickey for core from 10.0.0.1 port 39504 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U
Mar 4 01:12:59.476919 sshd[6010]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:12:59.483275 systemd-logind[1558]: New session 21 of user core.
Mar 4 01:12:59.499611 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 4 01:12:59.895304 sshd[6010]: pam_unix(sshd:session): session closed for user core
Mar 4 01:12:59.905577 systemd[1]: Started sshd@21-10.0.0.98:22-10.0.0.1:39510.service - OpenSSH per-connection server daemon (10.0.0.1:39510).
Mar 4 01:12:59.906429 systemd[1]: sshd@20-10.0.0.98:22-10.0.0.1:39504.service: Deactivated successfully.
Mar 4 01:12:59.911438 systemd-logind[1558]: Session 21 logged out. Waiting for processes to exit.
Mar 4 01:12:59.911666 systemd[1]: session-21.scope: Deactivated successfully.
Mar 4 01:12:59.915957 systemd-logind[1558]: Removed session 21.
Mar 4 01:12:59.978653 sshd[6022]: Accepted publickey for core from 10.0.0.1 port 39510 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U
Mar 4 01:12:59.981601 sshd[6022]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:12:59.988058 systemd-logind[1558]: New session 22 of user core.
Mar 4 01:12:59.995280 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 4 01:13:00.770336 sshd[6022]: pam_unix(sshd:session): session closed for user core
Mar 4 01:13:00.774480 systemd[1]: Started sshd@22-10.0.0.98:22-10.0.0.1:39524.service - OpenSSH per-connection server daemon (10.0.0.1:39524).
Mar 4 01:13:00.780876 systemd[1]: sshd@21-10.0.0.98:22-10.0.0.1:39510.service: Deactivated successfully.
Mar 4 01:13:00.797010 systemd[1]: session-22.scope: Deactivated successfully.
Mar 4 01:13:00.802967 systemd-logind[1558]: Session 22 logged out. Waiting for processes to exit.
Mar 4 01:13:00.809282 systemd-logind[1558]: Removed session 22.
Mar 4 01:13:00.845744 sshd[6049]: Accepted publickey for core from 10.0.0.1 port 39524 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U
Mar 4 01:13:00.847873 sshd[6049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:13:00.853044 systemd-logind[1558]: New session 23 of user core.
Mar 4 01:13:00.859486 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 4 01:13:01.320527 sshd[6049]: pam_unix(sshd:session): session closed for user core
Mar 4 01:13:01.337913 systemd[1]: Started sshd@23-10.0.0.98:22-10.0.0.1:39528.service - OpenSSH per-connection server daemon (10.0.0.1:39528).
Mar 4 01:13:01.338907 systemd[1]: sshd@22-10.0.0.98:22-10.0.0.1:39524.service: Deactivated successfully.
Mar 4 01:13:01.342889 systemd[1]: session-23.scope: Deactivated successfully.
Mar 4 01:13:01.346392 systemd-logind[1558]: Session 23 logged out. Waiting for processes to exit.
Mar 4 01:13:01.348209 systemd-logind[1558]: Removed session 23.
Mar 4 01:13:01.395601 sshd[6065]: Accepted publickey for core from 10.0.0.1 port 39528 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U
Mar 4 01:13:01.404479 sshd[6065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:13:01.410536 systemd-logind[1558]: New session 24 of user core.
Mar 4 01:13:01.417530 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 4 01:13:01.574832 sshd[6065]: pam_unix(sshd:session): session closed for user core
Mar 4 01:13:01.579712 systemd[1]: sshd@23-10.0.0.98:22-10.0.0.1:39528.service: Deactivated successfully.
Mar 4 01:13:01.582731 systemd-logind[1558]: Session 24 logged out. Waiting for processes to exit.
Mar 4 01:13:01.582773 systemd[1]: session-24.scope: Deactivated successfully.
Mar 4 01:13:01.584564 systemd-logind[1558]: Removed session 24.
Mar 4 01:13:06.590570 systemd[1]: Started sshd@24-10.0.0.98:22-10.0.0.1:39540.service - OpenSSH per-connection server daemon (10.0.0.1:39540).
Mar 4 01:13:06.629178 sshd[6119]: Accepted publickey for core from 10.0.0.1 port 39540 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U
Mar 4 01:13:06.631637 sshd[6119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:13:06.637895 systemd-logind[1558]: New session 25 of user core.
Mar 4 01:13:06.651918 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 4 01:13:06.823915 sshd[6119]: pam_unix(sshd:session): session closed for user core
Mar 4 01:13:06.829205 systemd[1]: sshd@24-10.0.0.98:22-10.0.0.1:39540.service: Deactivated successfully.
Mar 4 01:13:06.832563 systemd[1]: session-25.scope: Deactivated successfully.
Mar 4 01:13:06.832627 systemd-logind[1558]: Session 25 logged out. Waiting for processes to exit.
Mar 4 01:13:06.834753 systemd-logind[1558]: Removed session 25.
Mar 4 01:13:11.624370 kubelet[2680]: E0304 01:13:11.624260 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:13:11.837549 systemd[1]: Started sshd@25-10.0.0.98:22-10.0.0.1:41148.service - OpenSSH per-connection server daemon (10.0.0.1:41148).
Mar 4 01:13:11.891841 sshd[6160]: Accepted publickey for core from 10.0.0.1 port 41148 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U
Mar 4 01:13:11.895300 sshd[6160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:13:11.902552 systemd-logind[1558]: New session 26 of user core.
Mar 4 01:13:11.918722 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 4 01:13:12.093289 sshd[6160]: pam_unix(sshd:session): session closed for user core
Mar 4 01:13:12.100903 systemd[1]: sshd@25-10.0.0.98:22-10.0.0.1:41148.service: Deactivated successfully.
Mar 4 01:13:12.104380 systemd[1]: session-26.scope: Deactivated successfully.
Mar 4 01:13:12.105720 systemd-logind[1558]: Session 26 logged out. Waiting for processes to exit.
Mar 4 01:13:12.107683 systemd-logind[1558]: Removed session 26.
Mar 4 01:13:17.128590 systemd[1]: Started sshd@26-10.0.0.98:22-10.0.0.1:41160.service - OpenSSH per-connection server daemon (10.0.0.1:41160).
Mar 4 01:13:17.163653 sshd[6206]: Accepted publickey for core from 10.0.0.1 port 41160 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U
Mar 4 01:13:17.165498 sshd[6206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:13:17.171458 systemd-logind[1558]: New session 27 of user core.
Mar 4 01:13:17.181575 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 4 01:13:17.309420 sshd[6206]: pam_unix(sshd:session): session closed for user core
Mar 4 01:13:17.314621 systemd[1]: sshd@26-10.0.0.98:22-10.0.0.1:41160.service: Deactivated successfully.
Mar 4 01:13:17.317895 systemd[1]: session-27.scope: Deactivated successfully.
Mar 4 01:13:17.320138 systemd-logind[1558]: Session 27 logged out. Waiting for processes to exit.
Mar 4 01:13:17.321954 systemd-logind[1558]: Removed session 27.
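Each "Accepted publickey ... RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U" line above identifies the client key by the unpadded base64 SHA-256 digest of its wire-format encoding, the same fingerprint format OpenSSH prints. To match such a log line against a key on disk, golang.org/x/crypto/ssh computes an identical fingerprint; the authorized_keys path below is an assumption, and the module requires golang.org/x/crypto.

```go
// Print the SHA-256 fingerprint of each key in an authorized_keys file,
// in the same "SHA256:..." form sshd logs on "Accepted publickey" lines.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	data, err := os.ReadFile("/home/core/.ssh/authorized_keys") // hypothetical path
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for len(data) > 0 {
		key, comment, _, rest, err := ssh.ParseAuthorizedKey(data)
		if err != nil {
			break // no further parseable keys
		}
		// Prints e.g. "ssh-rsa SHA256:KmpV... core@host", directly
		// comparable to the journal's "Accepted publickey" lines.
		fmt.Printf("%s %s %s\n", key.Type(), ssh.FingerprintSHA256(key), comment)
		data = rest
	}
}
```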