Mar 11 02:23:37.293734 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Mar 10 23:35:49 -00 2026 Mar 11 02:23:37.293763 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e31a968d1cd30cd54d4476ce20b3d9a99d724d392df5e5ae18992ede3943e575 Mar 11 02:23:37.293782 kernel: BIOS-provided physical RAM map: Mar 11 02:23:37.293792 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Mar 11 02:23:37.293800 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Mar 11 02:23:37.293809 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Mar 11 02:23:37.293820 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Mar 11 02:23:37.293957 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Mar 11 02:23:37.294005 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Mar 11 02:23:37.294059 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Mar 11 02:23:37.294071 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Mar 11 02:23:37.294081 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Mar 11 02:23:37.294092 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Mar 11 02:23:37.294102 kernel: NX (Execute Disable) protection: active Mar 11 02:23:37.294111 kernel: APIC: Static calls initialized Mar 11 02:23:37.294127 kernel: SMBIOS 2.8 present. 
Mar 11 02:23:37.294139 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Mar 11 02:23:37.294148 kernel: Hypervisor detected: KVM Mar 11 02:23:37.294157 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Mar 11 02:23:37.294168 kernel: kvm-clock: using sched offset of 9657711963 cycles Mar 11 02:23:37.294179 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Mar 11 02:23:37.294189 kernel: tsc: Detected 2445.426 MHz processor Mar 11 02:23:37.294199 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Mar 11 02:23:37.294211 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Mar 11 02:23:37.294227 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Mar 11 02:23:37.294237 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Mar 11 02:23:37.294248 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Mar 11 02:23:37.294259 kernel: Using GB pages for direct mapping Mar 11 02:23:37.294268 kernel: ACPI: Early table checksum verification disabled Mar 11 02:23:37.294279 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Mar 11 02:23:37.294290 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 11 02:23:37.294299 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Mar 11 02:23:37.294310 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 11 02:23:37.294325 kernel: ACPI: FACS 0x000000009CFE0000 000040 Mar 11 02:23:37.294334 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 11 02:23:37.294345 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 11 02:23:37.294356 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 11 02:23:37.294421 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 11 02:23:37.294432 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Mar 11 02:23:37.294442 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Mar 11 02:23:37.294459 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Mar 11 02:23:37.294474 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Mar 11 02:23:37.294486 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Mar 11 02:23:37.294496 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Mar 11 02:23:37.294507 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Mar 11 02:23:37.294518 kernel: No NUMA configuration found Mar 11 02:23:37.294529 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Mar 11 02:23:37.294541 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Mar 11 02:23:37.294555 kernel: Zone ranges: Mar 11 02:23:37.294567 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Mar 11 02:23:37.294577 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Mar 11 02:23:37.294587 kernel: Normal empty Mar 11 02:23:37.294599 kernel: Movable zone start for each node Mar 11 02:23:37.294610 kernel: Early memory node ranges Mar 11 02:23:37.294621 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Mar 11 02:23:37.294630 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Mar 11 02:23:37.294641 kernel: Initmem setup node 0 [mem 
0x0000000000001000-0x000000009cfdbfff] Mar 11 02:23:37.294656 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 11 02:23:37.294666 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Mar 11 02:23:37.294678 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Mar 11 02:23:37.294689 kernel: ACPI: PM-Timer IO Port: 0x608 Mar 11 02:23:37.294699 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Mar 11 02:23:37.294711 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Mar 11 02:23:37.294721 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Mar 11 02:23:37.294731 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Mar 11 02:23:37.294743 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Mar 11 02:23:37.294757 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Mar 11 02:23:37.294769 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Mar 11 02:23:37.294779 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Mar 11 02:23:37.294791 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Mar 11 02:23:37.294800 kernel: TSC deadline timer available Mar 11 02:23:37.294811 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Mar 11 02:23:37.294822 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Mar 11 02:23:37.294910 kernel: kvm-guest: KVM setup pv remote TLB flush Mar 11 02:23:37.294921 kernel: kvm-guest: setup PV sched yield Mar 11 02:23:37.294938 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Mar 11 02:23:37.294947 kernel: Booting paravirtualized kernel on KVM Mar 11 02:23:37.294959 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Mar 11 02:23:37.294971 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Mar 11 02:23:37.294981 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288 Mar 11 02:23:37.294993 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152 Mar 11 02:23:37.295002 kernel: pcpu-alloc: [0] 0 1 2 3 Mar 11 02:23:37.295013 kernel: kvm-guest: PV spinlocks enabled Mar 11 02:23:37.295024 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Mar 11 02:23:37.295041 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e31a968d1cd30cd54d4476ce20b3d9a99d724d392df5e5ae18992ede3943e575 Mar 11 02:23:37.295053 kernel: random: crng init done Mar 11 02:23:37.295063 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 11 02:23:37.295075 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 11 02:23:37.295089 kernel: Fallback order for Node 0: 0 Mar 11 02:23:37.295100 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 632732 Mar 11 02:23:37.295111 kernel: Policy zone: DMA32 Mar 11 02:23:37.295121 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 11 02:23:37.295137 kernel: Memory: 2434608K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 136884K reserved, 0K cma-reserved) Mar 11 02:23:37.295147 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Mar 11 02:23:37.295159 kernel: ftrace: allocating 37996 entries in 149 pages Mar 11 02:23:37.295168 kernel: ftrace: allocated 149 pages with 4 groups Mar 11 02:23:37.295179 kernel: Dynamic Preempt: voluntary Mar 11 02:23:37.295192 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 11 02:23:37.295202 kernel: rcu: RCU event tracing is enabled. Mar 11 02:23:37.295213 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Mar 11 02:23:37.295225 kernel: Trampoline variant of Tasks RCU enabled. Mar 11 02:23:37.295242 kernel: Rude variant of Tasks RCU enabled. Mar 11 02:23:37.295252 kernel: Tracing variant of Tasks RCU enabled. Mar 11 02:23:37.295263 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Mar 11 02:23:37.295274 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Mar 11 02:23:37.295287 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Mar 11 02:23:37.295298 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Mar 11 02:23:37.295309 kernel: Console: colour VGA+ 80x25 Mar 11 02:23:37.295318 kernel: printk: console [ttyS0] enabled Mar 11 02:23:37.295329 kernel: ACPI: Core revision 20230628 Mar 11 02:23:37.295342 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Mar 11 02:23:37.295357 kernel: APIC: Switch to symmetric I/O mode setup Mar 11 02:23:37.295423 kernel: x2apic enabled Mar 11 02:23:37.295433 kernel: APIC: Switched APIC routing to: physical x2apic Mar 11 02:23:37.295443 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Mar 11 02:23:37.295456 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Mar 11 02:23:37.295466 kernel: kvm-guest: setup PV IPIs Mar 11 02:23:37.295477 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Mar 11 02:23:37.295506 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Mar 11 02:23:37.295518 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426) Mar 11 02:23:37.295531 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Mar 11 02:23:37.295542 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Mar 11 02:23:37.295557 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Mar 11 02:23:37.295569 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Mar 11 02:23:37.295580 kernel: Spectre V2 : Mitigation: Retpolines Mar 11 02:23:37.295592 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Mar 11 02:23:37.295603 kernel: Speculative Store Bypass: Vulnerable Mar 11 02:23:37.295617 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Mar 11 02:23:37.295629 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. 
Mar 11 02:23:37.295640 kernel: active return thunk: srso_alias_return_thunk Mar 11 02:23:37.295652 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Mar 11 02:23:37.295663 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Mar 11 02:23:37.295674 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Mar 11 02:23:37.295685 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Mar 11 02:23:37.295697 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Mar 11 02:23:37.295713 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Mar 11 02:23:37.295725 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Mar 11 02:23:37.295737 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Mar 11 02:23:37.295749 kernel: Freeing SMP alternatives memory: 32K Mar 11 02:23:37.295761 kernel: pid_max: default: 32768 minimum: 301 Mar 11 02:23:37.295773 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 11 02:23:37.295785 kernel: landlock: Up and running. Mar 11 02:23:37.295796 kernel: SELinux: Initializing. Mar 11 02:23:37.295808 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 11 02:23:37.295825 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 11 02:23:37.295941 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Mar 11 02:23:37.295953 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 11 02:23:37.295964 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 11 02:23:37.295976 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 11 02:23:37.295986 kernel: Performance Events: PMU not available due to virtualization, using software events only. Mar 11 02:23:37.295993 kernel: signal: max sigframe size: 1776 Mar 11 02:23:37.296000 kernel: rcu: Hierarchical SRCU implementation. Mar 11 02:23:37.296007 kernel: rcu: Max phase no-delay instances is 400. Mar 11 02:23:37.296019 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Mar 11 02:23:37.296031 kernel: smp: Bringing up secondary CPUs ... Mar 11 02:23:37.296044 kernel: smpboot: x86: Booting SMP configuration: Mar 11 02:23:37.296054 kernel: .... 
node #0, CPUs: #1 #2 #3 Mar 11 02:23:37.296063 kernel: smp: Brought up 1 node, 4 CPUs Mar 11 02:23:37.296075 kernel: smpboot: Max logical packages: 1 Mar 11 02:23:37.296086 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS) Mar 11 02:23:37.296097 kernel: devtmpfs: initialized Mar 11 02:23:37.296109 kernel: x86/mm: Memory block size: 128MB Mar 11 02:23:37.296125 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 11 02:23:37.296138 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Mar 11 02:23:37.296149 kernel: pinctrl core: initialized pinctrl subsystem Mar 11 02:23:37.296161 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 11 02:23:37.296172 kernel: audit: initializing netlink subsys (disabled) Mar 11 02:23:37.296184 kernel: audit: type=2000 audit(1773195813.748:1): state=initialized audit_enabled=0 res=1 Mar 11 02:23:37.296195 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 11 02:23:37.296206 kernel: thermal_sys: Registered thermal governor 'user_space' Mar 11 02:23:37.296218 kernel: cpuidle: using governor menu Mar 11 02:23:37.296235 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 11 02:23:37.296246 kernel: dca service started, version 1.12.1 Mar 11 02:23:37.296255 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Mar 11 02:23:37.296267 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Mar 11 02:23:37.296279 kernel: PCI: Using configuration type 1 for base access Mar 11 02:23:37.296291 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Mar 11 02:23:37.296303 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 11 02:23:37.296314 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Mar 11 02:23:37.296327 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 11 02:23:37.296344 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Mar 11 02:23:37.296355 kernel: ACPI: Added _OSI(Module Device) Mar 11 02:23:37.296422 kernel: ACPI: Added _OSI(Processor Device) Mar 11 02:23:37.296434 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 11 02:23:37.296444 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 11 02:23:37.296457 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Mar 11 02:23:37.296467 kernel: ACPI: Interpreter enabled Mar 11 02:23:37.296478 kernel: ACPI: PM: (supports S0 S3 S5) Mar 11 02:23:37.296490 kernel: ACPI: Using IOAPIC for interrupt routing Mar 11 02:23:37.296506 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Mar 11 02:23:37.296517 kernel: PCI: Using E820 reservations for host bridge windows Mar 11 02:23:37.296528 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Mar 11 02:23:37.296539 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Mar 11 02:23:37.296742 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Mar 11 02:23:37.296997 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Mar 11 02:23:37.297124 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Mar 11 02:23:37.297138 kernel: PCI host bridge to bus 0000:00 Mar 11 02:23:37.297262 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Mar 11 02:23:37.297457 kernel: pci_bus 0000:00: root bus resource 
[io 0x0d00-0xffff window] Mar 11 02:23:37.297595 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Mar 11 02:23:37.297706 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Mar 11 02:23:37.297815 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Mar 11 02:23:37.298037 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Mar 11 02:23:37.298159 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Mar 11 02:23:37.298440 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Mar 11 02:23:37.298648 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Mar 11 02:23:37.298817 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Mar 11 02:23:37.299056 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Mar 11 02:23:37.299180 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Mar 11 02:23:37.299300 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Mar 11 02:23:37.299550 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Mar 11 02:23:37.299681 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Mar 11 02:23:37.299802 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Mar 11 02:23:37.300092 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Mar 11 02:23:37.300276 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Mar 11 02:23:37.300502 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Mar 11 02:23:37.300645 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Mar 11 02:23:37.300773 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Mar 11 02:23:37.301046 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Mar 11 02:23:37.301233 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Mar 11 02:23:37.301477 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Mar 11 02:23:37.301671 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Mar 11 02:23:37.301989 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Mar 11 02:23:37.302193 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Mar 11 02:23:37.302441 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Mar 11 02:23:37.302647 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Mar 11 02:23:37.302929 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Mar 11 02:23:37.303122 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Mar 11 02:23:37.303323 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Mar 11 02:23:37.303564 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Mar 11 02:23:37.303589 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Mar 11 02:23:37.303603 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Mar 11 02:23:37.303615 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Mar 11 02:23:37.303627 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Mar 11 02:23:37.303638 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Mar 11 02:23:37.303650 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Mar 11 02:23:37.303662 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Mar 11 02:23:37.303675 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Mar 11 02:23:37.303685 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Mar 
11 02:23:37.303703 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Mar 11 02:23:37.303715 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Mar 11 02:23:37.303726 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Mar 11 02:23:37.303737 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Mar 11 02:23:37.303749 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Mar 11 02:23:37.303761 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Mar 11 02:23:37.303773 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Mar 11 02:23:37.303783 kernel: iommu: Default domain type: Translated Mar 11 02:23:37.303796 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Mar 11 02:23:37.303813 kernel: PCI: Using ACPI for IRQ routing Mar 11 02:23:37.303823 kernel: PCI: pci_cache_line_size set to 64 bytes Mar 11 02:23:37.303922 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Mar 11 02:23:37.303934 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Mar 11 02:23:37.304122 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Mar 11 02:23:37.304311 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Mar 11 02:23:37.304555 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Mar 11 02:23:37.304575 kernel: vgaarb: loaded Mar 11 02:23:37.304592 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Mar 11 02:23:37.304605 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Mar 11 02:23:37.304617 kernel: clocksource: Switched to clocksource kvm-clock Mar 11 02:23:37.304630 kernel: VFS: Disk quotas dquot_6.6.0 Mar 11 02:23:37.304640 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 11 02:23:37.304651 kernel: pnp: PnP ACPI init Mar 11 02:23:37.304977 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Mar 11 02:23:37.304999 kernel: pnp: PnP ACPI: found 6 devices Mar 11 02:23:37.305018 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Mar 11 02:23:37.305028 kernel: NET: Registered PF_INET protocol family Mar 11 02:23:37.305041 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 11 02:23:37.305053 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Mar 11 02:23:37.305066 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 11 02:23:37.305076 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 11 02:23:37.305092 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Mar 11 02:23:37.305104 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 11 02:23:37.305116 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 11 02:23:37.305131 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 11 02:23:37.305144 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 11 02:23:37.305156 kernel: NET: Registered PF_XDP protocol family Mar 11 02:23:37.305331 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Mar 11 02:23:37.305560 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Mar 11 02:23:37.305737 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Mar 11 02:23:37.306038 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Mar 11 02:23:37.306214 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] 
Mar 11 02:23:37.306685 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Mar 11 02:23:37.306710 kernel: PCI: CLS 0 bytes, default 64 Mar 11 02:23:37.306722 kernel: Initialise system trusted keyrings Mar 11 02:23:37.306735 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 11 02:23:37.306746 kernel: Key type asymmetric registered Mar 11 02:23:37.306759 kernel: Asymmetric key parser 'x509' registered Mar 11 02:23:37.306769 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Mar 11 02:23:37.306782 kernel: io scheduler mq-deadline registered Mar 11 02:23:37.306793 kernel: io scheduler kyber registered Mar 11 02:23:37.306806 kernel: io scheduler bfq registered Mar 11 02:23:37.306822 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 11 02:23:37.306920 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Mar 11 02:23:37.306932 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Mar 11 02:23:37.306945 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Mar 11 02:23:37.306957 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 11 02:23:37.306970 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 11 02:23:37.306979 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Mar 11 02:23:37.306992 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 11 02:23:37.307004 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 11 02:23:37.307205 kernel: rtc_cmos 00:04: RTC can wake from S4 Mar 11 02:23:37.307225 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Mar 11 02:23:37.307458 kernel: rtc_cmos 00:04: registered as rtc0 Mar 11 02:23:37.307640 kernel: rtc_cmos 00:04: setting system clock to 2026-03-11T02:23:36 UTC (1773195816) Mar 11 02:23:37.307817 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Mar 11 02:23:37.307914 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Mar 11 02:23:37.307926 kernel: NET: Registered PF_INET6 protocol family Mar 11 02:23:37.307945 kernel: Segment Routing with IPv6 Mar 11 02:23:37.307957 kernel: In-situ OAM (IOAM) with IPv6 Mar 11 02:23:37.307969 kernel: NET: Registered PF_PACKET protocol family Mar 11 02:23:37.307979 kernel: Key type dns_resolver registered Mar 11 02:23:37.307992 kernel: IPI shorthand broadcast: enabled Mar 11 02:23:37.308004 kernel: sched_clock: Marking stable (1534027415, 1151287246)->(3603956387, -918641726) Mar 11 02:23:37.308016 kernel: registered taskstats version 1 Mar 11 02:23:37.308026 kernel: Loading compiled-in X.509 certificates Mar 11 02:23:37.308039 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 6607fbe6d184c26ff6db73f5ff7c44b69c5a8579' Mar 11 02:23:37.308050 kernel: Key type .fscrypt registered Mar 11 02:23:37.308067 kernel: Key type fscrypt-provisioning registered Mar 11 02:23:37.308079 kernel: ima: No TPM chip found, activating TPM-bypass! 
Mar 11 02:23:37.308091 kernel: ima: Allocated hash algorithm: sha1 Mar 11 02:23:37.308103 kernel: ima: No architecture policies found Mar 11 02:23:37.308115 kernel: clk: Disabling unused clocks Mar 11 02:23:37.308126 kernel: Freeing unused kernel image (initmem) memory: 42892K Mar 11 02:23:37.308139 kernel: Write protecting the kernel read-only data: 36864k Mar 11 02:23:37.308151 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Mar 11 02:23:37.308167 kernel: Run /init as init process Mar 11 02:23:37.308179 kernel: with arguments: Mar 11 02:23:37.308191 kernel: /init Mar 11 02:23:37.308203 kernel: with environment: Mar 11 02:23:37.308213 kernel: HOME=/ Mar 11 02:23:37.308225 kernel: TERM=linux Mar 11 02:23:37.308239 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 11 02:23:37.308254 systemd[1]: Detected virtualization kvm. Mar 11 02:23:37.308272 systemd[1]: Detected architecture x86-64. Mar 11 02:23:37.308285 systemd[1]: Running in initrd. Mar 11 02:23:37.308298 systemd[1]: No hostname configured, using default hostname. Mar 11 02:23:37.308309 systemd[1]: Hostname set to . Mar 11 02:23:37.308323 systemd[1]: Initializing machine ID from VM UUID. Mar 11 02:23:37.308336 systemd[1]: Queued start job for default target initrd.target. Mar 11 02:23:37.308346 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 11 02:23:37.308407 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 11 02:23:37.308431 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 11 02:23:37.308443 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 11 02:23:37.308456 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 11 02:23:37.308469 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 11 02:23:37.308485 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 11 02:23:37.308496 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 11 02:23:37.308509 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 11 02:23:37.308526 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 11 02:23:37.308539 systemd[1]: Reached target paths.target - Path Units. Mar 11 02:23:37.308550 systemd[1]: Reached target slices.target - Slice Units. Mar 11 02:23:37.308564 systemd[1]: Reached target swap.target - Swaps. Mar 11 02:23:37.308594 systemd[1]: Reached target timers.target - Timer Units. Mar 11 02:23:37.308612 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 11 02:23:37.308629 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 11 02:23:37.308641 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 11 02:23:37.308654 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Mar 11 02:23:37.308667 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 11 02:23:37.308680 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 11 02:23:37.308692 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 11 02:23:37.308705 systemd[1]: Reached target sockets.target - Socket Units. Mar 11 02:23:37.308718 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 11 02:23:37.308731 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 11 02:23:37.308749 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 11 02:23:37.308763 systemd[1]: Starting systemd-fsck-usr.service... Mar 11 02:23:37.308776 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 11 02:23:37.308787 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 11 02:23:37.308802 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 11 02:23:37.308815 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 11 02:23:37.308827 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 11 02:23:37.308924 systemd[1]: Finished systemd-fsck-usr.service. Mar 11 02:23:37.308972 systemd-journald[195]: Collecting audit messages is disabled. Mar 11 02:23:37.309006 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 11 02:23:37.309020 systemd-journald[195]: Journal started Mar 11 02:23:37.309046 systemd-journald[195]: Runtime Journal (/run/log/journal/1bb7d9c7c1734b31970cd488a1160c5b) is 6.0M, max 48.4M, 42.3M free. Mar 11 02:23:37.283446 systemd-modules-load[196]: Inserted module 'overlay' Mar 11 02:23:37.568205 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 11 02:23:37.568252 kernel: Bridge firewalling registered Mar 11 02:23:37.568268 systemd[1]: Started systemd-journald.service - Journal Service. Mar 11 02:23:37.320488 systemd-modules-load[196]: Inserted module 'br_netfilter' Mar 11 02:23:37.580972 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 11 02:23:37.590761 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 11 02:23:37.603450 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 11 02:23:37.634190 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 11 02:23:37.641013 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 11 02:23:37.657828 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 11 02:23:37.666046 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 11 02:23:37.675353 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 11 02:23:37.686206 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 11 02:23:37.697182 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 11 02:23:37.707243 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 11 02:23:37.732233 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Mar 11 02:23:37.736317 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 11 02:23:37.750570 dracut-cmdline[230]: dracut-dracut-053 Mar 11 02:23:37.754160 dracut-cmdline[230]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e31a968d1cd30cd54d4476ce20b3d9a99d724d392df5e5ae18992ede3943e575 Mar 11 02:23:37.821068 systemd-resolved[232]: Positive Trust Anchors: Mar 11 02:23:37.821120 systemd-resolved[232]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 11 02:23:37.821162 systemd-resolved[232]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 11 02:23:37.825169 systemd-resolved[232]: Defaulting to hostname 'linux'. Mar 11 02:23:37.826827 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 11 02:23:37.837184 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 11 02:23:37.896980 kernel: SCSI subsystem initialized Mar 11 02:23:37.912933 kernel: Loading iSCSI transport class v2.0-870. Mar 11 02:23:37.931991 kernel: iscsi: registered transport (tcp) Mar 11 02:23:37.965730 kernel: iscsi: registered transport (qla4xxx) Mar 11 02:23:37.965796 kernel: QLogic iSCSI HBA Driver Mar 11 02:23:38.033454 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 11 02:23:38.050241 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 11 02:23:38.090888 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 11 02:23:38.090960 kernel: device-mapper: uevent: version 1.0.3 Mar 11 02:23:38.096889 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 11 02:23:38.153006 kernel: raid6: avx2x4 gen() 33120 MB/s Mar 11 02:23:38.171953 kernel: raid6: avx2x2 gen() 25023 MB/s Mar 11 02:23:38.194167 kernel: raid6: avx2x1 gen() 13290 MB/s Mar 11 02:23:38.194225 kernel: raid6: using algorithm avx2x4 gen() 33120 MB/s Mar 11 02:23:38.216702 kernel: raid6: .... xor() 4393 MB/s, rmw enabled Mar 11 02:23:38.216764 kernel: raid6: using avx2x2 recovery algorithm Mar 11 02:23:38.241983 kernel: xor: automatically using best checksumming function avx Mar 11 02:23:38.427947 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 11 02:23:38.443576 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 11 02:23:38.463094 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 11 02:23:38.484729 systemd-udevd[416]: Using default interface naming scheme 'v255'. Mar 11 02:23:38.492763 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Mar 11 02:23:38.517080 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 11 02:23:38.535581 dracut-pre-trigger[427]: rd.md=0: removing MD RAID activation Mar 11 02:23:38.577329 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 11 02:23:38.603078 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 11 02:23:38.686326 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 11 02:23:38.705070 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 11 02:23:38.728151 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 11 02:23:38.732739 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 11 02:23:38.735645 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 11 02:23:38.742533 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 11 02:23:38.770818 kernel: cryptd: max_cpu_qlen set to 1000 Mar 11 02:23:38.769603 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 11 02:23:38.806006 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Mar 11 02:23:38.805767 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 11 02:23:38.812517 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 11 02:23:38.849740 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Mar 11 02:23:38.850531 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 11 02:23:38.850553 kernel: GPT:9289727 != 19775487 Mar 11 02:23:38.850574 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 11 02:23:38.850590 kernel: GPT:9289727 != 19775487 Mar 11 02:23:38.850605 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 11 02:23:38.850620 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 11 02:23:38.812631 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 11 02:23:38.850085 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 11 02:23:38.851827 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 11 02:23:38.852084 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 11 02:23:38.863810 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 11 02:23:38.889790 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 11 02:23:38.907187 kernel: libata version 3.00 loaded. Mar 11 02:23:38.931001 kernel: ahci 0000:00:1f.2: version 3.0 Mar 11 02:23:38.931304 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 11 02:23:38.938053 kernel: AVX2 version of gcm_enc/dec engaged. 
Mar 11 02:23:38.938088 kernel: AES CTR mode by8 optimization enabled Mar 11 02:23:38.938113 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Mar 11 02:23:38.938491 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 11 02:23:38.954923 kernel: scsi host0: ahci Mar 11 02:23:38.955224 kernel: scsi host1: ahci Mar 11 02:23:38.956030 kernel: scsi host2: ahci Mar 11 02:23:38.956265 kernel: scsi host3: ahci Mar 11 02:23:38.956999 kernel: BTRFS: device fsid 1c1071f5-2e45-4924-9ec8-a67042aa7fbc devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (465) Mar 11 02:23:38.958945 kernel: scsi host4: ahci Mar 11 02:23:38.965998 kernel: scsi host5: ahci Mar 11 02:23:38.966208 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Mar 11 02:23:38.966221 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Mar 11 02:23:38.966232 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Mar 11 02:23:38.966242 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Mar 11 02:23:38.966251 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Mar 11 02:23:38.966267 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Mar 11 02:23:38.971108 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (469) Mar 11 02:23:38.972940 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 11 02:23:39.213060 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 11 02:23:39.222605 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 11 02:23:39.225983 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 11 02:23:39.245129 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Mar 11 02:23:39.263338 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 11 02:23:39.278225 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 11 02:23:39.307212 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Mar 11 02:23:39.307244 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 11 02:23:39.307261 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 11 02:23:39.307277 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Mar 11 02:23:39.307301 kernel: ata3.00: applying bridge limits Mar 11 02:23:39.307318 kernel: ata3.00: configured for UDMA/100 Mar 11 02:23:39.306333 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 11 02:23:39.310711 disk-uuid[554]: Primary Header is updated. Mar 11 02:23:39.310711 disk-uuid[554]: Secondary Entries is updated. Mar 11 02:23:39.310711 disk-uuid[554]: Secondary Header is updated. Mar 11 02:23:39.321660 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 11 02:23:39.339169 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 11 02:23:39.339276 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 11 02:23:39.341937 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Mar 11 02:23:39.341995 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 11 02:23:39.351962 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Mar 11 02:23:39.446191 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Mar 11 02:23:39.446675 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 11 02:23:39.460977 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Mar 11 02:23:40.315977 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 11 02:23:40.316957 disk-uuid[555]: The operation has completed successfully. Mar 11 02:23:40.364110 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 11 02:23:40.364346 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 11 02:23:40.392366 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 11 02:23:40.405147 sh[590]: Success Mar 11 02:23:40.432042 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Mar 11 02:23:40.481561 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 11 02:23:40.500077 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 11 02:23:40.511051 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 11 02:23:40.540795 kernel: BTRFS info (device dm-0): first mount of filesystem 1c1071f5-2e45-4924-9ec8-a67042aa7fbc Mar 11 02:23:40.540944 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 11 02:23:40.540967 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 11 02:23:40.549775 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 11 02:23:40.549807 kernel: BTRFS info (device dm-0): using free space tree Mar 11 02:23:40.570608 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 11 02:23:40.575568 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 11 02:23:40.594133 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 11 02:23:40.600033 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 11 02:23:40.644253 kernel: BTRFS info (device vda6): first mount of filesystem ec4b4a88-898b-4c74-8312-1e80b1c340df Mar 11 02:23:40.644336 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 11 02:23:40.644350 kernel: BTRFS info (device vda6): using free space tree Mar 11 02:23:40.657010 kernel: BTRFS info (device vda6): auto enabling async discard Mar 11 02:23:40.674635 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 11 02:23:40.686315 kernel: BTRFS info (device vda6): last unmount of filesystem ec4b4a88-898b-4c74-8312-1e80b1c340df Mar 11 02:23:40.693826 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 11 02:23:40.714152 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 11 02:23:40.795146 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Mar 11 02:23:40.794792 ignition[720]: Ignition 2.19.0 Mar 11 02:23:40.794800 ignition[720]: Stage: fetch-offline Mar 11 02:23:40.794932 ignition[720]: no configs at "/usr/lib/ignition/base.d" Mar 11 02:23:40.794945 ignition[720]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 11 02:23:40.795198 ignition[720]: parsed url from cmdline: "" Mar 11 02:23:40.795202 ignition[720]: no config URL provided Mar 11 02:23:40.795208 ignition[720]: reading system config file "/usr/lib/ignition/user.ign" Mar 11 02:23:40.795301 ignition[720]: no config at "/usr/lib/ignition/user.ign" Mar 11 02:23:40.795329 ignition[720]: op(1): [started] loading QEMU firmware config module Mar 11 02:23:40.795334 ignition[720]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 11 02:23:40.810555 ignition[720]: op(1): [finished] loading QEMU firmware config module Mar 11 02:23:40.872262 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 11 02:23:40.920527 systemd-networkd[778]: lo: Link UP Mar 11 02:23:40.920574 systemd-networkd[778]: lo: Gained carrier Mar 11 02:23:40.922945 systemd-networkd[778]: Enumeration completed Mar 11 02:23:40.924381 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 11 02:23:40.924435 systemd-networkd[778]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 11 02:23:40.925990 systemd-networkd[778]: eth0: Link UP Mar 11 02:23:40.925996 systemd-networkd[778]: eth0: Gained carrier Mar 11 02:23:40.926007 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 11 02:23:40.952149 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 11 02:23:40.977946 systemd[1]: Reached target network.target - Network. Mar 11 02:23:41.006031 systemd-networkd[778]: eth0: DHCPv4 address 10.0.0.81/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 11 02:23:41.205058 ignition[720]: parsing config with SHA512: 950703c0e7e7b68390199bea29aca305d37bf88792f8b0addf2bdcb90c6e037aa4c9025019691be3ca0f15c79ed590e26f4da26111c8a751dd5326c93d16dcb2 Mar 11 02:23:41.210601 unknown[720]: fetched base config from "system" Mar 11 02:23:41.210614 unknown[720]: fetched user config from "qemu" Mar 11 02:23:41.211772 ignition[720]: fetch-offline: fetch-offline passed Mar 11 02:23:41.215463 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 11 02:23:41.211914 ignition[720]: Ignition finished successfully Mar 11 02:23:41.223620 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 11 02:23:41.236157 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 11 02:23:41.266703 ignition[782]: Ignition 2.19.0 Mar 11 02:23:41.266740 ignition[782]: Stage: kargs Mar 11 02:23:41.266973 ignition[782]: no configs at "/usr/lib/ignition/base.d" Mar 11 02:23:41.270465 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 11 02:23:41.266986 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 11 02:23:41.267801 ignition[782]: kargs: kargs passed Mar 11 02:23:41.267914 ignition[782]: Ignition finished successfully Mar 11 02:23:41.290469 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Mar 11 02:23:41.316015 ignition[791]: Ignition 2.19.0 Mar 11 02:23:41.316052 ignition[791]: Stage: disks Mar 11 02:23:41.316205 ignition[791]: no configs at "/usr/lib/ignition/base.d" Mar 11 02:23:41.316216 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 11 02:23:41.317139 ignition[791]: disks: disks passed Mar 11 02:23:41.317184 ignition[791]: Ignition finished successfully Mar 11 02:23:41.338793 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 11 02:23:41.342375 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 11 02:23:41.350769 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 11 02:23:41.354803 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 11 02:23:41.366734 systemd[1]: Reached target sysinit.target - System Initialization. Mar 11 02:23:41.376772 systemd[1]: Reached target basic.target - Basic System. Mar 11 02:23:41.399532 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 11 02:23:41.426187 systemd-fsck[801]: ROOT: clean, 14/553520 files, 52654/553472 blocks Mar 11 02:23:41.435259 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 11 02:23:41.464122 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 11 02:23:41.613932 kernel: EXT4-fs (vda9): mounted filesystem ec53a244-36b1-4b02-8fe8-880c05c7af60 r/w with ordered data mode. Quota mode: none. Mar 11 02:23:41.614350 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 11 02:23:41.618482 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 11 02:23:41.636128 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 11 02:23:41.641161 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 11 02:23:41.649161 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 11 02:23:41.662678 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (809) Mar 11 02:23:41.649231 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 11 02:23:41.714696 kernel: BTRFS info (device vda6): first mount of filesystem ec4b4a88-898b-4c74-8312-1e80b1c340df Mar 11 02:23:41.714743 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 11 02:23:41.714760 kernel: BTRFS info (device vda6): using free space tree Mar 11 02:23:41.714778 kernel: BTRFS info (device vda6): auto enabling async discard Mar 11 02:23:41.649272 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 11 02:23:41.655701 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 11 02:23:41.725621 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 11 02:23:41.744345 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 11 02:23:41.812246 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory Mar 11 02:23:41.823001 initrd-setup-root[842]: cut: /sysroot/etc/group: No such file or directory Mar 11 02:23:41.833904 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory Mar 11 02:23:41.844208 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory Mar 11 02:23:42.025961 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
Mar 11 02:23:42.040182 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 11 02:23:42.051361 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 11 02:23:42.070538 kernel: BTRFS info (device vda6): last unmount of filesystem ec4b4a88-898b-4c74-8312-1e80b1c340df Mar 11 02:23:42.056467 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 11 02:23:42.097664 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 11 02:23:42.109066 ignition[923]: INFO : Ignition 2.19.0 Mar 11 02:23:42.109066 ignition[923]: INFO : Stage: mount Mar 11 02:23:42.126081 ignition[923]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 11 02:23:42.126081 ignition[923]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 11 02:23:42.126081 ignition[923]: INFO : mount: mount passed Mar 11 02:23:42.126081 ignition[923]: INFO : Ignition finished successfully Mar 11 02:23:42.112294 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 11 02:23:42.139117 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 11 02:23:42.327361 systemd-networkd[778]: eth0: Gained IPv6LL Mar 11 02:23:42.627273 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 11 02:23:42.642998 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (937) Mar 11 02:23:42.655595 kernel: BTRFS info (device vda6): first mount of filesystem ec4b4a88-898b-4c74-8312-1e80b1c340df Mar 11 02:23:42.655669 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 11 02:23:42.655690 kernel: BTRFS info (device vda6): using free space tree Mar 11 02:23:42.672972 kernel: BTRFS info (device vda6): auto enabling async discard Mar 11 02:23:42.676376 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 11 02:23:42.716655 ignition[954]: INFO : Ignition 2.19.0 Mar 11 02:23:42.716655 ignition[954]: INFO : Stage: files Mar 11 02:23:42.723489 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 11 02:23:42.723489 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 11 02:23:42.723489 ignition[954]: DEBUG : files: compiled without relabeling support, skipping Mar 11 02:23:42.723489 ignition[954]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 11 02:23:42.723489 ignition[954]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 11 02:23:42.750631 ignition[954]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 11 02:23:42.750631 ignition[954]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 11 02:23:42.750631 ignition[954]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 11 02:23:42.750631 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Mar 11 02:23:42.750631 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Mar 11 02:23:42.750631 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 11 02:23:42.750631 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Mar 11 02:23:42.727091 unknown[954]: wrote ssh authorized keys file for user: core Mar 11 02:23:42.816784 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Mar 11 02:23:42.887742 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 11 02:23:42.887742 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Mar 11 02:23:42.907314 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Mar 11 02:23:42.907314 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 11 02:23:42.907314 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 11 02:23:42.907314 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 11 02:23:42.907314 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 11 02:23:42.907314 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 11 02:23:42.907314 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 11 02:23:42.907314 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 11 02:23:42.907314 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 11 02:23:42.907314 ignition[954]: INFO 
: files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 11 02:23:42.907314 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 11 02:23:42.907314 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 11 02:23:42.907314 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1 Mar 11 02:23:43.190082 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Mar 11 02:23:43.679126 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 11 02:23:43.679126 ignition[954]: INFO : files: op(c): [started] processing unit "containerd.service" Mar 11 02:23:43.694395 ignition[954]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Mar 11 02:23:43.694395 ignition[954]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Mar 11 02:23:43.694395 ignition[954]: INFO : files: op(c): [finished] processing unit "containerd.service" Mar 11 02:23:43.694395 ignition[954]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Mar 11 02:23:43.694395 ignition[954]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 11 02:23:43.694395 ignition[954]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 11 02:23:43.694395 ignition[954]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Mar 11 02:23:43.694395 ignition[954]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Mar 11 02:23:43.694395 ignition[954]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 11 02:23:43.694395 ignition[954]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 11 02:23:43.694395 ignition[954]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Mar 11 02:23:43.694395 ignition[954]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service" Mar 11 02:23:43.816584 ignition[954]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service" Mar 11 02:23:43.816584 ignition[954]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service" Mar 11 02:23:43.816584 ignition[954]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service" Mar 11 02:23:43.816584 ignition[954]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service" Mar 11 02:23:43.816584 ignition[954]: INFO : files: op(14): [finished] setting preset to 
enabled for "prepare-helm.service" Mar 11 02:23:43.816584 ignition[954]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 11 02:23:43.816584 ignition[954]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 11 02:23:43.816584 ignition[954]: INFO : files: files passed Mar 11 02:23:43.816584 ignition[954]: INFO : Ignition finished successfully Mar 11 02:23:43.742216 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 11 02:23:43.764242 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 11 02:23:43.771562 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 11 02:23:43.959466 initrd-setup-root-after-ignition[981]: grep: /sysroot/oem/oem-release: No such file or directory Mar 11 02:23:43.772207 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 11 02:23:43.992220 initrd-setup-root-after-ignition[983]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 11 02:23:43.992220 initrd-setup-root-after-ignition[983]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 11 02:23:43.772354 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 11 02:23:44.097552 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 11 02:23:43.793169 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 11 02:23:43.801090 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 11 02:23:43.832226 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 11 02:23:43.873212 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 11 02:23:43.873358 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 11 02:23:43.879494 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 11 02:23:43.890894 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 11 02:23:44.199674 ignition[1004]: INFO : Ignition 2.19.0 Mar 11 02:23:44.199674 ignition[1004]: INFO : Stage: umount Mar 11 02:23:44.199674 ignition[1004]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 11 02:23:44.199674 ignition[1004]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 11 02:23:44.199674 ignition[1004]: INFO : umount: umount passed Mar 11 02:23:44.199674 ignition[1004]: INFO : Ignition finished successfully Mar 11 02:23:43.895043 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 11 02:23:43.918383 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 11 02:23:43.934888 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 11 02:23:43.946651 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 11 02:23:43.970976 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 11 02:23:43.976117 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 11 02:23:43.981789 systemd[1]: Stopped target timers.target - Timer Units. Mar 11 02:23:43.986482 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Mar 11 02:23:43.986609 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 11 02:23:43.992295 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 11 02:23:43.996788 systemd[1]: Stopped target basic.target - Basic System. Mar 11 02:23:44.009391 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 11 02:23:44.020690 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 11 02:23:44.026259 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 11 02:23:44.031085 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 11 02:23:44.035313 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 11 02:23:44.040770 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 11 02:23:44.046323 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 11 02:23:44.051646 systemd[1]: Stopped target swap.target - Swaps. Mar 11 02:23:44.055034 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 11 02:23:44.055120 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 11 02:23:44.060988 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 11 02:23:44.067006 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 11 02:23:44.073577 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 11 02:23:44.073971 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 11 02:23:44.080465 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 11 02:23:44.080542 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 11 02:23:44.085655 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 11 02:23:44.085711 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 11 02:23:44.094201 systemd[1]: Stopped target paths.target - Path Units. Mar 11 02:23:44.097349 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 11 02:23:44.097684 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 11 02:23:44.109047 systemd[1]: Stopped target slices.target - Slice Units. Mar 11 02:23:44.112291 systemd[1]: Stopped target sockets.target - Socket Units. Mar 11 02:23:44.116785 systemd[1]: iscsid.socket: Deactivated successfully. Mar 11 02:23:44.117002 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 11 02:23:44.121615 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 11 02:23:44.121729 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 11 02:23:44.126474 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 11 02:23:44.126557 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 11 02:23:44.132129 systemd[1]: ignition-files.service: Deactivated successfully. Mar 11 02:23:44.634175 systemd-journald[195]: Received SIGTERM from PID 1 (systemd). Mar 11 02:23:44.132197 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 11 02:23:44.158105 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 11 02:23:44.166821 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 11 02:23:44.166989 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. 
Mar 11 02:23:44.176062 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 11 02:23:44.182608 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 11 02:23:44.182726 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 11 02:23:44.189154 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 11 02:23:44.189242 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 11 02:23:44.202335 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 11 02:23:44.202661 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 11 02:23:44.212182 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 11 02:23:44.212367 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 11 02:23:44.224411 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 11 02:23:44.229374 systemd[1]: Stopped target network.target - Network. Mar 11 02:23:44.236532 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 11 02:23:44.236628 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 11 02:23:44.250197 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 11 02:23:44.250287 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 11 02:23:44.256338 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 11 02:23:44.256403 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 11 02:23:44.262310 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 11 02:23:44.262383 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 11 02:23:44.268067 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 11 02:23:44.278040 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 11 02:23:44.291526 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 11 02:23:44.291768 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 11 02:23:44.297925 systemd-networkd[778]: eth0: DHCPv6 lease lost Mar 11 02:23:44.305218 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 11 02:23:44.305523 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 11 02:23:44.323272 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 11 02:23:44.323475 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 11 02:23:44.338729 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 11 02:23:44.338783 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 11 02:23:44.352803 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 11 02:23:44.352964 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 11 02:23:44.376097 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 11 02:23:44.385310 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 11 02:23:44.385401 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 11 02:23:44.394761 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 11 02:23:44.394822 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 11 02:23:44.405263 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 11 02:23:44.405343 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Mar 11 02:23:44.410373 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 11 02:23:44.410476 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 11 02:23:44.419242 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 11 02:23:44.447387 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 11 02:23:44.447661 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 11 02:23:44.455908 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 11 02:23:44.456053 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 11 02:23:44.466185 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 11 02:23:44.466271 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 11 02:23:44.472753 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 11 02:23:44.472796 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 11 02:23:44.477502 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 11 02:23:44.477561 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 11 02:23:44.481815 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 11 02:23:44.481933 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 11 02:23:44.490607 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 11 02:23:44.490659 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 11 02:23:44.511329 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 11 02:23:44.519364 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 11 02:23:44.519521 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 11 02:23:44.529647 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 11 02:23:44.529725 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 11 02:23:44.541753 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 11 02:23:44.542000 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 11 02:23:44.551411 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 11 02:23:44.578230 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 11 02:23:44.592139 systemd[1]: Switching root. Mar 11 02:23:44.746374 systemd-journald[195]: Journal stopped Mar 11 02:23:46.177479 kernel: SELinux: policy capability network_peer_controls=1 Mar 11 02:23:46.177564 kernel: SELinux: policy capability open_perms=1 Mar 11 02:23:46.177580 kernel: SELinux: policy capability extended_socket_class=1 Mar 11 02:23:46.177601 kernel: SELinux: policy capability always_check_network=0 Mar 11 02:23:46.177611 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 11 02:23:46.177621 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 11 02:23:46.177631 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 11 02:23:46.177641 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 11 02:23:46.177651 kernel: audit: type=1403 audit(1773195824.875:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 11 02:23:46.177663 systemd[1]: Successfully loaded SELinux policy in 61.374ms. 
Mar 11 02:23:46.177682 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.567ms. Mar 11 02:23:46.177696 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 11 02:23:46.177707 systemd[1]: Detected virtualization kvm. Mar 11 02:23:46.177717 systemd[1]: Detected architecture x86-64. Mar 11 02:23:46.177728 systemd[1]: Detected first boot. Mar 11 02:23:46.177739 systemd[1]: Initializing machine ID from VM UUID. Mar 11 02:23:46.177749 zram_generator::config[1069]: No configuration found. Mar 11 02:23:46.177761 systemd[1]: Populated /etc with preset unit settings. Mar 11 02:23:46.177772 systemd[1]: Queued start job for default target multi-user.target. Mar 11 02:23:46.177785 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Mar 11 02:23:46.177796 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 11 02:23:46.177807 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 11 02:23:46.177818 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 11 02:23:46.177950 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 11 02:23:46.177967 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 11 02:23:46.177979 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 11 02:23:46.177990 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 11 02:23:46.178001 systemd[1]: Created slice user.slice - User and Session Slice. Mar 11 02:23:46.178016 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 11 02:23:46.178027 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 11 02:23:46.178038 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 11 02:23:46.178048 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 11 02:23:46.178060 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 11 02:23:46.178070 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 11 02:23:46.178081 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 11 02:23:46.178092 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 11 02:23:46.178103 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 11 02:23:46.178116 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 11 02:23:46.178127 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 11 02:23:46.178138 systemd[1]: Reached target slices.target - Slice Units. Mar 11 02:23:46.178148 systemd[1]: Reached target swap.target - Swaps. Mar 11 02:23:46.178159 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 11 02:23:46.178170 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. 
Mar 11 02:23:46.178182 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 11 02:23:46.178193 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Mar 11 02:23:46.178207 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 11 02:23:46.178218 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 11 02:23:46.178229 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 11 02:23:46.178239 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 11 02:23:46.178250 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 11 02:23:46.178265 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 11 02:23:46.178276 systemd[1]: Mounting media.mount - External Media Directory... Mar 11 02:23:46.178287 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 11 02:23:46.178298 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 11 02:23:46.178311 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 11 02:23:46.178321 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 11 02:23:46.178332 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 11 02:23:46.178343 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 11 02:23:46.178354 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 11 02:23:46.178365 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 11 02:23:46.178376 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 11 02:23:46.178386 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 11 02:23:46.178399 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 11 02:23:46.178410 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 11 02:23:46.178421 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 11 02:23:46.178479 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 11 02:23:46.178503 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Mar 11 02:23:46.178517 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Mar 11 02:23:46.178528 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 11 02:23:46.178538 kernel: fuse: init (API version 7.39) Mar 11 02:23:46.178553 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 11 02:23:46.178563 kernel: loop: module loaded Mar 11 02:23:46.178574 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 11 02:23:46.178585 kernel: ACPI: bus type drm_connector registered Mar 11 02:23:46.178615 systemd-journald[1169]: Collecting audit messages is disabled. Mar 11 02:23:46.178640 systemd-journald[1169]: Journal started Mar 11 02:23:46.178662 systemd-journald[1169]: Runtime Journal (/run/log/journal/1bb7d9c7c1734b31970cd488a1160c5b) is 6.0M, max 48.4M, 42.3M free. 
Mar 11 02:23:46.193953 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 11 02:23:46.207320 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 11 02:23:46.219924 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 11 02:23:46.226667 systemd[1]: Started systemd-journald.service - Journal Service. Mar 11 02:23:46.233961 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 11 02:23:46.240529 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 11 02:23:46.247524 systemd[1]: Mounted media.mount - External Media Directory. Mar 11 02:23:46.251972 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 11 02:23:46.256427 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 11 02:23:46.261307 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 11 02:23:46.265661 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 11 02:23:46.271064 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 11 02:23:46.276649 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 11 02:23:46.276981 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 11 02:23:46.282179 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 11 02:23:46.282408 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 11 02:23:46.287678 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 11 02:23:46.287968 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 11 02:23:46.292801 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 11 02:23:46.293116 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 11 02:23:46.298570 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 11 02:23:46.298801 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 11 02:23:46.304721 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 11 02:23:46.305497 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 11 02:23:46.310970 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 11 02:23:46.316211 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 11 02:23:46.322351 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 11 02:23:46.328334 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 11 02:23:46.347197 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 11 02:23:46.370228 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 11 02:23:46.376693 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 11 02:23:46.381123 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 11 02:23:46.382758 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 11 02:23:46.388626 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Mar 11 02:23:46.393616 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 11 02:23:46.395385 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 11 02:23:46.400594 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 11 02:23:46.402674 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 11 02:23:46.412055 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 11 02:23:46.421021 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 11 02:23:46.426247 systemd-journald[1169]: Time spent on flushing to /var/log/journal/1bb7d9c7c1734b31970cd488a1160c5b is 14.321ms for 930 entries. Mar 11 02:23:46.426247 systemd-journald[1169]: System Journal (/var/log/journal/1bb7d9c7c1734b31970cd488a1160c5b) is 8.0M, max 195.6M, 187.6M free. Mar 11 02:23:46.458714 systemd-journald[1169]: Received client request to flush runtime journal. Mar 11 02:23:46.427784 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 11 02:23:46.434514 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 11 02:23:46.442647 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 11 02:23:46.455031 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 11 02:23:46.461102 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 11 02:23:46.470675 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 11 02:23:46.478819 udevadm[1211]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Mar 11 02:23:46.490487 systemd-tmpfiles[1210]: ACLs are not supported, ignoring. Mar 11 02:23:46.490530 systemd-tmpfiles[1210]: ACLs are not supported, ignoring. Mar 11 02:23:46.500296 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 11 02:23:46.513988 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 11 02:23:46.552934 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 11 02:23:46.565184 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 11 02:23:46.592530 systemd-tmpfiles[1230]: ACLs are not supported, ignoring. Mar 11 02:23:46.592594 systemd-tmpfiles[1230]: ACLs are not supported, ignoring. Mar 11 02:23:46.600611 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 11 02:23:46.932643 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 11 02:23:46.953106 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 11 02:23:46.990018 systemd-udevd[1237]: Using default interface naming scheme 'v255'. Mar 11 02:23:47.027341 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 11 02:23:47.051436 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 11 02:23:47.075101 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 11 02:23:47.086333 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. 
Mar 11 02:23:47.108054 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1238) Mar 11 02:23:47.188814 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Mar 11 02:23:47.183331 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 11 02:23:47.191488 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 11 02:23:47.218943 kernel: ACPI: button: Power Button [PWRF] Mar 11 02:23:47.231168 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 11 02:23:47.231507 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Mar 11 02:23:47.231690 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 11 02:23:47.248005 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Mar 11 02:23:47.298020 systemd-networkd[1248]: lo: Link UP Mar 11 02:23:47.298029 systemd-networkd[1248]: lo: Gained carrier Mar 11 02:23:47.300045 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 11 02:23:47.305394 systemd-networkd[1248]: Enumeration completed Mar 11 02:23:47.305558 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 11 02:23:47.315373 systemd-networkd[1248]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 11 02:23:47.315381 systemd-networkd[1248]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 11 02:23:47.323936 kernel: mousedev: PS/2 mouse device common for all mice Mar 11 02:23:47.316713 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 11 02:23:47.325658 systemd-networkd[1248]: eth0: Link UP Mar 11 02:23:47.325719 systemd-networkd[1248]: eth0: Gained carrier Mar 11 02:23:47.325771 systemd-networkd[1248]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 11 02:23:47.454035 systemd-networkd[1248]: eth0: DHCPv4 address 10.0.0.81/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 11 02:23:47.481982 kernel: kvm_amd: TSC scaling supported Mar 11 02:23:47.482131 kernel: kvm_amd: Nested Virtualization enabled Mar 11 02:23:47.482174 kernel: kvm_amd: Nested Paging enabled Mar 11 02:23:47.482212 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Mar 11 02:23:47.482296 kernel: kvm_amd: PMU virtualization is disabled Mar 11 02:23:47.571909 kernel: EDAC MC: Ver: 3.0.0 Mar 11 02:23:47.597267 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 11 02:23:47.723178 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 11 02:23:47.728772 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 11 02:23:47.744241 lvm[1281]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 11 02:23:47.785552 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 11 02:23:47.790607 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 11 02:23:47.808113 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 11 02:23:47.819114 lvm[1286]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 11 02:23:47.865945 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
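Editorial aside: the DHCPv4 lease line above ("eth0: DHCPv4 address 10.0.0.81/16, gateway 10.0.0.1") implies a /16 subnet. A minimal sketch using only values from that log entry:

import ipaddress

# Values from the systemd-networkd lease message above.
iface = ipaddress.ip_interface("10.0.0.81/16")
gateway = ipaddress.ip_address("10.0.0.1")

print(iface.network)             # 10.0.0.0/16
print(iface.network.netmask)     # 255.255.0.0
print(gateway in iface.network)  # True: the gateway sits inside the leased subnet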
Mar 11 02:23:47.871767 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 11 02:23:47.877404 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 11 02:23:47.877437 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 11 02:23:47.882103 systemd[1]: Reached target machines.target - Containers. Mar 11 02:23:47.887963 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Mar 11 02:23:47.912350 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 11 02:23:47.919542 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 11 02:23:47.924128 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 11 02:23:47.925629 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 11 02:23:47.930751 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Mar 11 02:23:47.938152 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 11 02:23:47.945626 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 11 02:23:47.953442 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 11 02:23:47.970990 kernel: loop0: detected capacity change from 0 to 142488 Mar 11 02:23:47.981640 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 11 02:23:47.983410 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Mar 11 02:23:48.012077 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 11 02:23:48.041936 kernel: loop1: detected capacity change from 0 to 228704 Mar 11 02:23:48.089977 kernel: loop2: detected capacity change from 0 to 140768 Mar 11 02:23:48.145936 kernel: loop3: detected capacity change from 0 to 142488 Mar 11 02:23:48.176056 kernel: loop4: detected capacity change from 0 to 228704 Mar 11 02:23:48.194899 kernel: loop5: detected capacity change from 0 to 140768 Mar 11 02:23:48.218291 (sd-merge)[1306]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Mar 11 02:23:48.219122 (sd-merge)[1306]: Merged extensions into '/usr'. Mar 11 02:23:48.223718 systemd[1]: Reloading requested from client PID 1294 ('systemd-sysext') (unit systemd-sysext.service)... Mar 11 02:23:48.223772 systemd[1]: Reloading... Mar 11 02:23:48.282907 zram_generator::config[1337]: No configuration found. Mar 11 02:23:48.335101 ldconfig[1291]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 11 02:23:48.450737 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 11 02:23:48.524124 systemd[1]: Reloading finished in 299 ms. Mar 11 02:23:48.554793 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 11 02:23:48.560406 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 11 02:23:48.587558 systemd[1]: Starting ensure-sysext.service... 
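Editorial aside: the sd-merge lines above show systemd-sysext picking up the 'containerd-flatcar', 'docker-flatcar', and 'kubernetes' extensions and merging them into /usr. The sketch below enumerates where such *.raw images would normally be found; only /etc/extensions is confirmed by this log (Ignition wrote the /etc/extensions/kubernetes.raw link earlier), the other directories are assumed standard sysext search locations.

from pathlib import Path

# Assumed sysext search path; only /etc/extensions is confirmed by this log.
SEARCH_DIRS = ("/etc/extensions", "/run/extensions", "/var/lib/extensions")

def candidate_images():
    """Yield *.raw sysext images that sd-merge could pick up."""
    for d in SEARCH_DIRS:
        p = Path(d)
        if p.is_dir():
            yield from sorted(p.glob("*.raw"))

if __name__ == "__main__":
    for img in candidate_images():
        # On this machine this would include /etc/extensions/kubernetes.raw,
        # matching the 'kubernetes' extension named in the sd-merge line above.
        print(img)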
Mar 11 02:23:48.593640 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 11 02:23:48.603160 systemd[1]: Reloading requested from client PID 1378 ('systemctl') (unit ensure-sysext.service)... Mar 11 02:23:48.603303 systemd[1]: Reloading... Mar 11 02:23:48.636587 systemd-tmpfiles[1380]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 11 02:23:48.637046 systemd-tmpfiles[1380]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 11 02:23:48.638072 systemd-tmpfiles[1380]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 11 02:23:48.638365 systemd-tmpfiles[1380]: ACLs are not supported, ignoring. Mar 11 02:23:48.638519 systemd-tmpfiles[1380]: ACLs are not supported, ignoring. Mar 11 02:23:48.642284 systemd-tmpfiles[1380]: Detected autofs mount point /boot during canonicalization of boot. Mar 11 02:23:48.642325 systemd-tmpfiles[1380]: Skipping /boot Mar 11 02:23:48.654661 systemd-tmpfiles[1380]: Detected autofs mount point /boot during canonicalization of boot. Mar 11 02:23:48.654676 systemd-tmpfiles[1380]: Skipping /boot Mar 11 02:23:48.684955 zram_generator::config[1409]: No configuration found. Mar 11 02:23:48.825699 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 11 02:23:48.894529 systemd[1]: Reloading finished in 290 ms. Mar 11 02:23:48.930728 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 11 02:23:48.957593 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 11 02:23:48.965216 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 11 02:23:48.973107 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 11 02:23:48.983243 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 11 02:23:48.994300 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 11 02:23:49.015616 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 11 02:23:49.016230 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 11 02:23:49.020209 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 11 02:23:49.030267 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 11 02:23:49.037372 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 11 02:23:49.047607 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 11 02:23:49.054010 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 11 02:23:49.055247 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 11 02:23:49.057364 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 11 02:23:49.063777 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Mar 11 02:23:49.064094 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 11 02:23:49.069630 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 11 02:23:49.069917 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 11 02:23:49.075089 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 11 02:23:49.075446 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 11 02:23:49.079252 augenrules[1483]: No rules Mar 11 02:23:49.081897 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 11 02:23:49.087551 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 11 02:23:49.087998 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 11 02:23:49.095286 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 11 02:23:49.103533 systemd[1]: Finished ensure-sysext.service. Mar 11 02:23:49.114971 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 11 02:23:49.115240 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 11 02:23:49.121022 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 11 02:23:49.126404 systemd-resolved[1458]: Positive Trust Anchors: Mar 11 02:23:49.126449 systemd-resolved[1458]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 11 02:23:49.126516 systemd-resolved[1458]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 11 02:23:49.130619 systemd-resolved[1458]: Defaulting to hostname 'linux'. Mar 11 02:23:49.137038 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 11 02:23:49.140968 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 11 02:23:49.141506 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 11 02:23:49.146531 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 11 02:23:49.153794 systemd[1]: Reached target network.target - Network. Mar 11 02:23:49.157279 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 11 02:23:49.162563 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 11 02:23:49.175120 systemd-networkd[1248]: eth0: Gained IPv6LL Mar 11 02:23:49.179107 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 11 02:23:49.184079 systemd[1]: Reached target network-online.target - Network is Online. Mar 11 02:23:49.245631 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 11 02:23:49.871666 systemd-resolved[1458]: Clock change detected. Flushing caches. 
Mar 11 02:23:49.871703 systemd-timesyncd[1501]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 11 02:23:49.871758 systemd-timesyncd[1501]: Initial clock synchronization to Wed 2026-03-11 02:23:49.871536 UTC. Mar 11 02:23:49.875162 systemd[1]: Reached target sysinit.target - System Initialization. Mar 11 02:23:49.879710 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 11 02:23:49.884620 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 11 02:23:49.889619 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 11 02:23:49.895036 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 11 02:23:49.895136 systemd[1]: Reached target paths.target - Path Units. Mar 11 02:23:49.899430 systemd[1]: Reached target time-set.target - System Time Set. Mar 11 02:23:49.904060 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 11 02:23:49.908790 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 11 02:23:49.914007 systemd[1]: Reached target timers.target - Timer Units. Mar 11 02:23:49.919535 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 11 02:23:49.926839 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 11 02:23:49.933440 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 11 02:23:49.939096 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 11 02:23:49.943575 systemd[1]: Reached target sockets.target - Socket Units. Mar 11 02:23:49.948099 systemd[1]: Reached target basic.target - Basic System. Mar 11 02:23:49.951747 systemd[1]: System is tainted: cgroupsv1 Mar 11 02:23:49.951822 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 11 02:23:49.951850 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 11 02:23:49.953690 systemd[1]: Starting containerd.service - containerd container runtime... Mar 11 02:23:49.961239 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 11 02:23:49.968868 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 11 02:23:49.976052 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 11 02:23:49.984404 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 11 02:23:49.990276 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 11 02:23:49.994629 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 11 02:23:50.002650 jq[1515]: false Mar 11 02:23:50.004227 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Mar 11 02:23:50.008784 extend-filesystems[1516]: Found loop3 Mar 11 02:23:50.014577 extend-filesystems[1516]: Found loop4 Mar 11 02:23:50.014577 extend-filesystems[1516]: Found loop5 Mar 11 02:23:50.014577 extend-filesystems[1516]: Found sr0 Mar 11 02:23:50.014577 extend-filesystems[1516]: Found vda Mar 11 02:23:50.014577 extend-filesystems[1516]: Found vda1 Mar 11 02:23:50.014577 extend-filesystems[1516]: Found vda2 Mar 11 02:23:50.014577 extend-filesystems[1516]: Found vda3 Mar 11 02:23:50.014577 extend-filesystems[1516]: Found usr Mar 11 02:23:50.014577 extend-filesystems[1516]: Found vda4 Mar 11 02:23:50.014577 extend-filesystems[1516]: Found vda6 Mar 11 02:23:50.014577 extend-filesystems[1516]: Found vda7 Mar 11 02:23:50.014577 extend-filesystems[1516]: Found vda9 Mar 11 02:23:50.014577 extend-filesystems[1516]: Checking size of /dev/vda9 Mar 11 02:23:50.087747 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1245) Mar 11 02:23:50.057842 dbus-daemon[1513]: [system] SELinux support is enabled Mar 11 02:23:50.088247 extend-filesystems[1516]: Resized partition /dev/vda9 Mar 11 02:23:50.017302 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 11 02:23:50.099030 extend-filesystems[1532]: resize2fs 1.47.1 (20-May-2024) Mar 11 02:23:50.110859 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 11 02:23:50.040549 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 11 02:23:50.054528 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 11 02:23:50.073706 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 11 02:23:50.092910 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 11 02:23:50.105873 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 11 02:23:50.111655 systemd[1]: Starting update-engine.service - Update Engine... Mar 11 02:23:50.119408 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 11 02:23:50.130830 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 11 02:23:50.147637 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 11 02:23:50.148096 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 11 02:23:50.149773 systemd[1]: motdgen.service: Deactivated successfully. Mar 11 02:23:50.150412 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 11 02:23:50.164518 jq[1552]: true Mar 11 02:23:50.171745 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 11 02:23:50.177840 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 11 02:23:50.178179 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 11 02:23:50.184589 update_engine[1550]: I20260311 02:23:50.184026 1550 main.cc:92] Flatcar Update Engine starting Mar 11 02:23:50.186594 update_engine[1550]: I20260311 02:23:50.186556 1550 update_check_scheduler.cc:74] Next update check in 6m55s Mar 11 02:23:50.218415 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 11 02:23:50.221999 (ntainerd)[1560]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 11 02:23:50.227268 systemd[1]: coreos-metadata.service: Deactivated successfully. 
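Editorial aside: the EXT4 messages above record /dev/vda9 growing from 553472 to 1864699 blocks. A quick check of what that means in bytes, assuming the 4 KiB block size that the resize2fs summary reports:

# Block counts from the EXT4 resize messages above; the resize2fs summary in
# this log reports the filesystem size in "(4k)" blocks.
BLOCK_SIZE = 4096
old_blocks, new_blocks = 553_472, 1_864_699

def gib(blocks: int) -> float:
    return blocks * BLOCK_SIZE / 2**30

print(f"/dev/vda9 before resize: {gib(old_blocks):.2f} GiB")  # ~2.11 GiB
print(f"/dev/vda9 after resize:  {gib(new_blocks):.2f} GiB")  # ~7.11 GiB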
Mar 11 02:23:50.228713 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 11 02:23:50.248832 jq[1559]: true Mar 11 02:23:50.251190 systemd-logind[1545]: Watching system buttons on /dev/input/event1 (Power Button) Mar 11 02:23:50.253762 extend-filesystems[1532]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 11 02:23:50.253762 extend-filesystems[1532]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 11 02:23:50.253762 extend-filesystems[1532]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 11 02:23:50.251215 systemd-logind[1545]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 11 02:23:50.291468 extend-filesystems[1516]: Resized filesystem in /dev/vda9 Mar 11 02:23:50.258133 systemd-logind[1545]: New seat seat0. Mar 11 02:23:50.302506 tar[1557]: linux-amd64/LICENSE Mar 11 02:23:50.302506 tar[1557]: linux-amd64/helm Mar 11 02:23:50.265635 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 11 02:23:50.265927 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 11 02:23:50.277661 systemd[1]: Started update-engine.service - Update Engine. Mar 11 02:23:50.283198 systemd[1]: Started systemd-logind.service - User Login Management. Mar 11 02:23:50.293521 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 11 02:23:50.293743 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 11 02:23:50.293873 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 11 02:23:50.310725 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 11 02:23:50.310830 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 11 02:23:50.318277 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 11 02:23:50.326583 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 11 02:23:50.354714 bash[1597]: Updated "/home/core/.ssh/authorized_keys" Mar 11 02:23:50.361063 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 11 02:23:50.368060 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 11 02:23:50.393600 locksmithd[1593]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 11 02:23:50.488287 containerd[1560]: time="2026-03-11T02:23:50.487114144Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 11 02:23:50.514045 containerd[1560]: time="2026-03-11T02:23:50.513917126Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 11 02:23:50.517441 containerd[1560]: time="2026-03-11T02:23:50.516885827Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 11 02:23:50.517441 containerd[1560]: time="2026-03-11T02:23:50.516912517Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 11 02:23:50.517441 containerd[1560]: time="2026-03-11T02:23:50.516926723Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 11 02:23:50.517441 containerd[1560]: time="2026-03-11T02:23:50.517152865Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 11 02:23:50.517441 containerd[1560]: time="2026-03-11T02:23:50.517168364Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 11 02:23:50.517441 containerd[1560]: time="2026-03-11T02:23:50.517242282Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 11 02:23:50.517441 containerd[1560]: time="2026-03-11T02:23:50.517256027Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 11 02:23:50.517610 containerd[1560]: time="2026-03-11T02:23:50.517584581Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 11 02:23:50.517636 containerd[1560]: time="2026-03-11T02:23:50.517609407Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 11 02:23:50.517636 containerd[1560]: time="2026-03-11T02:23:50.517626589Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 11 02:23:50.517666 containerd[1560]: time="2026-03-11T02:23:50.517638782Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 11 02:23:50.517811 containerd[1560]: time="2026-03-11T02:23:50.517748497Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 11 02:23:50.518156 containerd[1560]: time="2026-03-11T02:23:50.518097919Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 11 02:23:50.519387 containerd[1560]: time="2026-03-11T02:23:50.518279799Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 11 02:23:50.519387 containerd[1560]: time="2026-03-11T02:23:50.518294376Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 11 02:23:50.519387 containerd[1560]: time="2026-03-11T02:23:50.518455777Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Mar 11 02:23:50.519387 containerd[1560]: time="2026-03-11T02:23:50.518508856Z" level=info msg="metadata content store policy set" policy=shared Mar 11 02:23:50.524221 containerd[1560]: time="2026-03-11T02:23:50.524168663Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 11 02:23:50.524221 containerd[1560]: time="2026-03-11T02:23:50.524218506Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 11 02:23:50.524221 containerd[1560]: time="2026-03-11T02:23:50.524239265Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 11 02:23:50.524523 containerd[1560]: time="2026-03-11T02:23:50.524253932Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 11 02:23:50.524523 containerd[1560]: time="2026-03-11T02:23:50.524267447Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 11 02:23:50.524523 containerd[1560]: time="2026-03-11T02:23:50.524468001Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 11 02:23:50.524771 containerd[1560]: time="2026-03-11T02:23:50.524716075Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 11 02:23:50.524924 containerd[1560]: time="2026-03-11T02:23:50.524857860Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 11 02:23:50.524924 containerd[1560]: time="2026-03-11T02:23:50.524872888Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 11 02:23:50.524924 containerd[1560]: time="2026-03-11T02:23:50.524884630Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 11 02:23:50.524924 containerd[1560]: time="2026-03-11T02:23:50.524895760Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 11 02:23:50.524924 containerd[1560]: time="2026-03-11T02:23:50.524907001Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 11 02:23:50.524924 containerd[1560]: time="2026-03-11T02:23:50.524924453Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 11 02:23:50.525139 containerd[1560]: time="2026-03-11T02:23:50.524936376Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 11 02:23:50.525139 containerd[1560]: time="2026-03-11T02:23:50.524948739Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 11 02:23:50.525139 containerd[1560]: time="2026-03-11T02:23:50.525011386Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 11 02:23:50.525139 containerd[1560]: time="2026-03-11T02:23:50.525022567Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 11 02:23:50.525139 containerd[1560]: time="2026-03-11T02:23:50.525032746Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Mar 11 02:23:50.525139 containerd[1560]: time="2026-03-11T02:23:50.525050148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 11 02:23:50.525139 containerd[1560]: time="2026-03-11T02:23:50.525061870Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 11 02:23:50.525139 containerd[1560]: time="2026-03-11T02:23:50.525072360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 11 02:23:50.525139 containerd[1560]: time="2026-03-11T02:23:50.525082298Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 11 02:23:50.525139 containerd[1560]: time="2026-03-11T02:23:50.525093219Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 11 02:23:50.525139 containerd[1560]: time="2026-03-11T02:23:50.525110601Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 11 02:23:50.525139 containerd[1560]: time="2026-03-11T02:23:50.525120870Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 11 02:23:50.525139 containerd[1560]: time="2026-03-11T02:23:50.525131470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 11 02:23:50.525139 containerd[1560]: time="2026-03-11T02:23:50.525141489Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 11 02:23:50.525496 containerd[1560]: time="2026-03-11T02:23:50.525160995Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 11 02:23:50.525496 containerd[1560]: time="2026-03-11T02:23:50.525180151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 11 02:23:50.525496 containerd[1560]: time="2026-03-11T02:23:50.525195920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 11 02:23:50.525496 containerd[1560]: time="2026-03-11T02:23:50.525211770Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 11 02:23:50.525496 containerd[1560]: time="2026-03-11T02:23:50.525231928Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 11 02:23:50.525496 containerd[1560]: time="2026-03-11T02:23:50.525260020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 11 02:23:50.525496 containerd[1560]: time="2026-03-11T02:23:50.525275659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 11 02:23:50.525496 containerd[1560]: time="2026-03-11T02:23:50.525288763Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 11 02:23:50.525496 containerd[1560]: time="2026-03-11T02:23:50.525413156Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 11 02:23:50.525496 containerd[1560]: time="2026-03-11T02:23:50.525430879Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 11 02:23:50.525496 containerd[1560]: time="2026-03-11T02:23:50.525441198Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 11 02:23:50.525496 containerd[1560]: time="2026-03-11T02:23:50.525451327Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 11 02:23:50.525496 containerd[1560]: time="2026-03-11T02:23:50.525461586Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 11 02:23:50.525689 containerd[1560]: time="2026-03-11T02:23:50.525472386Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 11 02:23:50.525689 containerd[1560]: time="2026-03-11T02:23:50.525481544Z" level=info msg="NRI interface is disabled by configuration." Mar 11 02:23:50.525689 containerd[1560]: time="2026-03-11T02:23:50.525490309Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 11 02:23:50.526479 containerd[1560]: time="2026-03-11T02:23:50.525707846Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 11 02:23:50.526479 containerd[1560]: time="2026-03-11T02:23:50.525765984Z" level=info msg="Connect containerd service" Mar 11 02:23:50.526479 containerd[1560]: time="2026-03-11T02:23:50.525799587Z" level=info msg="using legacy CRI server" Mar 11 02:23:50.526479 containerd[1560]: time="2026-03-11T02:23:50.525806019Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 11 02:23:50.526479 containerd[1560]: time="2026-03-11T02:23:50.525906707Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 11 02:23:50.527830 containerd[1560]: time="2026-03-11T02:23:50.526926190Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 11 02:23:50.527830 containerd[1560]: time="2026-03-11T02:23:50.527190824Z" level=info msg="Start subscribing containerd event" Mar 11 02:23:50.527830 containerd[1560]: time="2026-03-11T02:23:50.527227653Z" level=info msg="Start recovering state" Mar 11 02:23:50.527830 containerd[1560]: time="2026-03-11T02:23:50.527277817Z" level=info msg="Start event monitor" Mar 11 02:23:50.527830 containerd[1560]: time="2026-03-11T02:23:50.527300589Z" level=info msg="Start snapshots syncer" Mar 11 02:23:50.527830 containerd[1560]: time="2026-03-11T02:23:50.527395957Z" level=info msg="Start cni network conf syncer for default" Mar 11 02:23:50.527830 containerd[1560]: time="2026-03-11T02:23:50.527411125Z" level=info msg="Start streaming server" Mar 11 02:23:50.527830 containerd[1560]: time="2026-03-11T02:23:50.527775285Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 11 02:23:50.527830 containerd[1560]: time="2026-03-11T02:23:50.527828113Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 11 02:23:50.530840 containerd[1560]: time="2026-03-11T02:23:50.529442667Z" level=info msg="containerd successfully booted in 0.043548s" Mar 11 02:23:50.529595 systemd[1]: Started containerd.service - containerd container runtime. Mar 11 02:23:50.686723 sshd_keygen[1551]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 11 02:23:50.715683 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 11 02:23:50.728582 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 11 02:23:50.742713 systemd[1]: issuegen.service: Deactivated successfully. Mar 11 02:23:50.743074 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 11 02:23:50.754579 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 11 02:23:50.766271 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 11 02:23:50.789738 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 11 02:23:50.795623 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 11 02:23:50.796822 tar[1557]: linux-amd64/README.md Mar 11 02:23:50.800955 systemd[1]: Reached target getty.target - Login Prompts. Mar 11 02:23:50.817184 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 11 02:23:51.135064 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
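The "failed to load cni during init" error above is expected at this stage: nothing has installed a CNI network configuration yet, and containerd simply retries once a file appears under /etc/cni/net.d. For orientation only, a minimal bridge-based conflist of the kind containerd's CRI plugin accepts could look like the sketch below; the file name, network name and subnet are illustrative, not values from this host.

    /etc/cni/net.d/10-example.conflist (illustrative)
    {
      "cniVersion": "0.4.0",
      "name": "example-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/24",
            "routes": [ { "dst": "0.0.0.0/0" } ]
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }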
Mar 11 02:23:51.140662 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 11 02:23:51.145124 systemd[1]: Startup finished in 9.794s (kernel) + 5.702s (userspace) = 15.496s. Mar 11 02:23:51.221911 (kubelet)[1646]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 11 02:23:51.744289 kubelet[1646]: E0311 02:23:51.744146 1646 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 11 02:23:51.748120 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 11 02:23:51.748568 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 11 02:23:52.308849 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 11 02:23:52.320648 systemd[1]: Started sshd@0-10.0.0.81:22-10.0.0.1:58796.service - OpenSSH per-connection server daemon (10.0.0.1:58796). Mar 11 02:23:52.376195 sshd[1660]: Accepted publickey for core from 10.0.0.1 port 58796 ssh2: RSA SHA256:CCKsrvYJZx5/gL+R4PqiGPMUodsOwVHZ8ifP8/vDZKQ Mar 11 02:23:52.378583 sshd[1660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:23:52.389085 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 11 02:23:52.399680 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 11 02:23:52.402044 systemd-logind[1545]: New session 1 of user core. Mar 11 02:23:52.416611 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 11 02:23:52.423840 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 11 02:23:52.430058 (systemd)[1666]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 11 02:23:52.558407 systemd[1666]: Queued start job for default target default.target. Mar 11 02:23:52.558862 systemd[1666]: Created slice app.slice - User Application Slice. Mar 11 02:23:52.558917 systemd[1666]: Reached target paths.target - Paths. Mar 11 02:23:52.558929 systemd[1666]: Reached target timers.target - Timers. Mar 11 02:23:52.568483 systemd[1666]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 11 02:23:52.576471 systemd[1666]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 11 02:23:52.576602 systemd[1666]: Reached target sockets.target - Sockets. Mar 11 02:23:52.576621 systemd[1666]: Reached target basic.target - Basic System. Mar 11 02:23:52.576662 systemd[1666]: Reached target default.target - Main User Target. Mar 11 02:23:52.576697 systemd[1666]: Startup finished in 137ms. Mar 11 02:23:52.576953 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 11 02:23:52.578826 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 11 02:23:52.644938 systemd[1]: Started sshd@1-10.0.0.81:22-10.0.0.1:58812.service - OpenSSH per-connection server daemon (10.0.0.1:58812). Mar 11 02:23:52.683049 sshd[1678]: Accepted publickey for core from 10.0.0.1 port 58812 ssh2: RSA SHA256:CCKsrvYJZx5/gL+R4PqiGPMUodsOwVHZ8ifP8/vDZKQ Mar 11 02:23:52.684644 sshd[1678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:23:52.690447 systemd-logind[1545]: New session 2 of user core. 
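The kubelet exit above is likewise expected: /var/lib/kubelet/config.yaml is normally written by kubeadm during init/join, which has not run yet, so the unit fails and systemd retries it later. Purely as a sketch of what that file contains once provisioned (field values illustrative, not this node's eventual configuration):

    # /var/lib/kubelet/config.yaml -- illustrative sketch only
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    authentication:
      anonymous:
        enabled: false
      webhook:
        enabled: true
    authorization:
      mode: Webhook
    clusterDNS:
      - 10.96.0.10
    clusterDomain: cluster.local
    staticPodPath: /etc/kubernetes/manifests
    cgroupDriver: cgroupfs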
Mar 11 02:23:52.705856 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 11 02:23:52.766043 sshd[1678]: pam_unix(sshd:session): session closed for user core Mar 11 02:23:52.772588 systemd[1]: Started sshd@2-10.0.0.81:22-10.0.0.1:58816.service - OpenSSH per-connection server daemon (10.0.0.1:58816). Mar 11 02:23:52.773256 systemd[1]: sshd@1-10.0.0.81:22-10.0.0.1:58812.service: Deactivated successfully. Mar 11 02:23:52.775891 systemd-logind[1545]: Session 2 logged out. Waiting for processes to exit. Mar 11 02:23:52.776598 systemd[1]: session-2.scope: Deactivated successfully. Mar 11 02:23:52.779127 systemd-logind[1545]: Removed session 2. Mar 11 02:23:52.803303 sshd[1683]: Accepted publickey for core from 10.0.0.1 port 58816 ssh2: RSA SHA256:CCKsrvYJZx5/gL+R4PqiGPMUodsOwVHZ8ifP8/vDZKQ Mar 11 02:23:52.805396 sshd[1683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:23:52.811303 systemd-logind[1545]: New session 3 of user core. Mar 11 02:23:52.823838 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 11 02:23:52.877399 sshd[1683]: pam_unix(sshd:session): session closed for user core Mar 11 02:23:52.886597 systemd[1]: Started sshd@3-10.0.0.81:22-10.0.0.1:58822.service - OpenSSH per-connection server daemon (10.0.0.1:58822). Mar 11 02:23:52.887203 systemd[1]: sshd@2-10.0.0.81:22-10.0.0.1:58816.service: Deactivated successfully. Mar 11 02:23:52.889850 systemd-logind[1545]: Session 3 logged out. Waiting for processes to exit. Mar 11 02:23:52.890432 systemd[1]: session-3.scope: Deactivated successfully. Mar 11 02:23:52.892768 systemd-logind[1545]: Removed session 3. Mar 11 02:23:52.916128 sshd[1691]: Accepted publickey for core from 10.0.0.1 port 58822 ssh2: RSA SHA256:CCKsrvYJZx5/gL+R4PqiGPMUodsOwVHZ8ifP8/vDZKQ Mar 11 02:23:52.917807 sshd[1691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:23:52.924094 systemd-logind[1545]: New session 4 of user core. Mar 11 02:23:52.934940 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 11 02:23:52.994202 sshd[1691]: pam_unix(sshd:session): session closed for user core Mar 11 02:23:53.003762 systemd[1]: Started sshd@4-10.0.0.81:22-10.0.0.1:58826.service - OpenSSH per-connection server daemon (10.0.0.1:58826). Mar 11 02:23:53.004579 systemd[1]: sshd@3-10.0.0.81:22-10.0.0.1:58822.service: Deactivated successfully. Mar 11 02:23:53.008291 systemd-logind[1545]: Session 4 logged out. Waiting for processes to exit. Mar 11 02:23:53.009139 systemd[1]: session-4.scope: Deactivated successfully. Mar 11 02:23:53.010797 systemd-logind[1545]: Removed session 4. Mar 11 02:23:53.035129 sshd[1699]: Accepted publickey for core from 10.0.0.1 port 58826 ssh2: RSA SHA256:CCKsrvYJZx5/gL+R4PqiGPMUodsOwVHZ8ifP8/vDZKQ Mar 11 02:23:53.036828 sshd[1699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:23:53.042945 systemd-logind[1545]: New session 5 of user core. Mar 11 02:23:53.057674 systemd[1]: Started session-5.scope - Session 5 of User core. 
Mar 11 02:23:53.123926 sudo[1706]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 11 02:23:53.124599 sudo[1706]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 11 02:23:53.145204 sudo[1706]: pam_unix(sudo:session): session closed for user root Mar 11 02:23:53.147710 sshd[1699]: pam_unix(sshd:session): session closed for user core Mar 11 02:23:53.154656 systemd[1]: Started sshd@5-10.0.0.81:22-10.0.0.1:58840.service - OpenSSH per-connection server daemon (10.0.0.1:58840). Mar 11 02:23:53.155418 systemd[1]: sshd@4-10.0.0.81:22-10.0.0.1:58826.service: Deactivated successfully. Mar 11 02:23:53.159247 systemd[1]: session-5.scope: Deactivated successfully. Mar 11 02:23:53.160103 systemd-logind[1545]: Session 5 logged out. Waiting for processes to exit. Mar 11 02:23:53.162610 systemd-logind[1545]: Removed session 5. Mar 11 02:23:53.189656 sshd[1708]: Accepted publickey for core from 10.0.0.1 port 58840 ssh2: RSA SHA256:CCKsrvYJZx5/gL+R4PqiGPMUodsOwVHZ8ifP8/vDZKQ Mar 11 02:23:53.191128 sshd[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:23:53.196640 systemd-logind[1545]: New session 6 of user core. Mar 11 02:23:53.203618 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 11 02:23:53.260769 sudo[1716]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 11 02:23:53.261183 sudo[1716]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 11 02:23:53.266914 sudo[1716]: pam_unix(sudo:session): session closed for user root Mar 11 02:23:53.276487 sudo[1715]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 11 02:23:53.276922 sudo[1715]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 11 02:23:53.297681 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 11 02:23:53.300618 auditctl[1719]: No rules Mar 11 02:23:53.301082 systemd[1]: audit-rules.service: Deactivated successfully. Mar 11 02:23:53.301480 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 11 02:23:53.307607 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 11 02:23:53.346644 augenrules[1738]: No rules Mar 11 02:23:53.348520 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 11 02:23:53.349857 sudo[1715]: pam_unix(sudo:session): session closed for user root Mar 11 02:23:53.352431 sshd[1708]: pam_unix(sshd:session): session closed for user core Mar 11 02:23:53.368654 systemd[1]: Started sshd@6-10.0.0.81:22-10.0.0.1:58846.service - OpenSSH per-connection server daemon (10.0.0.1:58846). Mar 11 02:23:53.369733 systemd[1]: sshd@5-10.0.0.81:22-10.0.0.1:58840.service: Deactivated successfully. Mar 11 02:23:53.372183 systemd-logind[1545]: Session 6 logged out. Waiting for processes to exit. Mar 11 02:23:53.372772 systemd[1]: session-6.scope: Deactivated successfully. Mar 11 02:23:53.374592 systemd-logind[1545]: Removed session 6. Mar 11 02:23:53.396947 sshd[1744]: Accepted publickey for core from 10.0.0.1 port 58846 ssh2: RSA SHA256:CCKsrvYJZx5/gL+R4PqiGPMUodsOwVHZ8ifP8/vDZKQ Mar 11 02:23:53.398792 sshd[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:23:53.404148 systemd-logind[1545]: New session 7 of user core. Mar 11 02:23:53.413660 systemd[1]: Started session-7.scope - Session 7 of User core. 
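The sudo invocations and the audit-rules restart above amount to deleting the shipped audit rule fragments and reloading an empty rule set, which is why both auditctl and augenrules report "No rules". A rough shell equivalent, for illustration only (the actual service wiring may differ):

    # Illustrative shell equivalent of the sequence logged above
    sudo rm -f /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
    sudo augenrules --load   # rebuild /etc/audit/audit.rules from rules.d and load it
    sudo auditctl -l         # list loaded rules; prints "No rules" for an empty set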
Mar 11 02:23:53.470752 sudo[1751]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 11 02:23:53.471181 sudo[1751]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 11 02:23:53.776580 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 11 02:23:53.780185 (dockerd)[1770]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 11 02:23:54.133903 dockerd[1770]: time="2026-03-11T02:23:54.133690991Z" level=info msg="Starting up" Mar 11 02:23:54.446739 dockerd[1770]: time="2026-03-11T02:23:54.446420502Z" level=info msg="Loading containers: start." Mar 11 02:23:54.646498 kernel: Initializing XFRM netlink socket Mar 11 02:23:54.786966 systemd-networkd[1248]: docker0: Link UP Mar 11 02:23:54.820433 dockerd[1770]: time="2026-03-11T02:23:54.820199547Z" level=info msg="Loading containers: done." Mar 11 02:23:54.839182 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1482427365-merged.mount: Deactivated successfully. Mar 11 02:23:54.842288 dockerd[1770]: time="2026-03-11T02:23:54.842188538Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 11 02:23:54.842494 dockerd[1770]: time="2026-03-11T02:23:54.842428546Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 11 02:23:54.842622 dockerd[1770]: time="2026-03-11T02:23:54.842562757Z" level=info msg="Daemon has completed initialization" Mar 11 02:23:54.908580 dockerd[1770]: time="2026-03-11T02:23:54.908408387Z" level=info msg="API listen on /run/docker.sock" Mar 11 02:23:54.908658 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 11 02:23:55.495487 containerd[1560]: time="2026-03-11T02:23:55.495431127Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\"" Mar 11 02:23:56.063843 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount275303667.mount: Deactivated successfully. 
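Once dockerd logs "API listen on /run/docker.sock" the daemon is reachable over that socket; a quick liveness check (illustrative, not part of this boot) would be:

    # Ping the Docker Engine API over its unix socket; a healthy daemon replies "OK"
    curl --unix-socket /run/docker.sock http://localhost/_ping
    # or, via the CLI, confirm the storage driver the daemon settled on (overlay2 here)
    docker info --format '{{.Driver}}'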
Mar 11 02:23:57.486610 containerd[1560]: time="2026-03-11T02:23:57.486464079Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:23:57.487839 containerd[1560]: time="2026-03-11T02:23:57.487779331Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=30116186" Mar 11 02:23:57.489557 containerd[1560]: time="2026-03-11T02:23:57.489487429Z" level=info msg="ImageCreate event name:\"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:23:57.494130 containerd[1560]: time="2026-03-11T02:23:57.494094979Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:23:57.496296 containerd[1560]: time="2026-03-11T02:23:57.496085898Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"30112785\" in 2.000606532s" Mar 11 02:23:57.496296 containerd[1560]: time="2026-03-11T02:23:57.496127917Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\"" Mar 11 02:23:57.497525 containerd[1560]: time="2026-03-11T02:23:57.497462966Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\"" Mar 11 02:23:58.959646 containerd[1560]: time="2026-03-11T02:23:58.959499477Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:23:58.960781 containerd[1560]: time="2026-03-11T02:23:58.960738337Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=26021810" Mar 11 02:23:58.962553 containerd[1560]: time="2026-03-11T02:23:58.962470802Z" level=info msg="ImageCreate event name:\"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:23:58.965785 containerd[1560]: time="2026-03-11T02:23:58.965700812Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:23:58.966941 containerd[1560]: time="2026-03-11T02:23:58.966863499Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"27678758\" in 1.469330553s" Mar 11 02:23:58.966941 containerd[1560]: time="2026-03-11T02:23:58.966933861Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\"" Mar 11 02:23:58.967859 
containerd[1560]: time="2026-03-11T02:23:58.967799126Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\"" Mar 11 02:24:00.299277 containerd[1560]: time="2026-03-11T02:24:00.298942746Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:00.300419 containerd[1560]: time="2026-03-11T02:24:00.300299634Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=20162746" Mar 11 02:24:00.301962 containerd[1560]: time="2026-03-11T02:24:00.301916461Z" level=info msg="ImageCreate event name:\"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:00.306228 containerd[1560]: time="2026-03-11T02:24:00.306136939Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:00.307497 containerd[1560]: time="2026-03-11T02:24:00.307253690Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"21819712\" in 1.339383311s" Mar 11 02:24:00.307497 containerd[1560]: time="2026-03-11T02:24:00.307475103Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\"" Mar 11 02:24:00.308464 containerd[1560]: time="2026-03-11T02:24:00.308226075Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\"" Mar 11 02:24:01.354132 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1818959898.mount: Deactivated successfully. 
Mar 11 02:24:01.860301 containerd[1560]: time="2026-03-11T02:24:01.860222283Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:01.861451 containerd[1560]: time="2026-03-11T02:24:01.861379809Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=31828647" Mar 11 02:24:01.862723 containerd[1560]: time="2026-03-11T02:24:01.862640347Z" level=info msg="ImageCreate event name:\"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:01.866447 containerd[1560]: time="2026-03-11T02:24:01.866255625Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:01.867253 containerd[1560]: time="2026-03-11T02:24:01.867171855Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"31827666\" in 1.558864649s" Mar 11 02:24:01.867253 containerd[1560]: time="2026-03-11T02:24:01.867233029Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\"" Mar 11 02:24:01.867868 containerd[1560]: time="2026-03-11T02:24:01.867841535Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Mar 11 02:24:01.998759 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 11 02:24:02.009573 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 11 02:24:02.189039 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 11 02:24:02.210116 (kubelet)[2004]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 11 02:24:02.288895 kubelet[2004]: E0311 02:24:02.288792 2004 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 11 02:24:02.294881 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 11 02:24:02.295235 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 11 02:24:02.497544 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1875789197.mount: Deactivated successfully. 
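The "Scheduled restart job, restart counter is at 1" line shows systemd re-running the still-unconfigured kubelet on a timer rather than giving up. That behaviour comes from the unit's restart policy; kubelet units typically carry settings along these lines (a sketch, not the actual Flatcar unit file):

    # Illustrative [Service] restart settings of the kind that produce
    # "Scheduled restart job" messages; the shipped kubelet.service may differ.
    [Service]
    Restart=always
    RestartSec=10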
Mar 11 02:24:03.595298 containerd[1560]: time="2026-03-11T02:24:03.595144046Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:03.596523 containerd[1560]: time="2026-03-11T02:24:03.596396771Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Mar 11 02:24:03.597816 containerd[1560]: time="2026-03-11T02:24:03.597743678Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:03.602005 containerd[1560]: time="2026-03-11T02:24:03.601855239Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:03.603411 containerd[1560]: time="2026-03-11T02:24:03.603288800Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.735343209s" Mar 11 02:24:03.603465 containerd[1560]: time="2026-03-11T02:24:03.603418812Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Mar 11 02:24:03.604270 containerd[1560]: time="2026-03-11T02:24:03.604127424Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 11 02:24:04.042583 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3914195707.mount: Deactivated successfully. 
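The control-plane images pulled above are stored in containerd's k8s.io namespace, so they can be listed or pre-pulled by hand with ctr (or crictl); for example:

    # List the images containerd now holds for Kubernetes
    ctr --namespace k8s.io images ls
    # Pull one manually, the same way the CRI plugin does
    ctr --namespace k8s.io images pull registry.k8s.io/pause:3.10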
Mar 11 02:24:04.049787 containerd[1560]: time="2026-03-11T02:24:04.049704017Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:04.050895 containerd[1560]: time="2026-03-11T02:24:04.050759956Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Mar 11 02:24:04.052739 containerd[1560]: time="2026-03-11T02:24:04.052462402Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:04.055948 containerd[1560]: time="2026-03-11T02:24:04.055851252Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:04.057193 containerd[1560]: time="2026-03-11T02:24:04.057121503Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 452.918107ms" Mar 11 02:24:04.057193 containerd[1560]: time="2026-03-11T02:24:04.057177448Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Mar 11 02:24:04.058410 containerd[1560]: time="2026-03-11T02:24:04.058276929Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Mar 11 02:24:04.529651 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2375110611.mount: Deactivated successfully. Mar 11 02:24:05.827905 containerd[1560]: time="2026-03-11T02:24:05.827680182Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:05.829436 containerd[1560]: time="2026-03-11T02:24:05.829207524Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718840" Mar 11 02:24:05.830976 containerd[1560]: time="2026-03-11T02:24:05.830910695Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:05.835444 containerd[1560]: time="2026-03-11T02:24:05.835395704Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:05.836808 containerd[1560]: time="2026-03-11T02:24:05.836708406Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 1.778219831s" Mar 11 02:24:05.836808 containerd[1560]: time="2026-03-11T02:24:05.836791271Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Mar 11 02:24:09.899272 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 11 02:24:09.916659 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 11 02:24:09.946214 systemd[1]: Reloading requested from client PID 2166 ('systemctl') (unit session-7.scope)... Mar 11 02:24:09.946259 systemd[1]: Reloading... Mar 11 02:24:10.036440 zram_generator::config[2205]: No configuration found. Mar 11 02:24:10.173436 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 11 02:24:10.247858 systemd[1]: Reloading finished in 301 ms. Mar 11 02:24:10.305079 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 11 02:24:10.305265 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 11 02:24:10.305725 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 11 02:24:10.314506 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 11 02:24:10.488525 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 11 02:24:10.497026 (kubelet)[2265]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 11 02:24:10.565926 kubelet[2265]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 11 02:24:10.565926 kubelet[2265]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 11 02:24:10.565926 kubelet[2265]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
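The ListenStream= warning emitted during the reload is harmless, since systemd rewrites the legacy /var/run/docker.sock path to /run/docker.sock itself, but it can be silenced with a small drop-in; a sketch (file name illustrative):

    # /etc/systemd/system/docker.socket.d/10-runpath.conf (illustrative)
    [Socket]
    ListenStream=
    ListenStream=/run/docker.sock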
Mar 11 02:24:10.566636 kubelet[2265]: I0311 02:24:10.566079 2265 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 11 02:24:11.451589 kubelet[2265]: I0311 02:24:11.451227 2265 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 11 02:24:11.451589 kubelet[2265]: I0311 02:24:11.451281 2265 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 11 02:24:11.451589 kubelet[2265]: I0311 02:24:11.451535 2265 server.go:956] "Client rotation is on, will bootstrap in background" Mar 11 02:24:11.478093 kubelet[2265]: I0311 02:24:11.477815 2265 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 11 02:24:11.479010 kubelet[2265]: E0311 02:24:11.478753 2265 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.81:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 11 02:24:11.483822 kubelet[2265]: E0311 02:24:11.483753 2265 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 11 02:24:11.483875 kubelet[2265]: I0311 02:24:11.483830 2265 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 11 02:24:11.493460 kubelet[2265]: I0311 02:24:11.493229 2265 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 11 02:24:11.494572 kubelet[2265]: I0311 02:24:11.494460 2265 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 11 02:24:11.494679 kubelet[2265]: I0311 02:24:11.494521 2265 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Mar 11 02:24:11.494805 kubelet[2265]: I0311 02:24:11.494685 2265 topology_manager.go:138] "Creating topology manager with none policy" Mar 11 02:24:11.494805 kubelet[2265]: I0311 02:24:11.494694 2265 container_manager_linux.go:303] "Creating device plugin manager" Mar 11 02:24:11.494855 kubelet[2265]: I0311 02:24:11.494814 2265 state_mem.go:36] "Initialized new in-memory state store" Mar 11 02:24:11.498904 kubelet[2265]: I0311 02:24:11.498778 2265 kubelet.go:480] "Attempting to sync node with API server" Mar 11 02:24:11.498904 kubelet[2265]: I0311 02:24:11.498829 2265 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 11 02:24:11.498904 kubelet[2265]: I0311 02:24:11.498853 2265 kubelet.go:386] "Adding apiserver pod source" Mar 11 02:24:11.500781 kubelet[2265]: I0311 02:24:11.500694 2265 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 11 02:24:11.503523 kubelet[2265]: I0311 02:24:11.503482 2265 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 11 02:24:11.504040 kubelet[2265]: I0311 02:24:11.503958 2265 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 11 02:24:11.504604 kubelet[2265]: E0311 02:24:11.504487 2265 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.81:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 
11 02:24:11.504604 kubelet[2265]: E0311 02:24:11.504583 2265 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.81:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 11 02:24:11.505595 kubelet[2265]: W0311 02:24:11.504797 2265 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 11 02:24:11.510094 kubelet[2265]: I0311 02:24:11.510034 2265 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 11 02:24:11.510196 kubelet[2265]: I0311 02:24:11.510160 2265 server.go:1289] "Started kubelet" Mar 11 02:24:11.510840 kubelet[2265]: I0311 02:24:11.510753 2265 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 11 02:24:11.511201 kubelet[2265]: I0311 02:24:11.510218 2265 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 11 02:24:11.511243 kubelet[2265]: I0311 02:24:11.511219 2265 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 11 02:24:11.513865 kubelet[2265]: I0311 02:24:11.513432 2265 server.go:317] "Adding debug handlers to kubelet server" Mar 11 02:24:11.517410 kubelet[2265]: I0311 02:24:11.514820 2265 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 11 02:24:11.517410 kubelet[2265]: E0311 02:24:11.513687 2265 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.81:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.81:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189ba83b6e743f12 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-11 02:24:11.510071058 +0000 UTC m=+1.004244574,LastTimestamp:2026-03-11 02:24:11.510071058 +0000 UTC m=+1.004244574,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 11 02:24:11.517410 kubelet[2265]: I0311 02:24:11.515927 2265 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 11 02:24:11.517410 kubelet[2265]: I0311 02:24:11.515987 2265 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 11 02:24:11.517410 kubelet[2265]: I0311 02:24:11.516158 2265 reconciler.go:26] "Reconciler: start to sync state" Mar 11 02:24:11.517410 kubelet[2265]: E0311 02:24:11.516750 2265 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.81:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 11 02:24:11.517410 kubelet[2265]: E0311 02:24:11.516911 2265 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 11 02:24:11.517410 kubelet[2265]: I0311 02:24:11.516944 2265 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 11 02:24:11.517672 kubelet[2265]: E0311 02:24:11.517052 2265 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.81:6443: connect: connection refused" interval="200ms" Mar 11 02:24:11.521238 kubelet[2265]: I0311 02:24:11.520679 2265 factory.go:223] Registration of the systemd container factory successfully Mar 11 02:24:11.521238 kubelet[2265]: I0311 02:24:11.520863 2265 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 11 02:24:11.522155 kubelet[2265]: E0311 02:24:11.521961 2265 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 11 02:24:11.523642 kubelet[2265]: I0311 02:24:11.523267 2265 factory.go:223] Registration of the containerd container factory successfully Mar 11 02:24:11.561781 kubelet[2265]: I0311 02:24:11.561698 2265 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 11 02:24:11.561781 kubelet[2265]: I0311 02:24:11.561712 2265 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 11 02:24:11.561781 kubelet[2265]: I0311 02:24:11.561728 2265 state_mem.go:36] "Initialized new in-memory state store" Mar 11 02:24:11.564423 kubelet[2265]: I0311 02:24:11.564281 2265 policy_none.go:49] "None policy: Start" Mar 11 02:24:11.564481 kubelet[2265]: I0311 02:24:11.564430 2265 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 11 02:24:11.564481 kubelet[2265]: I0311 02:24:11.564447 2265 state_mem.go:35] "Initializing new in-memory state store" Mar 11 02:24:11.565623 kubelet[2265]: I0311 02:24:11.565532 2265 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 11 02:24:11.569471 kubelet[2265]: I0311 02:24:11.569235 2265 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 11 02:24:11.569471 kubelet[2265]: I0311 02:24:11.569456 2265 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 11 02:24:11.570427 kubelet[2265]: I0311 02:24:11.569484 2265 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 11 02:24:11.570427 kubelet[2265]: I0311 02:24:11.569494 2265 kubelet.go:2436] "Starting kubelet main sync loop" Mar 11 02:24:11.570427 kubelet[2265]: E0311 02:24:11.569550 2265 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 11 02:24:11.573399 kubelet[2265]: E0311 02:24:11.570598 2265 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.81:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 11 02:24:11.574606 kubelet[2265]: E0311 02:24:11.574541 2265 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 11 02:24:11.574963 kubelet[2265]: I0311 02:24:11.574872 2265 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 11 02:24:11.574963 kubelet[2265]: I0311 02:24:11.574935 2265 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 11 02:24:11.575680 kubelet[2265]: I0311 02:24:11.575569 2265 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 11 02:24:11.578450 kubelet[2265]: E0311 02:24:11.578429 2265 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 11 02:24:11.578558 kubelet[2265]: E0311 02:24:11.578542 2265 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 11 02:24:11.676488 kubelet[2265]: I0311 02:24:11.676407 2265 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 11 02:24:11.677847 kubelet[2265]: E0311 02:24:11.677569 2265 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.81:6443/api/v1/nodes\": dial tcp 10.0.0.81:6443: connect: connection refused" node="localhost" Mar 11 02:24:11.681759 kubelet[2265]: E0311 02:24:11.681569 2265 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 11 02:24:11.685097 kubelet[2265]: E0311 02:24:11.685000 2265 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 11 02:24:11.689541 kubelet[2265]: E0311 02:24:11.689298 2265 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 11 02:24:11.717678 kubelet[2265]: I0311 02:24:11.717406 2265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 11 02:24:11.717678 kubelet[2265]: I0311 02:24:11.717461 2265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: 
\"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 11 02:24:11.717678 kubelet[2265]: I0311 02:24:11.717479 2265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e944e4cb17af904786c3a2e01e298498-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e944e4cb17af904786c3a2e01e298498\") " pod="kube-system/kube-scheduler-localhost" Mar 11 02:24:11.717678 kubelet[2265]: I0311 02:24:11.717492 2265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7b09405a13e988e25ff3ffc583ed89a5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7b09405a13e988e25ff3ffc583ed89a5\") " pod="kube-system/kube-apiserver-localhost" Mar 11 02:24:11.717678 kubelet[2265]: I0311 02:24:11.717505 2265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7b09405a13e988e25ff3ffc583ed89a5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7b09405a13e988e25ff3ffc583ed89a5\") " pod="kube-system/kube-apiserver-localhost" Mar 11 02:24:11.717859 kubelet[2265]: I0311 02:24:11.717520 2265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7b09405a13e988e25ff3ffc583ed89a5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7b09405a13e988e25ff3ffc583ed89a5\") " pod="kube-system/kube-apiserver-localhost" Mar 11 02:24:11.717859 kubelet[2265]: I0311 02:24:11.717533 2265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 11 02:24:11.717859 kubelet[2265]: I0311 02:24:11.717544 2265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 11 02:24:11.717859 kubelet[2265]: I0311 02:24:11.717557 2265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 11 02:24:11.717939 kubelet[2265]: E0311 02:24:11.717861 2265 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.81:6443: connect: connection refused" interval="400ms" Mar 11 02:24:11.880197 kubelet[2265]: I0311 02:24:11.879956 2265 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 11 02:24:11.880698 kubelet[2265]: E0311 02:24:11.880576 2265 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.81:6443/api/v1/nodes\": dial tcp 
10.0.0.81:6443: connect: connection refused" node="localhost" Mar 11 02:24:11.982555 kubelet[2265]: E0311 02:24:11.982257 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:24:11.983462 containerd[1560]: time="2026-03-11T02:24:11.983204465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7b09405a13e988e25ff3ffc583ed89a5,Namespace:kube-system,Attempt:0,}" Mar 11 02:24:11.986851 kubelet[2265]: E0311 02:24:11.986732 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:24:11.987611 containerd[1560]: time="2026-03-11T02:24:11.987499687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8747e1f8a49a618fbc1324a8fe2d3754,Namespace:kube-system,Attempt:0,}" Mar 11 02:24:11.990678 kubelet[2265]: E0311 02:24:11.990448 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:24:11.991062 containerd[1560]: time="2026-03-11T02:24:11.990990175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e944e4cb17af904786c3a2e01e298498,Namespace:kube-system,Attempt:0,}" Mar 11 02:24:12.118982 kubelet[2265]: E0311 02:24:12.118896 2265 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.81:6443: connect: connection refused" interval="800ms" Mar 11 02:24:12.283109 kubelet[2265]: I0311 02:24:12.282867 2265 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 11 02:24:12.283256 kubelet[2265]: E0311 02:24:12.283222 2265 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.81:6443/api/v1/nodes\": dial tcp 10.0.0.81:6443: connect: connection refused" node="localhost" Mar 11 02:24:12.386666 kubelet[2265]: E0311 02:24:12.386526 2265 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.81:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 11 02:24:12.419665 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount378935399.mount: Deactivated successfully. 
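
The recurring dns.go "Nameserver limits exceeded" errors above mean the host resolv.conf lists more nameservers than the three the kubelet will propagate into pod resolv.conf files; the applied line keeps 1.1.1.1 1.0.0.1 8.8.8.8 and drops the rest. A small sketch, assuming the conventional /etc/resolv.conf path, that counts the configured nameservers:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	// The kubelet keeps at most three nameservers when building a pod's
	// resolv.conf; extra entries trigger the warning seen in the log above.
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	fmt.Printf("found %d nameservers; kubelet will use at most 3: %v\n",
		len(servers), servers)
}
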
Mar 11 02:24:12.430720 containerd[1560]: time="2026-03-11T02:24:12.430628356Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 11 02:24:12.434876 containerd[1560]: time="2026-03-11T02:24:12.434834597Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Mar 11 02:24:12.436289 containerd[1560]: time="2026-03-11T02:24:12.436090659Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 11 02:24:12.437484 containerd[1560]: time="2026-03-11T02:24:12.437404861Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 11 02:24:12.438515 containerd[1560]: time="2026-03-11T02:24:12.438448183Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 11 02:24:12.439875 containerd[1560]: time="2026-03-11T02:24:12.439794254Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 11 02:24:12.441176 containerd[1560]: time="2026-03-11T02:24:12.441083941Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 11 02:24:12.443100 containerd[1560]: time="2026-03-11T02:24:12.443023063Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 11 02:24:12.445438 containerd[1560]: time="2026-03-11T02:24:12.445277399Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 454.18815ms" Mar 11 02:24:12.447044 containerd[1560]: time="2026-03-11T02:24:12.446910103Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 459.308957ms" Mar 11 02:24:12.449764 containerd[1560]: time="2026-03-11T02:24:12.449618657Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 466.338981ms" Mar 11 02:24:12.457639 kubelet[2265]: E0311 02:24:12.457511 2265 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.81:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 11 02:24:12.482446 kubelet[2265]: E0311 02:24:12.482267 2265 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.81:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 11 02:24:12.556548 containerd[1560]: time="2026-03-11T02:24:12.555304172Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 02:24:12.556548 containerd[1560]: time="2026-03-11T02:24:12.555497944Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 02:24:12.556548 containerd[1560]: time="2026-03-11T02:24:12.555524283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:24:12.556548 containerd[1560]: time="2026-03-11T02:24:12.555617477Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:24:12.558783 containerd[1560]: time="2026-03-11T02:24:12.558426387Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 02:24:12.558783 containerd[1560]: time="2026-03-11T02:24:12.558483524Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 02:24:12.558783 containerd[1560]: time="2026-03-11T02:24:12.558506836Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:24:12.558783 containerd[1560]: time="2026-03-11T02:24:12.558593869Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:24:12.562507 containerd[1560]: time="2026-03-11T02:24:12.560688976Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 02:24:12.562507 containerd[1560]: time="2026-03-11T02:24:12.560728238Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 02:24:12.562507 containerd[1560]: time="2026-03-11T02:24:12.560741363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:24:12.562507 containerd[1560]: time="2026-03-11T02:24:12.560810893Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:24:12.634005 containerd[1560]: time="2026-03-11T02:24:12.633891705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7b09405a13e988e25ff3ffc583ed89a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"5132dc2c3823f44effcea289ae2ced714136346f4343e5cfe9ca79ca6a0ced93\"" Mar 11 02:24:12.636435 kubelet[2265]: E0311 02:24:12.636015 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:24:12.642770 kubelet[2265]: E0311 02:24:12.641431 2265 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.81:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 11 02:24:12.644632 containerd[1560]: time="2026-03-11T02:24:12.644605202Z" level=info msg="CreateContainer within sandbox \"5132dc2c3823f44effcea289ae2ced714136346f4343e5cfe9ca79ca6a0ced93\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 11 02:24:12.654279 containerd[1560]: time="2026-03-11T02:24:12.654099443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e944e4cb17af904786c3a2e01e298498,Namespace:kube-system,Attempt:0,} returns sandbox id \"50690e84187ff95a822e13257c7824cca78aa0e54bf52c9f66015b0ca9d24008\"" Mar 11 02:24:12.655577 kubelet[2265]: E0311 02:24:12.655542 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:24:12.656860 containerd[1560]: time="2026-03-11T02:24:12.656750512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8747e1f8a49a618fbc1324a8fe2d3754,Namespace:kube-system,Attempt:0,} returns sandbox id \"041eec314dd8dae2f6e7d6d1067ac9e0c4b8a3cfa52fdf5d5ac858b9d1f6c648\"" Mar 11 02:24:12.657568 kubelet[2265]: E0311 02:24:12.657418 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:24:12.660898 containerd[1560]: time="2026-03-11T02:24:12.660806841Z" level=info msg="CreateContainer within sandbox \"50690e84187ff95a822e13257c7824cca78aa0e54bf52c9f66015b0ca9d24008\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 11 02:24:12.664851 containerd[1560]: time="2026-03-11T02:24:12.664758985Z" level=info msg="CreateContainer within sandbox \"041eec314dd8dae2f6e7d6d1067ac9e0c4b8a3cfa52fdf5d5ac858b9d1f6c648\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 11 02:24:12.676091 containerd[1560]: time="2026-03-11T02:24:12.676066185Z" level=info msg="CreateContainer within sandbox \"5132dc2c3823f44effcea289ae2ced714136346f4343e5cfe9ca79ca6a0ced93\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4facf9adcf3989a13f949adcd702a91b3602268b6288182b2d5cd84c6dedfdec\"" Mar 11 02:24:12.677015 containerd[1560]: time="2026-03-11T02:24:12.676936740Z" level=info msg="StartContainer for \"4facf9adcf3989a13f949adcd702a91b3602268b6288182b2d5cd84c6dedfdec\"" Mar 11 02:24:12.690544 containerd[1560]: time="2026-03-11T02:24:12.690451729Z" level=info 
msg="CreateContainer within sandbox \"041eec314dd8dae2f6e7d6d1067ac9e0c4b8a3cfa52fdf5d5ac858b9d1f6c648\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9c4ff0748b52d410d343fa4f059d4ff6e86a181274add75a50457530ac3ef227\"" Mar 11 02:24:12.691571 containerd[1560]: time="2026-03-11T02:24:12.690842321Z" level=info msg="StartContainer for \"9c4ff0748b52d410d343fa4f059d4ff6e86a181274add75a50457530ac3ef227\"" Mar 11 02:24:12.694180 containerd[1560]: time="2026-03-11T02:24:12.694010171Z" level=info msg="CreateContainer within sandbox \"50690e84187ff95a822e13257c7824cca78aa0e54bf52c9f66015b0ca9d24008\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7d97674bada3975556aae3452e675db22021e5f9a56dca2f5c8dda9f5397ca0d\"" Mar 11 02:24:12.694797 containerd[1560]: time="2026-03-11T02:24:12.694599718Z" level=info msg="StartContainer for \"7d97674bada3975556aae3452e675db22021e5f9a56dca2f5c8dda9f5397ca0d\"" Mar 11 02:24:12.777774 containerd[1560]: time="2026-03-11T02:24:12.777702277Z" level=info msg="StartContainer for \"4facf9adcf3989a13f949adcd702a91b3602268b6288182b2d5cd84c6dedfdec\" returns successfully" Mar 11 02:24:12.830899 containerd[1560]: time="2026-03-11T02:24:12.830105417Z" level=info msg="StartContainer for \"9c4ff0748b52d410d343fa4f059d4ff6e86a181274add75a50457530ac3ef227\" returns successfully" Mar 11 02:24:12.838192 containerd[1560]: time="2026-03-11T02:24:12.838058634Z" level=info msg="StartContainer for \"7d97674bada3975556aae3452e675db22021e5f9a56dca2f5c8dda9f5397ca0d\" returns successfully" Mar 11 02:24:13.088790 kubelet[2265]: I0311 02:24:13.085535 2265 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 11 02:24:13.582535 kubelet[2265]: E0311 02:24:13.582076 2265 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 11 02:24:13.582535 kubelet[2265]: E0311 02:24:13.582286 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:24:13.585938 kubelet[2265]: E0311 02:24:13.585841 2265 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 11 02:24:13.586226 kubelet[2265]: E0311 02:24:13.586015 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:24:13.589366 kubelet[2265]: E0311 02:24:13.589258 2265 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 11 02:24:13.589603 kubelet[2265]: E0311 02:24:13.589522 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:24:14.427939 kubelet[2265]: E0311 02:24:14.427866 2265 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 11 02:24:14.531206 kubelet[2265]: I0311 02:24:14.530506 2265 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 11 02:24:14.531206 kubelet[2265]: E0311 02:24:14.530549 2265 kubelet_node_status.go:548] "Error updating node status, will 
retry" err="error getting node \"localhost\": node \"localhost\" not found" Mar 11 02:24:14.547223 kubelet[2265]: E0311 02:24:14.547018 2265 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 11 02:24:14.591450 kubelet[2265]: E0311 02:24:14.591392 2265 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 11 02:24:14.591450 kubelet[2265]: E0311 02:24:14.591305 2265 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 11 02:24:14.591619 kubelet[2265]: E0311 02:24:14.591542 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:24:14.591619 kubelet[2265]: E0311 02:24:14.591543 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:24:14.648097 kubelet[2265]: E0311 02:24:14.647905 2265 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 11 02:24:14.748999 kubelet[2265]: E0311 02:24:14.748825 2265 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 11 02:24:14.850475 kubelet[2265]: E0311 02:24:14.850265 2265 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 11 02:24:14.950896 kubelet[2265]: E0311 02:24:14.950830 2265 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 11 02:24:15.051870 kubelet[2265]: E0311 02:24:15.051745 2265 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 11 02:24:15.152026 kubelet[2265]: E0311 02:24:15.151954 2265 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 11 02:24:15.253048 kubelet[2265]: E0311 02:24:15.252933 2265 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 11 02:24:15.354000 kubelet[2265]: E0311 02:24:15.353774 2265 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 11 02:24:15.505642 kubelet[2265]: I0311 02:24:15.505465 2265 apiserver.go:52] "Watching apiserver" Mar 11 02:24:15.516661 kubelet[2265]: I0311 02:24:15.516625 2265 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 11 02:24:15.518069 kubelet[2265]: I0311 02:24:15.517912 2265 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 11 02:24:15.535736 kubelet[2265]: I0311 02:24:15.535468 2265 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 11 02:24:15.546813 kubelet[2265]: I0311 02:24:15.546733 2265 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 11 02:24:15.548421 kubelet[2265]: E0311 02:24:15.548104 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 
02:24:15.592892 kubelet[2265]: I0311 02:24:15.592673 2265 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 11 02:24:15.593008 kubelet[2265]: E0311 02:24:15.592988 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:24:15.605401 kubelet[2265]: E0311 02:24:15.604970 2265 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Mar 11 02:24:15.605401 kubelet[2265]: E0311 02:24:15.605262 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:24:16.596096 kubelet[2265]: E0311 02:24:16.595832 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:24:16.951746 systemd[1]: Reloading requested from client PID 2551 ('systemctl') (unit session-7.scope)... Mar 11 02:24:16.951812 systemd[1]: Reloading... Mar 11 02:24:17.039509 zram_generator::config[2593]: No configuration found. Mar 11 02:24:17.160297 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 11 02:24:17.272801 systemd[1]: Reloading finished in 320 ms. Mar 11 02:24:17.318576 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 11 02:24:17.335966 systemd[1]: kubelet.service: Deactivated successfully. Mar 11 02:24:17.336502 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 11 02:24:17.347612 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 11 02:24:17.559565 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 11 02:24:17.574942 (kubelet)[2645]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 11 02:24:17.656216 kubelet[2645]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 11 02:24:17.656216 kubelet[2645]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 11 02:24:17.656216 kubelet[2645]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
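
After the systemd reload the kubelet is restarted and re-reads its static pod manifests, the same pods whose mirror pods were being created just above; later in this log it reports "Adding static pod path" path="/etc/kubernetes/manifests". A minimal sketch that lists that directory to see which static pods the node will run:

package main

import (
	"fmt"
	"os"
)

func main() {
	// Static pod manifest directory, taken from the "Adding static pod path"
	// record in this log; the file names below are illustrative.
	entries, err := os.ReadDir("/etc/kubernetes/manifests")
	if err != nil {
		fmt.Println("cannot read manifests dir:", err)
		return
	}
	for _, e := range entries {
		// e.g. kube-apiserver.yaml, kube-controller-manager.yaml, kube-scheduler.yaml
		fmt.Println(e.Name())
	}
}
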
Mar 11 02:24:17.656793 kubelet[2645]: I0311 02:24:17.656211 2645 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 11 02:24:17.667461 kubelet[2645]: I0311 02:24:17.667408 2645 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 11 02:24:17.667461 kubelet[2645]: I0311 02:24:17.667453 2645 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 11 02:24:17.667720 kubelet[2645]: I0311 02:24:17.667644 2645 server.go:956] "Client rotation is on, will bootstrap in background" Mar 11 02:24:17.668963 kubelet[2645]: I0311 02:24:17.668901 2645 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 11 02:24:17.672666 kubelet[2645]: I0311 02:24:17.672413 2645 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 11 02:24:17.681504 kubelet[2645]: E0311 02:24:17.680989 2645 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 11 02:24:17.681504 kubelet[2645]: I0311 02:24:17.681454 2645 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 11 02:24:17.691258 kubelet[2645]: I0311 02:24:17.691127 2645 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 11 02:24:17.692228 kubelet[2645]: I0311 02:24:17.692041 2645 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 11 02:24:17.692564 kubelet[2645]: I0311 02:24:17.692120 2645 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Mar 11 02:24:17.692564 kubelet[2645]: I0311 02:24:17.692557 2645 topology_manager.go:138] "Creating topology 
manager with none policy" Mar 11 02:24:17.692867 kubelet[2645]: I0311 02:24:17.692573 2645 container_manager_linux.go:303] "Creating device plugin manager" Mar 11 02:24:17.692867 kubelet[2645]: I0311 02:24:17.692631 2645 state_mem.go:36] "Initialized new in-memory state store" Mar 11 02:24:17.692943 kubelet[2645]: I0311 02:24:17.692930 2645 kubelet.go:480] "Attempting to sync node with API server" Mar 11 02:24:17.692984 kubelet[2645]: I0311 02:24:17.692950 2645 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 11 02:24:17.692984 kubelet[2645]: I0311 02:24:17.692982 2645 kubelet.go:386] "Adding apiserver pod source" Mar 11 02:24:17.693041 kubelet[2645]: I0311 02:24:17.693002 2645 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 11 02:24:17.696284 kubelet[2645]: I0311 02:24:17.695947 2645 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 11 02:24:17.697703 kubelet[2645]: I0311 02:24:17.697603 2645 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 11 02:24:17.709117 kubelet[2645]: I0311 02:24:17.706273 2645 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 11 02:24:17.709117 kubelet[2645]: I0311 02:24:17.706567 2645 server.go:1289] "Started kubelet" Mar 11 02:24:17.709117 kubelet[2645]: I0311 02:24:17.707472 2645 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 11 02:24:17.709117 kubelet[2645]: I0311 02:24:17.707489 2645 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 11 02:24:17.709117 kubelet[2645]: I0311 02:24:17.708623 2645 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 11 02:24:17.709948 kubelet[2645]: I0311 02:24:17.709860 2645 server.go:317] "Adding debug handlers to kubelet server" Mar 11 02:24:17.719689 kubelet[2645]: I0311 02:24:17.719267 2645 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 11 02:24:17.725470 kubelet[2645]: I0311 02:24:17.719857 2645 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 11 02:24:17.727257 kubelet[2645]: I0311 02:24:17.725589 2645 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 11 02:24:17.728567 kubelet[2645]: I0311 02:24:17.725629 2645 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 11 02:24:17.728567 kubelet[2645]: I0311 02:24:17.727744 2645 reconciler.go:26] "Reconciler: start to sync state" Mar 11 02:24:17.732470 kubelet[2645]: E0311 02:24:17.732268 2645 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 11 02:24:17.733656 kubelet[2645]: I0311 02:24:17.733635 2645 factory.go:223] Registration of the systemd container factory successfully Mar 11 02:24:17.734012 kubelet[2645]: I0311 02:24:17.733989 2645 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 11 02:24:17.739105 kubelet[2645]: I0311 02:24:17.739007 2645 factory.go:223] Registration of the containerd container factory successfully Mar 11 02:24:17.764534 kubelet[2645]: I0311 02:24:17.764383 2645 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 11 02:24:17.768545 kubelet[2645]: I0311 02:24:17.768437 2645 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 11 02:24:17.768545 kubelet[2645]: I0311 02:24:17.768520 2645 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 11 02:24:17.768698 kubelet[2645]: I0311 02:24:17.768549 2645 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 11 02:24:17.768698 kubelet[2645]: I0311 02:24:17.768563 2645 kubelet.go:2436] "Starting kubelet main sync loop" Mar 11 02:24:17.768698 kubelet[2645]: E0311 02:24:17.768621 2645 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 11 02:24:17.819891 kubelet[2645]: I0311 02:24:17.819451 2645 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 11 02:24:17.819891 kubelet[2645]: I0311 02:24:17.819470 2645 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 11 02:24:17.819891 kubelet[2645]: I0311 02:24:17.819488 2645 state_mem.go:36] "Initialized new in-memory state store" Mar 11 02:24:17.819891 kubelet[2645]: I0311 02:24:17.819607 2645 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 11 02:24:17.819891 kubelet[2645]: I0311 02:24:17.819617 2645 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 11 02:24:17.819891 kubelet[2645]: I0311 02:24:17.819638 2645 policy_none.go:49] "None policy: Start" Mar 11 02:24:17.819891 kubelet[2645]: I0311 02:24:17.819648 2645 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 11 02:24:17.819891 kubelet[2645]: I0311 02:24:17.819657 2645 state_mem.go:35] "Initializing new in-memory state store" Mar 11 02:24:17.819891 kubelet[2645]: I0311 02:24:17.819729 2645 state_mem.go:75] "Updated machine memory state" Mar 11 02:24:17.821443 kubelet[2645]: E0311 02:24:17.821284 2645 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 11 02:24:17.822787 kubelet[2645]: I0311 02:24:17.822595 2645 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 11 02:24:17.822787 kubelet[2645]: I0311 02:24:17.822644 2645 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 11 02:24:17.823744 kubelet[2645]: I0311 02:24:17.822926 2645 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 11 02:24:17.824091 kubelet[2645]: E0311 02:24:17.824025 2645 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 11 02:24:17.870012 kubelet[2645]: I0311 02:24:17.869834 2645 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 11 02:24:17.870012 kubelet[2645]: I0311 02:24:17.869943 2645 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 11 02:24:17.872061 kubelet[2645]: I0311 02:24:17.871451 2645 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 11 02:24:17.887013 kubelet[2645]: E0311 02:24:17.886632 2645 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Mar 11 02:24:17.887748 kubelet[2645]: E0311 02:24:17.887474 2645 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Mar 11 02:24:17.889058 kubelet[2645]: E0311 02:24:17.888742 2645 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 11 02:24:17.929401 kubelet[2645]: I0311 02:24:17.929122 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7b09405a13e988e25ff3ffc583ed89a5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7b09405a13e988e25ff3ffc583ed89a5\") " pod="kube-system/kube-apiserver-localhost" Mar 11 02:24:17.929401 kubelet[2645]: I0311 02:24:17.929258 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 11 02:24:17.929401 kubelet[2645]: I0311 02:24:17.929293 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7b09405a13e988e25ff3ffc583ed89a5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7b09405a13e988e25ff3ffc583ed89a5\") " pod="kube-system/kube-apiserver-localhost" Mar 11 02:24:17.929633 kubelet[2645]: I0311 02:24:17.929415 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 11 02:24:17.929633 kubelet[2645]: I0311 02:24:17.929448 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 11 02:24:17.929633 kubelet[2645]: I0311 02:24:17.929478 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: 
\"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 11 02:24:17.929633 kubelet[2645]: I0311 02:24:17.929501 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 11 02:24:17.929633 kubelet[2645]: I0311 02:24:17.929522 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e944e4cb17af904786c3a2e01e298498-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e944e4cb17af904786c3a2e01e298498\") " pod="kube-system/kube-scheduler-localhost" Mar 11 02:24:17.929802 kubelet[2645]: I0311 02:24:17.929545 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7b09405a13e988e25ff3ffc583ed89a5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7b09405a13e988e25ff3ffc583ed89a5\") " pod="kube-system/kube-apiserver-localhost" Mar 11 02:24:17.937743 kubelet[2645]: I0311 02:24:17.937686 2645 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 11 02:24:17.953498 kubelet[2645]: I0311 02:24:17.952929 2645 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Mar 11 02:24:17.953498 kubelet[2645]: I0311 02:24:17.953005 2645 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 11 02:24:18.187734 kubelet[2645]: E0311 02:24:18.187579 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:24:18.188790 kubelet[2645]: E0311 02:24:18.188297 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:24:18.189782 kubelet[2645]: E0311 02:24:18.189722 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:24:18.693895 kubelet[2645]: I0311 02:24:18.693743 2645 apiserver.go:52] "Watching apiserver" Mar 11 02:24:18.728971 kubelet[2645]: I0311 02:24:18.728791 2645 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 11 02:24:18.800431 kubelet[2645]: I0311 02:24:18.793844 2645 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 11 02:24:18.800431 kubelet[2645]: E0311 02:24:18.794947 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:24:18.800431 kubelet[2645]: E0311 02:24:18.793898 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:24:18.847271 kubelet[2645]: E0311 02:24:18.847053 2645 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 11 02:24:18.849975 kubelet[2645]: E0311 
02:24:18.847477 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:24:18.890026 kubelet[2645]: I0311 02:24:18.889501 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.889486468 podStartE2EDuration="3.889486468s" podCreationTimestamp="2026-03-11 02:24:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 02:24:18.877479256 +0000 UTC m=+1.294064709" watchObservedRunningTime="2026-03-11 02:24:18.889486468 +0000 UTC m=+1.306071921" Mar 11 02:24:18.906106 kubelet[2645]: I0311 02:24:18.905933 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.9059160090000002 podStartE2EDuration="3.905916009s" podCreationTimestamp="2026-03-11 02:24:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 02:24:18.904064563 +0000 UTC m=+1.320650016" watchObservedRunningTime="2026-03-11 02:24:18.905916009 +0000 UTC m=+1.322501462" Mar 11 02:24:18.906495 kubelet[2645]: I0311 02:24:18.906140 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.90613118 podStartE2EDuration="3.90613118s" podCreationTimestamp="2026-03-11 02:24:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 02:24:18.889698423 +0000 UTC m=+1.306283877" watchObservedRunningTime="2026-03-11 02:24:18.90613118 +0000 UTC m=+1.322716634" Mar 11 02:24:19.796239 kubelet[2645]: E0311 02:24:19.795774 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:24:19.796239 kubelet[2645]: E0311 02:24:19.795790 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:24:21.787746 kubelet[2645]: E0311 02:24:21.787646 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:24:21.856227 kubelet[2645]: E0311 02:24:21.856094 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:24:22.398041 kubelet[2645]: I0311 02:24:22.397931 2645 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 11 02:24:22.398660 containerd[1560]: time="2026-03-11T02:24:22.398566291Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
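
The pod_startup_latency_tracker entries above derive their durations from the timestamps they print; for these static control-plane pods there is no image pull, so podStartSLOduration equals podStartE2EDuration. A short sketch, using the kube-apiserver-localhost timestamps copied from the log (with the trailing " m=+..." monotonic reading stripped before parsing), that reproduces the logged 3.889486468s figure:

package main

import (
	"fmt"
	"strings"
	"time"
)

func main() {
	// Timestamps copied from the pod_startup_latency_tracker record above.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created := "2026-03-11 02:24:15 +0000 UTC"
	observed := "2026-03-11 02:24:18.889486468 +0000 UTC m=+1.306071921"

	c, err := time.Parse(layout, created)
	if err != nil {
		panic(err)
	}
	o, err := time.Parse(layout, strings.Split(observed, " m=")[0])
	if err != nil {
		panic(err)
	}
	// Prints 3.889486468s, matching the logged podStartE2EDuration.
	fmt.Println("pod startup duration:", o.Sub(c))
}
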
Mar 11 02:24:22.399549 kubelet[2645]: I0311 02:24:22.399486 2645 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 11 02:24:23.271094 kubelet[2645]: I0311 02:24:23.270915 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c413c382-a8fd-4d75-97d8-5e1ed79a4c41-kube-proxy\") pod \"kube-proxy-zxzgn\" (UID: \"c413c382-a8fd-4d75-97d8-5e1ed79a4c41\") " pod="kube-system/kube-proxy-zxzgn" Mar 11 02:24:23.271094 kubelet[2645]: I0311 02:24:23.271003 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c413c382-a8fd-4d75-97d8-5e1ed79a4c41-xtables-lock\") pod \"kube-proxy-zxzgn\" (UID: \"c413c382-a8fd-4d75-97d8-5e1ed79a4c41\") " pod="kube-system/kube-proxy-zxzgn" Mar 11 02:24:23.271094 kubelet[2645]: I0311 02:24:23.271030 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c413c382-a8fd-4d75-97d8-5e1ed79a4c41-lib-modules\") pod \"kube-proxy-zxzgn\" (UID: \"c413c382-a8fd-4d75-97d8-5e1ed79a4c41\") " pod="kube-system/kube-proxy-zxzgn" Mar 11 02:24:23.271094 kubelet[2645]: I0311 02:24:23.271056 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwg6j\" (UniqueName: \"kubernetes.io/projected/c413c382-a8fd-4d75-97d8-5e1ed79a4c41-kube-api-access-rwg6j\") pod \"kube-proxy-zxzgn\" (UID: \"c413c382-a8fd-4d75-97d8-5e1ed79a4c41\") " pod="kube-system/kube-proxy-zxzgn" Mar 11 02:24:23.473850 kubelet[2645]: I0311 02:24:23.473690 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6765ecd3-5cff-42ba-aa6c-3561bef05b9a-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-nt9wk\" (UID: \"6765ecd3-5cff-42ba-aa6c-3561bef05b9a\") " pod="tigera-operator/tigera-operator-6bf85f8dd-nt9wk" Mar 11 02:24:23.473850 kubelet[2645]: I0311 02:24:23.473771 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqjb9\" (UniqueName: \"kubernetes.io/projected/6765ecd3-5cff-42ba-aa6c-3561bef05b9a-kube-api-access-pqjb9\") pod \"tigera-operator-6bf85f8dd-nt9wk\" (UID: \"6765ecd3-5cff-42ba-aa6c-3561bef05b9a\") " pod="tigera-operator/tigera-operator-6bf85f8dd-nt9wk" Mar 11 02:24:23.536437 kubelet[2645]: E0311 02:24:23.536206 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:24:23.537854 containerd[1560]: time="2026-03-11T02:24:23.537139421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zxzgn,Uid:c413c382-a8fd-4d75-97d8-5e1ed79a4c41,Namespace:kube-system,Attempt:0,}" Mar 11 02:24:23.568055 containerd[1560]: time="2026-03-11T02:24:23.566937689Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 02:24:23.568055 containerd[1560]: time="2026-03-11T02:24:23.567987105Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 02:24:23.568055 containerd[1560]: time="2026-03-11T02:24:23.568006389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:24:23.568268 containerd[1560]: time="2026-03-11T02:24:23.568107757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:24:23.616801 containerd[1560]: time="2026-03-11T02:24:23.616756033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zxzgn,Uid:c413c382-a8fd-4d75-97d8-5e1ed79a4c41,Namespace:kube-system,Attempt:0,} returns sandbox id \"b70009e9b54fb01b964192b89bb8b2daffed84a401d271cd88c2013d6bf261fb\"" Mar 11 02:24:23.617904 kubelet[2645]: E0311 02:24:23.617839 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:24:23.625275 containerd[1560]: time="2026-03-11T02:24:23.625227525Z" level=info msg="CreateContainer within sandbox \"b70009e9b54fb01b964192b89bb8b2daffed84a401d271cd88c2013d6bf261fb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 11 02:24:23.641922 containerd[1560]: time="2026-03-11T02:24:23.641742851Z" level=info msg="CreateContainer within sandbox \"b70009e9b54fb01b964192b89bb8b2daffed84a401d271cd88c2013d6bf261fb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e16609d6e72d4631c6b54d107b557e36efa94579e1c853b0ec61511091a67466\"" Mar 11 02:24:23.642479 containerd[1560]: time="2026-03-11T02:24:23.642457449Z" level=info msg="StartContainer for \"e16609d6e72d4631c6b54d107b557e36efa94579e1c853b0ec61511091a67466\"" Mar 11 02:24:23.720049 containerd[1560]: time="2026-03-11T02:24:23.719906329Z" level=info msg="StartContainer for \"e16609d6e72d4631c6b54d107b557e36efa94579e1c853b0ec61511091a67466\" returns successfully" Mar 11 02:24:23.738029 containerd[1560]: time="2026-03-11T02:24:23.737745917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-nt9wk,Uid:6765ecd3-5cff-42ba-aa6c-3561bef05b9a,Namespace:tigera-operator,Attempt:0,}" Mar 11 02:24:23.769490 containerd[1560]: time="2026-03-11T02:24:23.769377695Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 02:24:23.769490 containerd[1560]: time="2026-03-11T02:24:23.769439239Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 02:24:23.769888 containerd[1560]: time="2026-03-11T02:24:23.769493489Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:24:23.769888 containerd[1560]: time="2026-03-11T02:24:23.769735294Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:24:23.806905 kubelet[2645]: E0311 02:24:23.805818 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:24:23.856431 containerd[1560]: time="2026-03-11T02:24:23.854109031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-nt9wk,Uid:6765ecd3-5cff-42ba-aa6c-3561bef05b9a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ad6beaa573d32d1ebf28cb532d17d443807d60aa3a9456a4434298e2594d49e5\"" Mar 11 02:24:23.857443 containerd[1560]: time="2026-03-11T02:24:23.857383827Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 11 02:24:24.833502 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1107736324.mount: Deactivated successfully. Mar 11 02:24:26.618543 containerd[1560]: time="2026-03-11T02:24:26.618431384Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:26.619418 containerd[1560]: time="2026-03-11T02:24:26.619222020Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Mar 11 02:24:26.622349 containerd[1560]: time="2026-03-11T02:24:26.621759254Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:26.625855 containerd[1560]: time="2026-03-11T02:24:26.625767500Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:26.627343 containerd[1560]: time="2026-03-11T02:24:26.627265583Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 2.769819431s" Mar 11 02:24:26.627420 containerd[1560]: time="2026-03-11T02:24:26.627389382Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Mar 11 02:24:26.633731 containerd[1560]: time="2026-03-11T02:24:26.633582559Z" level=info msg="CreateContainer within sandbox \"ad6beaa573d32d1ebf28cb532d17d443807d60aa3a9456a4434298e2594d49e5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 11 02:24:26.651594 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3090525157.mount: Deactivated successfully. 
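
The operator image pull above reports its own size and wall-clock time ("size \"40842151\" in 2.769819431s"), which works out to roughly 14.7 MB/s from quay.io. A tiny sketch doing that arithmetic with the values copied from the containerd record:

package main

import "fmt"

func main() {
	// Values copied from the "Pulled image quay.io/tigera/operator:v1.40.7" record above.
	const sizeBytes = 40842151.0
	const pullSeconds = 2.769819431

	bytesPerSec := sizeBytes / pullSeconds
	fmt.Printf("pull throughput: %.1f MB/s (%.1f MiB/s)\n",
		bytesPerSec/1e6, bytesPerSec/(1<<20))
}
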
Mar 11 02:24:26.652739 containerd[1560]: time="2026-03-11T02:24:26.652609383Z" level=info msg="CreateContainer within sandbox \"ad6beaa573d32d1ebf28cb532d17d443807d60aa3a9456a4434298e2594d49e5\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8cdeb6085a5068f33f19039ab0b36295bb2f4a64e8b0373f3cf729e9cce4b6bd\"" Mar 11 02:24:26.653822 containerd[1560]: time="2026-03-11T02:24:26.653768545Z" level=info msg="StartContainer for \"8cdeb6085a5068f33f19039ab0b36295bb2f4a64e8b0373f3cf729e9cce4b6bd\"" Mar 11 02:24:26.738035 containerd[1560]: time="2026-03-11T02:24:26.737982332Z" level=info msg="StartContainer for \"8cdeb6085a5068f33f19039ab0b36295bb2f4a64e8b0373f3cf729e9cce4b6bd\" returns successfully" Mar 11 02:24:26.831085 kubelet[2645]: I0311 02:24:26.830884 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zxzgn" podStartSLOduration=3.830855335 podStartE2EDuration="3.830855335s" podCreationTimestamp="2026-03-11 02:24:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 02:24:23.826368593 +0000 UTC m=+6.242954056" watchObservedRunningTime="2026-03-11 02:24:26.830855335 +0000 UTC m=+9.247440818" Mar 11 02:24:27.408969 kubelet[2645]: E0311 02:24:27.408839 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:24:27.426588 kubelet[2645]: I0311 02:24:27.426303 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-nt9wk" podStartSLOduration=1.654209746 podStartE2EDuration="4.426283906s" podCreationTimestamp="2026-03-11 02:24:23 +0000 UTC" firstStartedPulling="2026-03-11 02:24:23.856592999 +0000 UTC m=+6.273178462" lastFinishedPulling="2026-03-11 02:24:26.628667168 +0000 UTC m=+9.045252622" observedRunningTime="2026-03-11 02:24:26.83137892 +0000 UTC m=+9.247964393" watchObservedRunningTime="2026-03-11 02:24:27.426283906 +0000 UTC m=+9.842869360" Mar 11 02:24:27.821052 kubelet[2645]: E0311 02:24:27.820989 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:24:31.794938 kubelet[2645]: E0311 02:24:31.794812 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:24:31.861303 kubelet[2645]: E0311 02:24:31.861233 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:24:32.238713 kubelet[2645]: I0311 02:24:32.238590 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dbb11cd9-7003-4fa0-92e7-aed852f6d737-tigera-ca-bundle\") pod \"calico-typha-5d8dcc4bfc-mrqkj\" (UID: \"dbb11cd9-7003-4fa0-92e7-aed852f6d737\") " pod="calico-system/calico-typha-5d8dcc4bfc-mrqkj" Mar 11 02:24:32.239221 kubelet[2645]: I0311 02:24:32.238963 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/dbb11cd9-7003-4fa0-92e7-aed852f6d737-typha-certs\") pod 
\"calico-typha-5d8dcc4bfc-mrqkj\" (UID: \"dbb11cd9-7003-4fa0-92e7-aed852f6d737\") " pod="calico-system/calico-typha-5d8dcc4bfc-mrqkj" Mar 11 02:24:32.239221 kubelet[2645]: I0311 02:24:32.239176 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwrcp\" (UniqueName: \"kubernetes.io/projected/dbb11cd9-7003-4fa0-92e7-aed852f6d737-kube-api-access-dwrcp\") pod \"calico-typha-5d8dcc4bfc-mrqkj\" (UID: \"dbb11cd9-7003-4fa0-92e7-aed852f6d737\") " pod="calico-system/calico-typha-5d8dcc4bfc-mrqkj" Mar 11 02:24:32.341559 kubelet[2645]: I0311 02:24:32.339732 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/36520cef-30c2-4403-b367-6e5ba591923f-var-run-calico\") pod \"calico-node-bb67r\" (UID: \"36520cef-30c2-4403-b367-6e5ba591923f\") " pod="calico-system/calico-node-bb67r" Mar 11 02:24:32.341559 kubelet[2645]: I0311 02:24:32.339777 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/36520cef-30c2-4403-b367-6e5ba591923f-node-certs\") pod \"calico-node-bb67r\" (UID: \"36520cef-30c2-4403-b367-6e5ba591923f\") " pod="calico-system/calico-node-bb67r" Mar 11 02:24:32.341559 kubelet[2645]: I0311 02:24:32.339792 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/36520cef-30c2-4403-b367-6e5ba591923f-nodeproc\") pod \"calico-node-bb67r\" (UID: \"36520cef-30c2-4403-b367-6e5ba591923f\") " pod="calico-system/calico-node-bb67r" Mar 11 02:24:32.341559 kubelet[2645]: I0311 02:24:32.339807 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/36520cef-30c2-4403-b367-6e5ba591923f-cni-log-dir\") pod \"calico-node-bb67r\" (UID: \"36520cef-30c2-4403-b367-6e5ba591923f\") " pod="calico-system/calico-node-bb67r" Mar 11 02:24:32.341559 kubelet[2645]: I0311 02:24:32.339864 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/36520cef-30c2-4403-b367-6e5ba591923f-policysync\") pod \"calico-node-bb67r\" (UID: \"36520cef-30c2-4403-b367-6e5ba591923f\") " pod="calico-system/calico-node-bb67r" Mar 11 02:24:32.341942 kubelet[2645]: I0311 02:24:32.339897 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/36520cef-30c2-4403-b367-6e5ba591923f-cni-bin-dir\") pod \"calico-node-bb67r\" (UID: \"36520cef-30c2-4403-b367-6e5ba591923f\") " pod="calico-system/calico-node-bb67r" Mar 11 02:24:32.341942 kubelet[2645]: I0311 02:24:32.339921 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/36520cef-30c2-4403-b367-6e5ba591923f-cni-net-dir\") pod \"calico-node-bb67r\" (UID: \"36520cef-30c2-4403-b367-6e5ba591923f\") " pod="calico-system/calico-node-bb67r" Mar 11 02:24:32.341942 kubelet[2645]: I0311 02:24:32.339945 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/36520cef-30c2-4403-b367-6e5ba591923f-flexvol-driver-host\") pod \"calico-node-bb67r\" (UID: 
\"36520cef-30c2-4403-b367-6e5ba591923f\") " pod="calico-system/calico-node-bb67r" Mar 11 02:24:32.341942 kubelet[2645]: I0311 02:24:32.339968 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/36520cef-30c2-4403-b367-6e5ba591923f-xtables-lock\") pod \"calico-node-bb67r\" (UID: \"36520cef-30c2-4403-b367-6e5ba591923f\") " pod="calico-system/calico-node-bb67r" Mar 11 02:24:32.341942 kubelet[2645]: I0311 02:24:32.339991 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z9bt\" (UniqueName: \"kubernetes.io/projected/36520cef-30c2-4403-b367-6e5ba591923f-kube-api-access-7z9bt\") pod \"calico-node-bb67r\" (UID: \"36520cef-30c2-4403-b367-6e5ba591923f\") " pod="calico-system/calico-node-bb67r" Mar 11 02:24:32.342206 kubelet[2645]: I0311 02:24:32.340023 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/36520cef-30c2-4403-b367-6e5ba591923f-bpffs\") pod \"calico-node-bb67r\" (UID: \"36520cef-30c2-4403-b367-6e5ba591923f\") " pod="calico-system/calico-node-bb67r" Mar 11 02:24:32.342206 kubelet[2645]: I0311 02:24:32.340206 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/36520cef-30c2-4403-b367-6e5ba591923f-lib-modules\") pod \"calico-node-bb67r\" (UID: \"36520cef-30c2-4403-b367-6e5ba591923f\") " pod="calico-system/calico-node-bb67r" Mar 11 02:24:32.342206 kubelet[2645]: I0311 02:24:32.340234 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/36520cef-30c2-4403-b367-6e5ba591923f-sys-fs\") pod \"calico-node-bb67r\" (UID: \"36520cef-30c2-4403-b367-6e5ba591923f\") " pod="calico-system/calico-node-bb67r" Mar 11 02:24:32.342206 kubelet[2645]: I0311 02:24:32.340267 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/36520cef-30c2-4403-b367-6e5ba591923f-var-lib-calico\") pod \"calico-node-bb67r\" (UID: \"36520cef-30c2-4403-b367-6e5ba591923f\") " pod="calico-system/calico-node-bb67r" Mar 11 02:24:32.342206 kubelet[2645]: I0311 02:24:32.340294 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/36520cef-30c2-4403-b367-6e5ba591923f-tigera-ca-bundle\") pod \"calico-node-bb67r\" (UID: \"36520cef-30c2-4403-b367-6e5ba591923f\") " pod="calico-system/calico-node-bb67r" Mar 11 02:24:32.388622 kubelet[2645]: E0311 02:24:32.388497 2645 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fpgpj" podUID="93716d33-580c-4cc6-a4e6-074492c5ede3" Mar 11 02:24:32.441738 kubelet[2645]: I0311 02:24:32.441161 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/93716d33-580c-4cc6-a4e6-074492c5ede3-registration-dir\") pod \"csi-node-driver-fpgpj\" (UID: \"93716d33-580c-4cc6-a4e6-074492c5ede3\") " pod="calico-system/csi-node-driver-fpgpj" Mar 11 
02:24:32.441738 kubelet[2645]: I0311 02:24:32.441215 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/93716d33-580c-4cc6-a4e6-074492c5ede3-socket-dir\") pod \"csi-node-driver-fpgpj\" (UID: \"93716d33-580c-4cc6-a4e6-074492c5ede3\") " pod="calico-system/csi-node-driver-fpgpj" Mar 11 02:24:32.441738 kubelet[2645]: I0311 02:24:32.441252 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/93716d33-580c-4cc6-a4e6-074492c5ede3-varrun\") pod \"csi-node-driver-fpgpj\" (UID: \"93716d33-580c-4cc6-a4e6-074492c5ede3\") " pod="calico-system/csi-node-driver-fpgpj" Mar 11 02:24:32.441738 kubelet[2645]: I0311 02:24:32.441413 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/93716d33-580c-4cc6-a4e6-074492c5ede3-kubelet-dir\") pod \"csi-node-driver-fpgpj\" (UID: \"93716d33-580c-4cc6-a4e6-074492c5ede3\") " pod="calico-system/csi-node-driver-fpgpj" Mar 11 02:24:32.441738 kubelet[2645]: I0311 02:24:32.441439 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5vcv\" (UniqueName: \"kubernetes.io/projected/93716d33-580c-4cc6-a4e6-074492c5ede3-kube-api-access-q5vcv\") pod \"csi-node-driver-fpgpj\" (UID: \"93716d33-580c-4cc6-a4e6-074492c5ede3\") " pod="calico-system/csi-node-driver-fpgpj" Mar 11 02:24:32.444426 kubelet[2645]: E0311 02:24:32.444367 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.444670 kubelet[2645]: W0311 02:24:32.444513 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.445237 kubelet[2645]: E0311 02:24:32.445208 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:24:32.445714 kubelet[2645]: E0311 02:24:32.445670 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.445714 kubelet[2645]: W0311 02:24:32.445684 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.445714 kubelet[2645]: E0311 02:24:32.445699 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 02:24:32.446406 kubelet[2645]: E0311 02:24:32.446249 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.446406 kubelet[2645]: W0311 02:24:32.446263 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.446406 kubelet[2645]: E0311 02:24:32.446277 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:24:32.447142 kubelet[2645]: E0311 02:24:32.446973 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.447142 kubelet[2645]: W0311 02:24:32.446990 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.447142 kubelet[2645]: E0311 02:24:32.447007 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:24:32.447660 kubelet[2645]: E0311 02:24:32.447612 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.447660 kubelet[2645]: W0311 02:24:32.447629 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.447660 kubelet[2645]: E0311 02:24:32.447644 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:24:32.448266 kubelet[2645]: E0311 02:24:32.448195 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.448266 kubelet[2645]: W0311 02:24:32.448208 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.448266 kubelet[2645]: E0311 02:24:32.448220 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:24:32.453548 kubelet[2645]: E0311 02:24:32.453492 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.453548 kubelet[2645]: W0311 02:24:32.453537 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.454535 kubelet[2645]: E0311 02:24:32.453553 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 02:24:32.454535 kubelet[2645]: E0311 02:24:32.453843 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.454535 kubelet[2645]: W0311 02:24:32.453856 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.454535 kubelet[2645]: E0311 02:24:32.453866 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:24:32.454535 kubelet[2645]: E0311 02:24:32.454504 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.454535 kubelet[2645]: W0311 02:24:32.454520 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.454535 kubelet[2645]: E0311 02:24:32.454533 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:24:32.456952 kubelet[2645]: E0311 02:24:32.456049 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.456952 kubelet[2645]: W0311 02:24:32.456063 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.456952 kubelet[2645]: E0311 02:24:32.456147 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:24:32.459550 kubelet[2645]: E0311 02:24:32.457702 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.459550 kubelet[2645]: W0311 02:24:32.457719 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.459550 kubelet[2645]: E0311 02:24:32.457730 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:24:32.459550 kubelet[2645]: E0311 02:24:32.458161 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.459550 kubelet[2645]: W0311 02:24:32.458169 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.459550 kubelet[2645]: E0311 02:24:32.458178 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 02:24:32.459550 kubelet[2645]: E0311 02:24:32.458503 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.459550 kubelet[2645]: W0311 02:24:32.458511 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.459550 kubelet[2645]: E0311 02:24:32.458519 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:24:32.459550 kubelet[2645]: E0311 02:24:32.458871 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.459905 kubelet[2645]: W0311 02:24:32.458879 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.459905 kubelet[2645]: E0311 02:24:32.458889 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:24:32.460154 kubelet[2645]: E0311 02:24:32.460058 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.460154 kubelet[2645]: W0311 02:24:32.460110 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.460154 kubelet[2645]: E0311 02:24:32.460121 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:24:32.462186 kubelet[2645]: E0311 02:24:32.462170 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.462471 kubelet[2645]: W0311 02:24:32.462254 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.462471 kubelet[2645]: E0311 02:24:32.462271 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:24:32.464490 kubelet[2645]: E0311 02:24:32.464450 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.464490 kubelet[2645]: W0311 02:24:32.464483 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.464581 kubelet[2645]: E0311 02:24:32.464496 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 02:24:32.466369 kubelet[2645]: E0311 02:24:32.464948 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.466369 kubelet[2645]: W0311 02:24:32.464963 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.466369 kubelet[2645]: E0311 02:24:32.464978 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:24:32.466597 kubelet[2645]: E0311 02:24:32.466549 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.466597 kubelet[2645]: W0311 02:24:32.466589 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.466676 kubelet[2645]: E0311 02:24:32.466602 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:24:32.467645 kubelet[2645]: E0311 02:24:32.467592 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.467645 kubelet[2645]: W0311 02:24:32.467642 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.467731 kubelet[2645]: E0311 02:24:32.467656 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:24:32.469510 kubelet[2645]: E0311 02:24:32.469465 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.469510 kubelet[2645]: W0311 02:24:32.469505 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.469625 kubelet[2645]: E0311 02:24:32.469520 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:24:32.472393 kubelet[2645]: E0311 02:24:32.471437 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.472393 kubelet[2645]: W0311 02:24:32.471451 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.472393 kubelet[2645]: E0311 02:24:32.471466 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 02:24:32.474594 kubelet[2645]: E0311 02:24:32.474447 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.474594 kubelet[2645]: W0311 02:24:32.474480 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.474594 kubelet[2645]: E0311 02:24:32.474495 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:24:32.474974 kubelet[2645]: E0311 02:24:32.474871 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.474974 kubelet[2645]: W0311 02:24:32.474905 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.474974 kubelet[2645]: E0311 02:24:32.474923 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:24:32.476484 kubelet[2645]: E0311 02:24:32.476450 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.476484 kubelet[2645]: W0311 02:24:32.476482 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.476587 kubelet[2645]: E0311 02:24:32.476498 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:24:32.477834 kubelet[2645]: E0311 02:24:32.477800 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.477899 kubelet[2645]: W0311 02:24:32.477834 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.477899 kubelet[2645]: E0311 02:24:32.477850 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:24:32.478284 kubelet[2645]: E0311 02:24:32.478227 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.478284 kubelet[2645]: W0311 02:24:32.478265 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.478284 kubelet[2645]: E0311 02:24:32.478277 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 02:24:32.478801 kubelet[2645]: E0311 02:24:32.478761 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.478801 kubelet[2645]: W0311 02:24:32.478775 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.478801 kubelet[2645]: E0311 02:24:32.478786 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:24:32.479547 kubelet[2645]: E0311 02:24:32.479471 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.479547 kubelet[2645]: W0311 02:24:32.479485 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.479547 kubelet[2645]: E0311 02:24:32.479496 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:24:32.480391 kubelet[2645]: E0311 02:24:32.480288 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.480391 kubelet[2645]: W0311 02:24:32.480373 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.480391 kubelet[2645]: E0311 02:24:32.480385 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:24:32.482547 kubelet[2645]: E0311 02:24:32.482512 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.482614 kubelet[2645]: W0311 02:24:32.482549 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.482614 kubelet[2645]: E0311 02:24:32.482562 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:24:32.483305 kubelet[2645]: E0311 02:24:32.483128 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.483305 kubelet[2645]: W0311 02:24:32.483144 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.483305 kubelet[2645]: E0311 02:24:32.483157 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 02:24:32.485902 kubelet[2645]: E0311 02:24:32.483860 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.485902 kubelet[2645]: W0311 02:24:32.483871 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.485902 kubelet[2645]: E0311 02:24:32.483880 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:24:32.485902 kubelet[2645]: E0311 02:24:32.484908 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.485902 kubelet[2645]: W0311 02:24:32.484915 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.485902 kubelet[2645]: E0311 02:24:32.484932 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:24:32.485902 kubelet[2645]: E0311 02:24:32.485287 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.485902 kubelet[2645]: W0311 02:24:32.485295 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.485902 kubelet[2645]: E0311 02:24:32.485304 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:24:32.485902 kubelet[2645]: E0311 02:24:32.485711 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.486264 kubelet[2645]: W0311 02:24:32.485723 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.486264 kubelet[2645]: E0311 02:24:32.485737 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:24:32.486264 kubelet[2645]: E0311 02:24:32.486065 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.486264 kubelet[2645]: W0311 02:24:32.486073 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.486264 kubelet[2645]: E0311 02:24:32.486120 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 02:24:32.487271 kubelet[2645]: E0311 02:24:32.486635 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.487271 kubelet[2645]: W0311 02:24:32.486651 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.487271 kubelet[2645]: E0311 02:24:32.486662 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:24:32.487504 kubelet[2645]: E0311 02:24:32.487275 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.488483 kubelet[2645]: W0311 02:24:32.487563 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.488483 kubelet[2645]: E0311 02:24:32.487579 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:24:32.488601 kubelet[2645]: E0311 02:24:32.488538 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.488601 kubelet[2645]: W0311 02:24:32.488551 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.488601 kubelet[2645]: E0311 02:24:32.488564 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:24:32.491049 kubelet[2645]: E0311 02:24:32.489576 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.491049 kubelet[2645]: W0311 02:24:32.489613 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.491049 kubelet[2645]: E0311 02:24:32.489627 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:24:32.495984 kubelet[2645]: E0311 02:24:32.495786 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.495984 kubelet[2645]: W0311 02:24:32.495802 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.495984 kubelet[2645]: E0311 02:24:32.495814 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 02:24:32.497172 kubelet[2645]: E0311 02:24:32.496849 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.497172 kubelet[2645]: W0311 02:24:32.496863 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.497172 kubelet[2645]: E0311 02:24:32.496875 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:24:32.497485 kubelet[2645]: E0311 02:24:32.497405 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.497552 kubelet[2645]: W0311 02:24:32.497539 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.497608 kubelet[2645]: E0311 02:24:32.497596 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:24:32.499644 kubelet[2645]: E0311 02:24:32.499629 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.499914 kubelet[2645]: W0311 02:24:32.499767 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.499914 kubelet[2645]: E0311 02:24:32.499795 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:24:32.502533 kubelet[2645]: E0311 02:24:32.502519 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.502788 kubelet[2645]: W0311 02:24:32.502672 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.502788 kubelet[2645]: E0311 02:24:32.502690 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:24:32.503720 kubelet[2645]: E0311 02:24:32.503648 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.503720 kubelet[2645]: W0311 02:24:32.503661 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.503720 kubelet[2645]: E0311 02:24:32.503673 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 02:24:32.504884 kubelet[2645]: E0311 02:24:32.504864 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.505301 kubelet[2645]: W0311 02:24:32.505050 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.505301 kubelet[2645]: E0311 02:24:32.505066 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:24:32.506984 kubelet[2645]: E0311 02:24:32.506916 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.506984 kubelet[2645]: W0311 02:24:32.506926 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.506984 kubelet[2645]: E0311 02:24:32.506936 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:24:32.507908 kubelet[2645]: E0311 02:24:32.507662 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.507908 kubelet[2645]: W0311 02:24:32.507676 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.507908 kubelet[2645]: E0311 02:24:32.507691 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:24:32.508154 kubelet[2645]: E0311 02:24:32.508002 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.508154 kubelet[2645]: W0311 02:24:32.508011 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.508154 kubelet[2645]: E0311 02:24:32.508022 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:24:32.508797 kubelet[2645]: E0311 02:24:32.508753 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.508797 kubelet[2645]: W0311 02:24:32.508765 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.508797 kubelet[2645]: E0311 02:24:32.508774 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 02:24:32.509159 kubelet[2645]: E0311 02:24:32.509058 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.509159 kubelet[2645]: W0311 02:24:32.509106 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.509159 kubelet[2645]: E0311 02:24:32.509115 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:24:32.509654 kubelet[2645]: E0311 02:24:32.509579 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.509654 kubelet[2645]: W0311 02:24:32.509603 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.509654 kubelet[2645]: E0311 02:24:32.509614 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:24:32.510909 kubelet[2645]: E0311 02:24:32.510736 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.510909 kubelet[2645]: W0311 02:24:32.510748 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.510909 kubelet[2645]: E0311 02:24:32.510757 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:24:32.512066 kubelet[2645]: E0311 02:24:32.512036 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.512066 kubelet[2645]: W0311 02:24:32.512061 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.512231 kubelet[2645]: E0311 02:24:32.512072 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:24:32.514385 kubelet[2645]: E0311 02:24:32.513393 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.514385 kubelet[2645]: W0311 02:24:32.513405 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.514385 kubelet[2645]: E0311 02:24:32.513416 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 02:24:32.525969 kubelet[2645]: E0311 02:24:32.525910 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.525969 kubelet[2645]: W0311 02:24:32.525947 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.525969 kubelet[2645]: E0311 02:24:32.525966 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:24:32.528874 kubelet[2645]: E0311 02:24:32.528767 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:24:32.533735 containerd[1560]: time="2026-03-11T02:24:32.531400836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5d8dcc4bfc-mrqkj,Uid:dbb11cd9-7003-4fa0-92e7-aed852f6d737,Namespace:calico-system,Attempt:0,}" Mar 11 02:24:32.542306 kubelet[2645]: E0311 02:24:32.542155 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.542306 kubelet[2645]: W0311 02:24:32.542177 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.542306 kubelet[2645]: E0311 02:24:32.542196 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:24:32.544557 kubelet[2645]: E0311 02:24:32.544387 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.544557 kubelet[2645]: W0311 02:24:32.544401 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.544557 kubelet[2645]: E0311 02:24:32.544417 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:24:32.544897 kubelet[2645]: E0311 02:24:32.544817 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.544897 kubelet[2645]: W0311 02:24:32.544829 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.544897 kubelet[2645]: E0311 02:24:32.544841 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 02:24:32.545896 kubelet[2645]: E0311 02:24:32.545244 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.545896 kubelet[2645]: W0311 02:24:32.545254 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.545896 kubelet[2645]: E0311 02:24:32.545265 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:24:32.546509 kubelet[2645]: E0311 02:24:32.546475 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.546509 kubelet[2645]: W0311 02:24:32.546487 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.546509 kubelet[2645]: E0311 02:24:32.546499 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:24:32.549821 kubelet[2645]: E0311 02:24:32.549701 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.549821 kubelet[2645]: W0311 02:24:32.549755 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.549821 kubelet[2645]: E0311 02:24:32.549768 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:24:32.551462 kubelet[2645]: E0311 02:24:32.550014 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.551462 kubelet[2645]: W0311 02:24:32.550026 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.551462 kubelet[2645]: E0311 02:24:32.550037 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:24:32.551462 kubelet[2645]: E0311 02:24:32.551059 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.551462 kubelet[2645]: W0311 02:24:32.551069 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.551462 kubelet[2645]: E0311 02:24:32.551117 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 02:24:32.552015 kubelet[2645]: E0311 02:24:32.551806 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.552015 kubelet[2645]: W0311 02:24:32.551815 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.552015 kubelet[2645]: E0311 02:24:32.551825 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:24:32.554270 kubelet[2645]: E0311 02:24:32.554214 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.554270 kubelet[2645]: W0311 02:24:32.554246 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.554270 kubelet[2645]: E0311 02:24:32.554256 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:24:32.555495 kubelet[2645]: E0311 02:24:32.555305 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.555495 kubelet[2645]: W0311 02:24:32.555377 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.555495 kubelet[2645]: E0311 02:24:32.555387 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:24:32.555841 kubelet[2645]: E0311 02:24:32.555649 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.555841 kubelet[2645]: W0311 02:24:32.555661 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.555841 kubelet[2645]: E0311 02:24:32.555672 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:24:32.558372 kubelet[2645]: E0311 02:24:32.557852 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.558372 kubelet[2645]: W0311 02:24:32.557864 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.558372 kubelet[2645]: E0311 02:24:32.557874 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 02:24:32.559534 kubelet[2645]: E0311 02:24:32.558802 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.559534 kubelet[2645]: W0311 02:24:32.558812 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.559534 kubelet[2645]: E0311 02:24:32.558821 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:24:32.565483 kubelet[2645]: E0311 02:24:32.565445 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.565483 kubelet[2645]: W0311 02:24:32.565480 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.565566 kubelet[2645]: E0311 02:24:32.565493 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:24:32.567048 kubelet[2645]: E0311 02:24:32.566994 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.567048 kubelet[2645]: W0311 02:24:32.567027 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.567048 kubelet[2645]: E0311 02:24:32.567039 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:24:32.567754 kubelet[2645]: E0311 02:24:32.567366 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.567754 kubelet[2645]: W0311 02:24:32.567378 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.567754 kubelet[2645]: E0311 02:24:32.567388 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:24:32.578145 kubelet[2645]: E0311 02:24:32.572153 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.578145 kubelet[2645]: W0311 02:24:32.572167 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.578145 kubelet[2645]: E0311 02:24:32.572179 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 02:24:32.578145 kubelet[2645]: E0311 02:24:32.572703 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.578145 kubelet[2645]: W0311 02:24:32.572713 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.578145 kubelet[2645]: E0311 02:24:32.572723 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:24:32.578145 kubelet[2645]: E0311 02:24:32.573551 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.578145 kubelet[2645]: W0311 02:24:32.573560 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.578145 kubelet[2645]: E0311 02:24:32.573569 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:24:32.578145 kubelet[2645]: E0311 02:24:32.576578 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.578478 kubelet[2645]: W0311 02:24:32.576588 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.578478 kubelet[2645]: E0311 02:24:32.576598 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:24:32.578478 kubelet[2645]: E0311 02:24:32.577594 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.578478 kubelet[2645]: W0311 02:24:32.577604 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.578478 kubelet[2645]: E0311 02:24:32.577682 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:24:32.578905 kubelet[2645]: E0311 02:24:32.578891 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.579970 kubelet[2645]: W0311 02:24:32.579297 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.580226 kubelet[2645]: E0311 02:24:32.580190 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 02:24:32.583538 kubelet[2645]: E0311 02:24:32.583395 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.583538 kubelet[2645]: W0311 02:24:32.583408 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.583538 kubelet[2645]: E0311 02:24:32.583418 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:24:32.585425 kubelet[2645]: E0311 02:24:32.585251 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.585686 kubelet[2645]: W0311 02:24:32.585661 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.585686 kubelet[2645]: E0311 02:24:32.585677 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:24:32.586842 containerd[1560]: time="2026-03-11T02:24:32.586739456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bb67r,Uid:36520cef-30c2-4403-b367-6e5ba591923f,Namespace:calico-system,Attempt:0,}" Mar 11 02:24:32.609575 containerd[1560]: time="2026-03-11T02:24:32.608302034Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 02:24:32.609575 containerd[1560]: time="2026-03-11T02:24:32.608427507Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 02:24:32.609575 containerd[1560]: time="2026-03-11T02:24:32.608439348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:24:32.613650 containerd[1560]: time="2026-03-11T02:24:32.613215664Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:24:32.615768 kubelet[2645]: E0311 02:24:32.615543 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.615768 kubelet[2645]: W0311 02:24:32.615560 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.615768 kubelet[2645]: E0311 02:24:32.615578 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:24:32.643192 containerd[1560]: time="2026-03-11T02:24:32.642867385Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 02:24:32.644054 containerd[1560]: time="2026-03-11T02:24:32.643883980Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 02:24:32.644168 containerd[1560]: time="2026-03-11T02:24:32.643933221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:24:32.644168 containerd[1560]: time="2026-03-11T02:24:32.644023439Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:24:32.684633 sudo[1751]: pam_unix(sudo:session): session closed for user root Mar 11 02:24:32.690514 sshd[1744]: pam_unix(sshd:session): session closed for user core Mar 11 02:24:32.696586 systemd[1]: sshd@6-10.0.0.81:22-10.0.0.1:58846.service: Deactivated successfully. Mar 11 02:24:32.701724 systemd[1]: session-7.scope: Deactivated successfully. Mar 11 02:24:32.702286 systemd-logind[1545]: Session 7 logged out. Waiting for processes to exit. Mar 11 02:24:32.706916 systemd-logind[1545]: Removed session 7. Mar 11 02:24:32.793664 containerd[1560]: time="2026-03-11T02:24:32.793623309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5d8dcc4bfc-mrqkj,Uid:dbb11cd9-7003-4fa0-92e7-aed852f6d737,Namespace:calico-system,Attempt:0,} returns sandbox id \"d84856b9641337c27e43c71a5ee754083d56380d8e234aedda053f8f1e55b3ff\"" Mar 11 02:24:32.802896 containerd[1560]: time="2026-03-11T02:24:32.802809515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bb67r,Uid:36520cef-30c2-4403-b367-6e5ba591923f,Namespace:calico-system,Attempt:0,} returns sandbox id \"3883bf9dfaf3e062979c924d3db6cacd05984034329ce0a5e4beb085cad08bbe\"" Mar 11 02:24:32.803271 kubelet[2645]: E0311 02:24:32.803230 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:24:32.812498 containerd[1560]: time="2026-03-11T02:24:32.812255527Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 11 02:24:32.836892 kubelet[2645]: E0311 02:24:32.836865 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:24:32.937769 kubelet[2645]: E0311 02:24:32.937738 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.937769 kubelet[2645]: W0311 02:24:32.937767 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.938277 kubelet[2645]: E0311 02:24:32.937799 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 02:24:32.938955 kubelet[2645]: E0311 02:24:32.938877 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.938955 kubelet[2645]: W0311 02:24:32.938918 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.939183 kubelet[2645]: E0311 02:24:32.938960 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:24:32.939541 kubelet[2645]: E0311 02:24:32.939516 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.939617 kubelet[2645]: W0311 02:24:32.939545 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.939617 kubelet[2645]: E0311 02:24:32.939561 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:24:32.940145 kubelet[2645]: E0311 02:24:32.940127 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.940183 kubelet[2645]: W0311 02:24:32.940146 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.940183 kubelet[2645]: E0311 02:24:32.940162 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:24:32.940699 kubelet[2645]: E0311 02:24:32.940616 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:24:32.940699 kubelet[2645]: W0311 02:24:32.940646 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:24:32.940699 kubelet[2645]: E0311 02:24:32.940659 2645 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 02:24:33.411896 containerd[1560]: time="2026-03-11T02:24:33.411843170Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:33.412795 containerd[1560]: time="2026-03-11T02:24:33.412739163Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=6186433" Mar 11 02:24:33.413899 containerd[1560]: time="2026-03-11T02:24:33.413843335Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:33.416180 containerd[1560]: time="2026-03-11T02:24:33.416120523Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:33.416852 containerd[1560]: time="2026-03-11T02:24:33.416802976Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 604.518154ms" Mar 11 02:24:33.416852 containerd[1560]: time="2026-03-11T02:24:33.416843782Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Mar 11 02:24:33.417926 containerd[1560]: time="2026-03-11T02:24:33.417882744Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 11 02:24:33.422855 containerd[1560]: time="2026-03-11T02:24:33.422809675Z" level=info msg="CreateContainer within sandbox \"3883bf9dfaf3e062979c924d3db6cacd05984034329ce0a5e4beb085cad08bbe\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 11 02:24:33.439587 containerd[1560]: time="2026-03-11T02:24:33.439555661Z" level=info msg="CreateContainer within sandbox \"3883bf9dfaf3e062979c924d3db6cacd05984034329ce0a5e4beb085cad08bbe\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f0dbb5cc8bd7d5b625f3ac9dc6ce1e47b0eb02e02c7af05e40f1833e4af2d4c1\"" Mar 11 02:24:33.440410 containerd[1560]: time="2026-03-11T02:24:33.440380591Z" level=info msg="StartContainer for \"f0dbb5cc8bd7d5b625f3ac9dc6ce1e47b0eb02e02c7af05e40f1833e4af2d4c1\"" Mar 11 02:24:33.522395 containerd[1560]: time="2026-03-11T02:24:33.520865100Z" level=info msg="StartContainer for \"f0dbb5cc8bd7d5b625f3ac9dc6ce1e47b0eb02e02c7af05e40f1833e4af2d4c1\" returns successfully" Mar 11 02:24:33.611452 containerd[1560]: time="2026-03-11T02:24:33.611263606Z" level=info msg="shim disconnected" id=f0dbb5cc8bd7d5b625f3ac9dc6ce1e47b0eb02e02c7af05e40f1833e4af2d4c1 namespace=k8s.io Mar 11 02:24:33.611452 containerd[1560]: time="2026-03-11T02:24:33.611399998Z" level=warning msg="cleaning up after shim disconnected" id=f0dbb5cc8bd7d5b625f3ac9dc6ce1e47b0eb02e02c7af05e40f1833e4af2d4c1 namespace=k8s.io Mar 11 02:24:33.611452 containerd[1560]: time="2026-03-11T02:24:33.611416499Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 11 02:24:33.770691 kubelet[2645]: E0311 02:24:33.769162 2645 pod_workers.go:1301] "Error 
syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fpgpj" podUID="93716d33-580c-4cc6-a4e6-074492c5ede3" Mar 11 02:24:34.354453 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f0dbb5cc8bd7d5b625f3ac9dc6ce1e47b0eb02e02c7af05e40f1833e4af2d4c1-rootfs.mount: Deactivated successfully. Mar 11 02:24:34.713050 containerd[1560]: time="2026-03-11T02:24:34.712888870Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:34.714225 containerd[1560]: time="2026-03-11T02:24:34.714148596Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=34551413" Mar 11 02:24:34.715397 containerd[1560]: time="2026-03-11T02:24:34.715291003Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:34.718250 containerd[1560]: time="2026-03-11T02:24:34.718186059Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:34.718987 containerd[1560]: time="2026-03-11T02:24:34.718931922Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 1.300998274s" Mar 11 02:24:34.718987 containerd[1560]: time="2026-03-11T02:24:34.718981985Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Mar 11 02:24:34.720580 containerd[1560]: time="2026-03-11T02:24:34.720528857Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 11 02:24:34.736685 containerd[1560]: time="2026-03-11T02:24:34.736622746Z" level=info msg="CreateContainer within sandbox \"d84856b9641337c27e43c71a5ee754083d56380d8e234aedda053f8f1e55b3ff\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 11 02:24:34.757290 containerd[1560]: time="2026-03-11T02:24:34.757235494Z" level=info msg="CreateContainer within sandbox \"d84856b9641337c27e43c71a5ee754083d56380d8e234aedda053f8f1e55b3ff\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"7725b94d31de2c40e3c80b0886c720eb53890ab90c35444814d91be78730e554\"" Mar 11 02:24:34.758047 containerd[1560]: time="2026-03-11T02:24:34.757971898Z" level=info msg="StartContainer for \"7725b94d31de2c40e3c80b0886c720eb53890ab90c35444814d91be78730e554\"" Mar 11 02:24:34.849290 containerd[1560]: time="2026-03-11T02:24:34.847718471Z" level=info msg="StartContainer for \"7725b94d31de2c40e3c80b0886c720eb53890ab90c35444814d91be78730e554\" returns successfully" Mar 11 02:24:35.649492 update_engine[1550]: I20260311 02:24:35.649281 1550 update_attempter.cc:509] Updating boot flags... 
Mar 11 02:24:35.780772 kubelet[2645]: E0311 02:24:35.780677 2645 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fpgpj" podUID="93716d33-580c-4cc6-a4e6-074492c5ede3" Mar 11 02:24:35.805032 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (3372) Mar 11 02:24:35.850458 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (3372) Mar 11 02:24:35.859753 kubelet[2645]: E0311 02:24:35.858618 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:24:36.857627 kubelet[2645]: I0311 02:24:36.857592 2645 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 11 02:24:36.858473 kubelet[2645]: E0311 02:24:36.858285 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:24:37.770215 kubelet[2645]: E0311 02:24:37.770173 2645 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fpgpj" podUID="93716d33-580c-4cc6-a4e6-074492c5ede3" Mar 11 02:24:38.579849 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2801229762.mount: Deactivated successfully. Mar 11 02:24:38.650715 containerd[1560]: time="2026-03-11T02:24:38.650595964Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:38.651829 containerd[1560]: time="2026-03-11T02:24:38.651742585Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Mar 11 02:24:38.652757 containerd[1560]: time="2026-03-11T02:24:38.652648226Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:38.655513 containerd[1560]: time="2026-03-11T02:24:38.655419124Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:38.656057 containerd[1560]: time="2026-03-11T02:24:38.655952671Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 3.935368423s" Mar 11 02:24:38.656057 containerd[1560]: time="2026-03-11T02:24:38.656000250Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Mar 11 02:24:38.661563 containerd[1560]: time="2026-03-11T02:24:38.661504702Z" level=info msg="CreateContainer within sandbox 
\"3883bf9dfaf3e062979c924d3db6cacd05984034329ce0a5e4beb085cad08bbe\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 11 02:24:38.701127 containerd[1560]: time="2026-03-11T02:24:38.701023520Z" level=info msg="CreateContainer within sandbox \"3883bf9dfaf3e062979c924d3db6cacd05984034329ce0a5e4beb085cad08bbe\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"d91679f56c73d51900b45b0d84878a3eb99f4e800d79f50a1c664fb141032847\"" Mar 11 02:24:38.702737 containerd[1560]: time="2026-03-11T02:24:38.701804931Z" level=info msg="StartContainer for \"d91679f56c73d51900b45b0d84878a3eb99f4e800d79f50a1c664fb141032847\"" Mar 11 02:24:38.955747 containerd[1560]: time="2026-03-11T02:24:38.955385518Z" level=info msg="StartContainer for \"d91679f56c73d51900b45b0d84878a3eb99f4e800d79f50a1c664fb141032847\" returns successfully" Mar 11 02:24:38.980847 kubelet[2645]: I0311 02:24:38.980516 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5d8dcc4bfc-mrqkj" podStartSLOduration=5.073636155 podStartE2EDuration="6.980500831s" podCreationTimestamp="2026-03-11 02:24:32 +0000 UTC" firstStartedPulling="2026-03-11 02:24:32.813434132 +0000 UTC m=+15.230019585" lastFinishedPulling="2026-03-11 02:24:34.720298808 +0000 UTC m=+17.136884261" observedRunningTime="2026-03-11 02:24:35.892719016 +0000 UTC m=+18.309304470" watchObservedRunningTime="2026-03-11 02:24:38.980500831 +0000 UTC m=+21.397086284" Mar 11 02:24:38.984539 containerd[1560]: time="2026-03-11T02:24:38.984459911Z" level=info msg="shim disconnected" id=d91679f56c73d51900b45b0d84878a3eb99f4e800d79f50a1c664fb141032847 namespace=k8s.io Mar 11 02:24:38.984659 containerd[1560]: time="2026-03-11T02:24:38.984536905Z" level=warning msg="cleaning up after shim disconnected" id=d91679f56c73d51900b45b0d84878a3eb99f4e800d79f50a1c664fb141032847 namespace=k8s.io Mar 11 02:24:38.984659 containerd[1560]: time="2026-03-11T02:24:38.984550471Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 11 02:24:39.580598 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d91679f56c73d51900b45b0d84878a3eb99f4e800d79f50a1c664fb141032847-rootfs.mount: Deactivated successfully. 
Mar 11 02:24:39.769679 kubelet[2645]: E0311 02:24:39.769620 2645 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fpgpj" podUID="93716d33-580c-4cc6-a4e6-074492c5ede3" Mar 11 02:24:39.966147 containerd[1560]: time="2026-03-11T02:24:39.965925925Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 11 02:24:41.769957 kubelet[2645]: E0311 02:24:41.769814 2645 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fpgpj" podUID="93716d33-580c-4cc6-a4e6-074492c5ede3" Mar 11 02:24:41.777119 containerd[1560]: time="2026-03-11T02:24:41.777011593Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:41.777989 containerd[1560]: time="2026-03-11T02:24:41.777918428Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Mar 11 02:24:41.779256 containerd[1560]: time="2026-03-11T02:24:41.779169836Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:41.782002 containerd[1560]: time="2026-03-11T02:24:41.781929076Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:41.783019 containerd[1560]: time="2026-03-11T02:24:41.782910410Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 1.816936556s" Mar 11 02:24:41.783019 containerd[1560]: time="2026-03-11T02:24:41.782959191Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Mar 11 02:24:41.787979 containerd[1560]: time="2026-03-11T02:24:41.787908555Z" level=info msg="CreateContainer within sandbox \"3883bf9dfaf3e062979c924d3db6cacd05984034329ce0a5e4beb085cad08bbe\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 11 02:24:41.805919 containerd[1560]: time="2026-03-11T02:24:41.805818037Z" level=info msg="CreateContainer within sandbox \"3883bf9dfaf3e062979c924d3db6cacd05984034329ce0a5e4beb085cad08bbe\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"eec319d3c8301b78efb9c2110d28c8c4c28e7bba3b61c127e3b6881b4ca9b274\"" Mar 11 02:24:41.806625 containerd[1560]: time="2026-03-11T02:24:41.806582057Z" level=info msg="StartContainer for \"eec319d3c8301b78efb9c2110d28c8c4c28e7bba3b61c127e3b6881b4ca9b274\"" Mar 11 02:24:41.938182 containerd[1560]: time="2026-03-11T02:24:41.938025069Z" level=info msg="StartContainer for \"eec319d3c8301b78efb9c2110d28c8c4c28e7bba3b61c127e3b6881b4ca9b274\" returns successfully" Mar 11 02:24:42.690034 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-eec319d3c8301b78efb9c2110d28c8c4c28e7bba3b61c127e3b6881b4ca9b274-rootfs.mount: Deactivated successfully. Mar 11 02:24:42.692049 containerd[1560]: time="2026-03-11T02:24:42.690108628Z" level=info msg="shim disconnected" id=eec319d3c8301b78efb9c2110d28c8c4c28e7bba3b61c127e3b6881b4ca9b274 namespace=k8s.io Mar 11 02:24:42.692049 containerd[1560]: time="2026-03-11T02:24:42.690155875Z" level=warning msg="cleaning up after shim disconnected" id=eec319d3c8301b78efb9c2110d28c8c4c28e7bba3b61c127e3b6881b4ca9b274 namespace=k8s.io Mar 11 02:24:42.692049 containerd[1560]: time="2026-03-11T02:24:42.690164371Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 11 02:24:42.746718 kubelet[2645]: I0311 02:24:42.746669 2645 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Mar 11 02:24:42.841790 kubelet[2645]: I0311 02:24:42.841410 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/36343753-ac2b-410a-b28c-082e5d46c12d-config-volume\") pod \"coredns-674b8bbfcf-nrfsp\" (UID: \"36343753-ac2b-410a-b28c-082e5d46c12d\") " pod="kube-system/coredns-674b8bbfcf-nrfsp" Mar 11 02:24:42.841790 kubelet[2645]: I0311 02:24:42.841463 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2psfj\" (UniqueName: \"kubernetes.io/projected/5cfc5544-cb19-46e9-98ab-95d03c16b97a-kube-api-access-2psfj\") pod \"calico-kube-controllers-54b7f5d88d-jg49r\" (UID: \"5cfc5544-cb19-46e9-98ab-95d03c16b97a\") " pod="calico-system/calico-kube-controllers-54b7f5d88d-jg49r" Mar 11 02:24:42.841790 kubelet[2645]: I0311 02:24:42.841490 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/32818100-783d-4e9a-8ab2-cad80d846e18-whisker-backend-key-pair\") pod \"whisker-75c5945784-vvc5c\" (UID: \"32818100-783d-4e9a-8ab2-cad80d846e18\") " pod="calico-system/whisker-75c5945784-vvc5c" Mar 11 02:24:42.841790 kubelet[2645]: I0311 02:24:42.841545 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xzvm\" (UniqueName: \"kubernetes.io/projected/aa5ed09c-658b-4389-902b-dc4b31e7e361-kube-api-access-2xzvm\") pod \"calico-apiserver-7f6f69c4f8-jnmsd\" (UID: \"aa5ed09c-658b-4389-902b-dc4b31e7e361\") " pod="calico-system/calico-apiserver-7f6f69c4f8-jnmsd" Mar 11 02:24:42.841790 kubelet[2645]: I0311 02:24:42.841571 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r69fh\" (UniqueName: \"kubernetes.io/projected/36343753-ac2b-410a-b28c-082e5d46c12d-kube-api-access-r69fh\") pod \"coredns-674b8bbfcf-nrfsp\" (UID: \"36343753-ac2b-410a-b28c-082e5d46c12d\") " pod="kube-system/coredns-674b8bbfcf-nrfsp" Mar 11 02:24:42.843168 kubelet[2645]: I0311 02:24:42.841593 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb7rf\" (UniqueName: \"kubernetes.io/projected/a125c27c-9122-4fdb-a210-781209ab1769-kube-api-access-rb7rf\") pod \"calico-apiserver-7f6f69c4f8-rjhrp\" (UID: \"a125c27c-9122-4fdb-a210-781209ab1769\") " pod="calico-system/calico-apiserver-7f6f69c4f8-rjhrp" Mar 11 02:24:42.843168 kubelet[2645]: I0311 02:24:42.841618 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/32818100-783d-4e9a-8ab2-cad80d846e18-nginx-config\") pod \"whisker-75c5945784-vvc5c\" (UID: \"32818100-783d-4e9a-8ab2-cad80d846e18\") " pod="calico-system/whisker-75c5945784-vvc5c" Mar 11 02:24:42.843168 kubelet[2645]: I0311 02:24:42.841641 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/32818100-783d-4e9a-8ab2-cad80d846e18-whisker-ca-bundle\") pod \"whisker-75c5945784-vvc5c\" (UID: \"32818100-783d-4e9a-8ab2-cad80d846e18\") " pod="calico-system/whisker-75c5945784-vvc5c" Mar 11 02:24:42.843168 kubelet[2645]: I0311 02:24:42.841670 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/aa5ed09c-658b-4389-902b-dc4b31e7e361-calico-apiserver-certs\") pod \"calico-apiserver-7f6f69c4f8-jnmsd\" (UID: \"aa5ed09c-658b-4389-902b-dc4b31e7e361\") " pod="calico-system/calico-apiserver-7f6f69c4f8-jnmsd" Mar 11 02:24:42.843168 kubelet[2645]: I0311 02:24:42.841698 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-db4d7\" (UniqueName: \"kubernetes.io/projected/32818100-783d-4e9a-8ab2-cad80d846e18-kube-api-access-db4d7\") pod \"whisker-75c5945784-vvc5c\" (UID: \"32818100-783d-4e9a-8ab2-cad80d846e18\") " pod="calico-system/whisker-75c5945784-vvc5c" Mar 11 02:24:42.843282 kubelet[2645]: I0311 02:24:42.841731 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5cfc5544-cb19-46e9-98ab-95d03c16b97a-tigera-ca-bundle\") pod \"calico-kube-controllers-54b7f5d88d-jg49r\" (UID: \"5cfc5544-cb19-46e9-98ab-95d03c16b97a\") " pod="calico-system/calico-kube-controllers-54b7f5d88d-jg49r" Mar 11 02:24:42.843282 kubelet[2645]: I0311 02:24:42.841756 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a125c27c-9122-4fdb-a210-781209ab1769-calico-apiserver-certs\") pod \"calico-apiserver-7f6f69c4f8-rjhrp\" (UID: \"a125c27c-9122-4fdb-a210-781209ab1769\") " pod="calico-system/calico-apiserver-7f6f69c4f8-rjhrp" Mar 11 02:24:42.942597 kubelet[2645]: I0311 02:24:42.942387 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrng6\" (UniqueName: \"kubernetes.io/projected/d34f0a4a-6b4f-4253-99ed-b4cbdf239525-kube-api-access-xrng6\") pod \"coredns-674b8bbfcf-m74p4\" (UID: \"d34f0a4a-6b4f-4253-99ed-b4cbdf239525\") " pod="kube-system/coredns-674b8bbfcf-m74p4" Mar 11 02:24:42.942597 kubelet[2645]: I0311 02:24:42.942449 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc195321-fa68-41a6-b9ce-01e15b82c109-config\") pod \"goldmane-5b85766d88-mht4v\" (UID: \"bc195321-fa68-41a6-b9ce-01e15b82c109\") " pod="calico-system/goldmane-5b85766d88-mht4v" Mar 11 02:24:42.942597 kubelet[2645]: I0311 02:24:42.942481 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qpkc\" (UniqueName: \"kubernetes.io/projected/bc195321-fa68-41a6-b9ce-01e15b82c109-kube-api-access-5qpkc\") pod \"goldmane-5b85766d88-mht4v\" (UID: \"bc195321-fa68-41a6-b9ce-01e15b82c109\") " 
pod="calico-system/goldmane-5b85766d88-mht4v" Mar 11 02:24:42.942597 kubelet[2645]: I0311 02:24:42.942520 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc195321-fa68-41a6-b9ce-01e15b82c109-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-mht4v\" (UID: \"bc195321-fa68-41a6-b9ce-01e15b82c109\") " pod="calico-system/goldmane-5b85766d88-mht4v" Mar 11 02:24:42.942597 kubelet[2645]: I0311 02:24:42.942573 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d34f0a4a-6b4f-4253-99ed-b4cbdf239525-config-volume\") pod \"coredns-674b8bbfcf-m74p4\" (UID: \"d34f0a4a-6b4f-4253-99ed-b4cbdf239525\") " pod="kube-system/coredns-674b8bbfcf-m74p4" Mar 11 02:24:42.942807 kubelet[2645]: I0311 02:24:42.942589 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/bc195321-fa68-41a6-b9ce-01e15b82c109-goldmane-key-pair\") pod \"goldmane-5b85766d88-mht4v\" (UID: \"bc195321-fa68-41a6-b9ce-01e15b82c109\") " pod="calico-system/goldmane-5b85766d88-mht4v" Mar 11 02:24:43.000823 containerd[1560]: time="2026-03-11T02:24:43.000261438Z" level=info msg="CreateContainer within sandbox \"3883bf9dfaf3e062979c924d3db6cacd05984034329ce0a5e4beb085cad08bbe\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 11 02:24:43.020150 containerd[1560]: time="2026-03-11T02:24:43.020000835Z" level=info msg="CreateContainer within sandbox \"3883bf9dfaf3e062979c924d3db6cacd05984034329ce0a5e4beb085cad08bbe\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"0ce5d45839406e3007f1e4802e2b98d4c296a8e7277ff0cc98590cb4a0469510\"" Mar 11 02:24:43.020973 containerd[1560]: time="2026-03-11T02:24:43.020899267Z" level=info msg="StartContainer for \"0ce5d45839406e3007f1e4802e2b98d4c296a8e7277ff0cc98590cb4a0469510\"" Mar 11 02:24:43.100626 containerd[1560]: time="2026-03-11T02:24:43.100524257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f6f69c4f8-jnmsd,Uid:aa5ed09c-658b-4389-902b-dc4b31e7e361,Namespace:calico-system,Attempt:0,}" Mar 11 02:24:43.116193 containerd[1560]: time="2026-03-11T02:24:43.116151474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54b7f5d88d-jg49r,Uid:5cfc5544-cb19-46e9-98ab-95d03c16b97a,Namespace:calico-system,Attempt:0,}" Mar 11 02:24:43.122164 containerd[1560]: time="2026-03-11T02:24:43.121900322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-75c5945784-vvc5c,Uid:32818100-783d-4e9a-8ab2-cad80d846e18,Namespace:calico-system,Attempt:0,}" Mar 11 02:24:43.135176 kubelet[2645]: E0311 02:24:43.135119 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:24:43.135492 containerd[1560]: time="2026-03-11T02:24:43.135306292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f6f69c4f8-rjhrp,Uid:a125c27c-9122-4fdb-a210-781209ab1769,Namespace:calico-system,Attempt:0,}" Mar 11 02:24:43.136990 kubelet[2645]: E0311 02:24:43.136823 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:24:43.137914 containerd[1560]: 
time="2026-03-11T02:24:43.137658879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-m74p4,Uid:d34f0a4a-6b4f-4253-99ed-b4cbdf239525,Namespace:kube-system,Attempt:0,}" Mar 11 02:24:43.137914 containerd[1560]: time="2026-03-11T02:24:43.137709980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nrfsp,Uid:36343753-ac2b-410a-b28c-082e5d46c12d,Namespace:kube-system,Attempt:0,}" Mar 11 02:24:43.141283 containerd[1560]: time="2026-03-11T02:24:43.141111869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-mht4v,Uid:bc195321-fa68-41a6-b9ce-01e15b82c109,Namespace:calico-system,Attempt:0,}" Mar 11 02:24:43.147759 containerd[1560]: time="2026-03-11T02:24:43.147694527Z" level=info msg="StartContainer for \"0ce5d45839406e3007f1e4802e2b98d4c296a8e7277ff0cc98590cb4a0469510\" returns successfully" Mar 11 02:24:43.495258 containerd[1560]: time="2026-03-11T02:24:43.489706083Z" level=error msg="Failed to destroy network for sandbox \"fad13240f312e319ccfa1b4f87abed74a707ec85c630be810aa7f04322bedf11\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 02:24:43.496056 containerd[1560]: time="2026-03-11T02:24:43.495682251Z" level=error msg="encountered an error cleaning up failed sandbox \"fad13240f312e319ccfa1b4f87abed74a707ec85c630be810aa7f04322bedf11\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 02:24:43.505269 containerd[1560]: time="2026-03-11T02:24:43.505210793Z" level=error msg="Failed to destroy network for sandbox \"839fcdfad110871483910cdc9df019dbe70ff975c2c430a06420acbcad4d86c5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 02:24:43.507568 containerd[1560]: time="2026-03-11T02:24:43.507539213Z" level=error msg="encountered an error cleaning up failed sandbox \"839fcdfad110871483910cdc9df019dbe70ff975c2c430a06420acbcad4d86c5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 02:24:43.540812 containerd[1560]: time="2026-03-11T02:24:43.540754546Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f6f69c4f8-jnmsd,Uid:aa5ed09c-658b-4389-902b-dc4b31e7e361,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fad13240f312e319ccfa1b4f87abed74a707ec85c630be810aa7f04322bedf11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 02:24:43.541390 containerd[1560]: time="2026-03-11T02:24:43.541271216Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54b7f5d88d-jg49r,Uid:5cfc5544-cb19-46e9-98ab-95d03c16b97a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"839fcdfad110871483910cdc9df019dbe70ff975c2c430a06420acbcad4d86c5\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 02:24:43.556667 kubelet[2645]: E0311 02:24:43.556302 2645 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"839fcdfad110871483910cdc9df019dbe70ff975c2c430a06420acbcad4d86c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 02:24:43.556667 kubelet[2645]: E0311 02:24:43.556474 2645 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fad13240f312e319ccfa1b4f87abed74a707ec85c630be810aa7f04322bedf11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 02:24:43.556667 kubelet[2645]: E0311 02:24:43.556497 2645 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fad13240f312e319ccfa1b4f87abed74a707ec85c630be810aa7f04322bedf11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7f6f69c4f8-jnmsd" Mar 11 02:24:43.556667 kubelet[2645]: E0311 02:24:43.556561 2645 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fad13240f312e319ccfa1b4f87abed74a707ec85c630be810aa7f04322bedf11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7f6f69c4f8-jnmsd" Mar 11 02:24:43.556912 kubelet[2645]: E0311 02:24:43.556637 2645 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7f6f69c4f8-jnmsd_calico-system(aa5ed09c-658b-4389-902b-dc4b31e7e361)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7f6f69c4f8-jnmsd_calico-system(aa5ed09c-658b-4389-902b-dc4b31e7e361)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fad13240f312e319ccfa1b4f87abed74a707ec85c630be810aa7f04322bedf11\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7f6f69c4f8-jnmsd" podUID="aa5ed09c-658b-4389-902b-dc4b31e7e361" Mar 11 02:24:43.556912 kubelet[2645]: E0311 02:24:43.556888 2645 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"839fcdfad110871483910cdc9df019dbe70ff975c2c430a06420acbcad4d86c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54b7f5d88d-jg49r" Mar 11 02:24:43.557035 kubelet[2645]: E0311 02:24:43.556907 2645 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"839fcdfad110871483910cdc9df019dbe70ff975c2c430a06420acbcad4d86c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54b7f5d88d-jg49r" Mar 11 02:24:43.557185 kubelet[2645]: E0311 02:24:43.557136 2645 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-54b7f5d88d-jg49r_calico-system(5cfc5544-cb19-46e9-98ab-95d03c16b97a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-54b7f5d88d-jg49r_calico-system(5cfc5544-cb19-46e9-98ab-95d03c16b97a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"839fcdfad110871483910cdc9df019dbe70ff975c2c430a06420acbcad4d86c5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-54b7f5d88d-jg49r" podUID="5cfc5544-cb19-46e9-98ab-95d03c16b97a" Mar 11 02:24:43.559969 containerd[1560]: time="2026-03-11T02:24:43.559894785Z" level=error msg="Failed to destroy network for sandbox \"a670c6dad354fc88271d6488a6afd386dbd914482c8eb8e5b6930ef935e62103\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 02:24:43.566791 containerd[1560]: time="2026-03-11T02:24:43.566664199Z" level=error msg="encountered an error cleaning up failed sandbox \"a670c6dad354fc88271d6488a6afd386dbd914482c8eb8e5b6930ef935e62103\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 02:24:43.566877 containerd[1560]: time="2026-03-11T02:24:43.566811072Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f6f69c4f8-rjhrp,Uid:a125c27c-9122-4fdb-a210-781209ab1769,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a670c6dad354fc88271d6488a6afd386dbd914482c8eb8e5b6930ef935e62103\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 02:24:43.568600 kubelet[2645]: E0311 02:24:43.568462 2645 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a670c6dad354fc88271d6488a6afd386dbd914482c8eb8e5b6930ef935e62103\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 02:24:43.568600 kubelet[2645]: E0311 02:24:43.568572 2645 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a670c6dad354fc88271d6488a6afd386dbd914482c8eb8e5b6930ef935e62103\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7f6f69c4f8-rjhrp" Mar 11 02:24:43.568600 kubelet[2645]: E0311 
02:24:43.568594 2645 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a670c6dad354fc88271d6488a6afd386dbd914482c8eb8e5b6930ef935e62103\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7f6f69c4f8-rjhrp" Mar 11 02:24:43.569405 kubelet[2645]: E0311 02:24:43.569272 2645 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7f6f69c4f8-rjhrp_calico-system(a125c27c-9122-4fdb-a210-781209ab1769)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7f6f69c4f8-rjhrp_calico-system(a125c27c-9122-4fdb-a210-781209ab1769)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a670c6dad354fc88271d6488a6afd386dbd914482c8eb8e5b6930ef935e62103\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7f6f69c4f8-rjhrp" podUID="a125c27c-9122-4fdb-a210-781209ab1769" Mar 11 02:24:43.572862 containerd[1560]: time="2026-03-11T02:24:43.572658621Z" level=error msg="Failed to destroy network for sandbox \"b5f5965ec9af727e817b44f1f16fd431c8de950dc13565a45465818f87b98049\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 02:24:43.574440 containerd[1560]: time="2026-03-11T02:24:43.574378310Z" level=error msg="encountered an error cleaning up failed sandbox \"b5f5965ec9af727e817b44f1f16fd431c8de950dc13565a45465818f87b98049\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 02:24:43.575362 containerd[1560]: time="2026-03-11T02:24:43.575160344Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-75c5945784-vvc5c,Uid:32818100-783d-4e9a-8ab2-cad80d846e18,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b5f5965ec9af727e817b44f1f16fd431c8de950dc13565a45465818f87b98049\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 02:24:43.575969 kubelet[2645]: E0311 02:24:43.575875 2645 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5f5965ec9af727e817b44f1f16fd431c8de950dc13565a45465818f87b98049\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 02:24:43.576429 kubelet[2645]: E0311 02:24:43.576168 2645 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5f5965ec9af727e817b44f1f16fd431c8de950dc13565a45465818f87b98049\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/whisker-75c5945784-vvc5c" Mar 11 02:24:43.717859 containerd[1560]: 2026-03-11 02:24:43.627 [INFO][3736] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="362ffdb8a1fa7ac88e889bcf848f6422459c01b7bd5157b17a207d128d301095" Mar 11 02:24:43.717859 containerd[1560]: 2026-03-11 02:24:43.627 [INFO][3736] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="362ffdb8a1fa7ac88e889bcf848f6422459c01b7bd5157b17a207d128d301095" iface="eth0" netns="/var/run/netns/cni-512c6039-5515-903e-b653-46782f005495" Mar 11 02:24:43.717859 containerd[1560]: 2026-03-11 02:24:43.630 [INFO][3736] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="362ffdb8a1fa7ac88e889bcf848f6422459c01b7bd5157b17a207d128d301095" iface="eth0" netns="/var/run/netns/cni-512c6039-5515-903e-b653-46782f005495" Mar 11 02:24:43.717859 containerd[1560]: 2026-03-11 02:24:43.633 [INFO][3736] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="362ffdb8a1fa7ac88e889bcf848f6422459c01b7bd5157b17a207d128d301095" iface="eth0" netns="/var/run/netns/cni-512c6039-5515-903e-b653-46782f005495" Mar 11 02:24:43.717859 containerd[1560]: 2026-03-11 02:24:43.633 [INFO][3736] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="362ffdb8a1fa7ac88e889bcf848f6422459c01b7bd5157b17a207d128d301095" Mar 11 02:24:43.717859 containerd[1560]: 2026-03-11 02:24:43.633 [INFO][3736] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="362ffdb8a1fa7ac88e889bcf848f6422459c01b7bd5157b17a207d128d301095" Mar 11 02:24:43.717859 containerd[1560]: 2026-03-11 02:24:43.678 [INFO][3805] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="362ffdb8a1fa7ac88e889bcf848f6422459c01b7bd5157b17a207d128d301095" HandleID="k8s-pod-network.362ffdb8a1fa7ac88e889bcf848f6422459c01b7bd5157b17a207d128d301095" Workload="localhost-k8s-coredns--674b8bbfcf--m74p4-eth0" Mar 11 02:24:43.717859 containerd[1560]: 2026-03-11 02:24:43.679 [INFO][3805] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:24:43.717859 containerd[1560]: 2026-03-11 02:24:43.679 [INFO][3805] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 02:24:43.717859 containerd[1560]: 2026-03-11 02:24:43.702 [WARNING][3805] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="362ffdb8a1fa7ac88e889bcf848f6422459c01b7bd5157b17a207d128d301095" HandleID="k8s-pod-network.362ffdb8a1fa7ac88e889bcf848f6422459c01b7bd5157b17a207d128d301095" Workload="localhost-k8s-coredns--674b8bbfcf--m74p4-eth0" Mar 11 02:24:43.717859 containerd[1560]: 2026-03-11 02:24:43.702 [INFO][3805] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="362ffdb8a1fa7ac88e889bcf848f6422459c01b7bd5157b17a207d128d301095" HandleID="k8s-pod-network.362ffdb8a1fa7ac88e889bcf848f6422459c01b7bd5157b17a207d128d301095" Workload="localhost-k8s-coredns--674b8bbfcf--m74p4-eth0" Mar 11 02:24:43.717859 containerd[1560]: 2026-03-11 02:24:43.706 [INFO][3805] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 02:24:43.717859 containerd[1560]: 2026-03-11 02:24:43.714 [INFO][3736] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="362ffdb8a1fa7ac88e889bcf848f6422459c01b7bd5157b17a207d128d301095" Mar 11 02:24:43.730190 containerd[1560]: time="2026-03-11T02:24:43.730038829Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-m74p4,Uid:d34f0a4a-6b4f-4253-99ed-b4cbdf239525,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"362ffdb8a1fa7ac88e889bcf848f6422459c01b7bd5157b17a207d128d301095\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 02:24:43.732275 kubelet[2645]: E0311 02:24:43.732238 2645 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"362ffdb8a1fa7ac88e889bcf848f6422459c01b7bd5157b17a207d128d301095\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 02:24:43.732554 kubelet[2645]: E0311 02:24:43.732509 2645 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"362ffdb8a1fa7ac88e889bcf848f6422459c01b7bd5157b17a207d128d301095\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-m74p4" Mar 11 02:24:43.732660 kubelet[2645]: E0311 02:24:43.732612 2645 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"362ffdb8a1fa7ac88e889bcf848f6422459c01b7bd5157b17a207d128d301095\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-m74p4" Mar 11 02:24:43.732884 kubelet[2645]: E0311 02:24:43.732855 2645 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-m74p4_kube-system(d34f0a4a-6b4f-4253-99ed-b4cbdf239525)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-m74p4_kube-system(d34f0a4a-6b4f-4253-99ed-b4cbdf239525)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"362ffdb8a1fa7ac88e889bcf848f6422459c01b7bd5157b17a207d128d301095\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-m74p4" podUID="d34f0a4a-6b4f-4253-99ed-b4cbdf239525" Mar 11 02:24:43.737674 containerd[1560]: 2026-03-11 02:24:43.629 [INFO][3780] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1fedb516d367782f347af9defb3f9888a88e482ebe01441c84f272681e389c72" Mar 11 02:24:43.737674 containerd[1560]: 2026-03-11 02:24:43.630 [INFO][3780] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1fedb516d367782f347af9defb3f9888a88e482ebe01441c84f272681e389c72" iface="eth0" netns="/var/run/netns/cni-3bbdda42-e318-6a90-defb-25ae005f743f" Mar 11 02:24:43.737674 containerd[1560]: 2026-03-11 02:24:43.633 [INFO][3780] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="1fedb516d367782f347af9defb3f9888a88e482ebe01441c84f272681e389c72" iface="eth0" netns="/var/run/netns/cni-3bbdda42-e318-6a90-defb-25ae005f743f" Mar 11 02:24:43.737674 containerd[1560]: 2026-03-11 02:24:43.636 [INFO][3780] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1fedb516d367782f347af9defb3f9888a88e482ebe01441c84f272681e389c72" iface="eth0" netns="/var/run/netns/cni-3bbdda42-e318-6a90-defb-25ae005f743f" Mar 11 02:24:43.737674 containerd[1560]: 2026-03-11 02:24:43.636 [INFO][3780] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1fedb516d367782f347af9defb3f9888a88e482ebe01441c84f272681e389c72" Mar 11 02:24:43.737674 containerd[1560]: 2026-03-11 02:24:43.636 [INFO][3780] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1fedb516d367782f347af9defb3f9888a88e482ebe01441c84f272681e389c72" Mar 11 02:24:43.737674 containerd[1560]: 2026-03-11 02:24:43.685 [INFO][3811] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1fedb516d367782f347af9defb3f9888a88e482ebe01441c84f272681e389c72" HandleID="k8s-pod-network.1fedb516d367782f347af9defb3f9888a88e482ebe01441c84f272681e389c72" Workload="localhost-k8s-goldmane--5b85766d88--mht4v-eth0" Mar 11 02:24:43.737674 containerd[1560]: 2026-03-11 02:24:43.686 [INFO][3811] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:24:43.737674 containerd[1560]: 2026-03-11 02:24:43.706 [INFO][3811] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 02:24:43.737674 containerd[1560]: 2026-03-11 02:24:43.716 [WARNING][3811] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="1fedb516d367782f347af9defb3f9888a88e482ebe01441c84f272681e389c72" HandleID="k8s-pod-network.1fedb516d367782f347af9defb3f9888a88e482ebe01441c84f272681e389c72" Workload="localhost-k8s-goldmane--5b85766d88--mht4v-eth0" Mar 11 02:24:43.737674 containerd[1560]: 2026-03-11 02:24:43.717 [INFO][3811] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1fedb516d367782f347af9defb3f9888a88e482ebe01441c84f272681e389c72" HandleID="k8s-pod-network.1fedb516d367782f347af9defb3f9888a88e482ebe01441c84f272681e389c72" Workload="localhost-k8s-goldmane--5b85766d88--mht4v-eth0" Mar 11 02:24:43.737674 containerd[1560]: 2026-03-11 02:24:43.720 [INFO][3811] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 02:24:43.737674 containerd[1560]: 2026-03-11 02:24:43.725 [INFO][3780] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="1fedb516d367782f347af9defb3f9888a88e482ebe01441c84f272681e389c72" Mar 11 02:24:43.742975 containerd[1560]: time="2026-03-11T02:24:43.742800257Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-mht4v,Uid:bc195321-fa68-41a6-b9ce-01e15b82c109,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1fedb516d367782f347af9defb3f9888a88e482ebe01441c84f272681e389c72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 02:24:43.743549 kubelet[2645]: E0311 02:24:43.743404 2645 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fedb516d367782f347af9defb3f9888a88e482ebe01441c84f272681e389c72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 02:24:43.743549 kubelet[2645]: E0311 02:24:43.743512 2645 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fedb516d367782f347af9defb3f9888a88e482ebe01441c84f272681e389c72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-mht4v" Mar 11 02:24:43.743629 kubelet[2645]: E0311 02:24:43.743550 2645 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fedb516d367782f347af9defb3f9888a88e482ebe01441c84f272681e389c72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-mht4v" Mar 11 02:24:43.743708 kubelet[2645]: E0311 02:24:43.743653 2645 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-mht4v_calico-system(bc195321-fa68-41a6-b9ce-01e15b82c109)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-mht4v_calico-system(bc195321-fa68-41a6-b9ce-01e15b82c109)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1fedb516d367782f347af9defb3f9888a88e482ebe01441c84f272681e389c72\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-mht4v" podUID="bc195321-fa68-41a6-b9ce-01e15b82c109" Mar 11 02:24:43.747939 containerd[1560]: 2026-03-11 02:24:43.635 [INFO][3760] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0d21f9b590459f3c5cf288a35584353274607b704d0d5c6e6cc16523c81a80c9" Mar 11 02:24:43.747939 containerd[1560]: 2026-03-11 02:24:43.636 [INFO][3760] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0d21f9b590459f3c5cf288a35584353274607b704d0d5c6e6cc16523c81a80c9" iface="eth0" netns="/var/run/netns/cni-7f7d37e7-0001-6ee5-49af-52d3476d7300" Mar 11 02:24:43.747939 containerd[1560]: 2026-03-11 02:24:43.636 [INFO][3760] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="0d21f9b590459f3c5cf288a35584353274607b704d0d5c6e6cc16523c81a80c9" iface="eth0" netns="/var/run/netns/cni-7f7d37e7-0001-6ee5-49af-52d3476d7300" Mar 11 02:24:43.747939 containerd[1560]: 2026-03-11 02:24:43.637 [INFO][3760] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0d21f9b590459f3c5cf288a35584353274607b704d0d5c6e6cc16523c81a80c9" iface="eth0" netns="/var/run/netns/cni-7f7d37e7-0001-6ee5-49af-52d3476d7300" Mar 11 02:24:43.747939 containerd[1560]: 2026-03-11 02:24:43.639 [INFO][3760] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0d21f9b590459f3c5cf288a35584353274607b704d0d5c6e6cc16523c81a80c9" Mar 11 02:24:43.747939 containerd[1560]: 2026-03-11 02:24:43.639 [INFO][3760] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0d21f9b590459f3c5cf288a35584353274607b704d0d5c6e6cc16523c81a80c9" Mar 11 02:24:43.747939 containerd[1560]: 2026-03-11 02:24:43.703 [INFO][3813] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0d21f9b590459f3c5cf288a35584353274607b704d0d5c6e6cc16523c81a80c9" HandleID="k8s-pod-network.0d21f9b590459f3c5cf288a35584353274607b704d0d5c6e6cc16523c81a80c9" Workload="localhost-k8s-coredns--674b8bbfcf--nrfsp-eth0" Mar 11 02:24:43.747939 containerd[1560]: 2026-03-11 02:24:43.704 [INFO][3813] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:24:43.747939 containerd[1560]: 2026-03-11 02:24:43.720 [INFO][3813] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 02:24:43.747939 containerd[1560]: 2026-03-11 02:24:43.733 [WARNING][3813] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="0d21f9b590459f3c5cf288a35584353274607b704d0d5c6e6cc16523c81a80c9" HandleID="k8s-pod-network.0d21f9b590459f3c5cf288a35584353274607b704d0d5c6e6cc16523c81a80c9" Workload="localhost-k8s-coredns--674b8bbfcf--nrfsp-eth0" Mar 11 02:24:43.747939 containerd[1560]: 2026-03-11 02:24:43.733 [INFO][3813] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0d21f9b590459f3c5cf288a35584353274607b704d0d5c6e6cc16523c81a80c9" HandleID="k8s-pod-network.0d21f9b590459f3c5cf288a35584353274607b704d0d5c6e6cc16523c81a80c9" Workload="localhost-k8s-coredns--674b8bbfcf--nrfsp-eth0" Mar 11 02:24:43.747939 containerd[1560]: 2026-03-11 02:24:43.737 [INFO][3813] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 02:24:43.747939 containerd[1560]: 2026-03-11 02:24:43.740 [INFO][3760] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="0d21f9b590459f3c5cf288a35584353274607b704d0d5c6e6cc16523c81a80c9" Mar 11 02:24:43.753776 containerd[1560]: time="2026-03-11T02:24:43.753654532Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nrfsp,Uid:36343753-ac2b-410a-b28c-082e5d46c12d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0d21f9b590459f3c5cf288a35584353274607b704d0d5c6e6cc16523c81a80c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 02:24:43.753941 kubelet[2645]: E0311 02:24:43.753885 2645 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d21f9b590459f3c5cf288a35584353274607b704d0d5c6e6cc16523c81a80c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 02:24:43.754022 kubelet[2645]: E0311 02:24:43.753954 2645 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d21f9b590459f3c5cf288a35584353274607b704d0d5c6e6cc16523c81a80c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-nrfsp" Mar 11 02:24:43.754022 kubelet[2645]: E0311 02:24:43.753978 2645 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d21f9b590459f3c5cf288a35584353274607b704d0d5c6e6cc16523c81a80c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-nrfsp" Mar 11 02:24:43.754149 kubelet[2645]: E0311 02:24:43.754017 2645 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-nrfsp_kube-system(36343753-ac2b-410a-b28c-082e5d46c12d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-nrfsp_kube-system(36343753-ac2b-410a-b28c-082e5d46c12d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0d21f9b590459f3c5cf288a35584353274607b704d0d5c6e6cc16523c81a80c9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-nrfsp" podUID="36343753-ac2b-410a-b28c-082e5d46c12d" Mar 11 02:24:43.775366 containerd[1560]: time="2026-03-11T02:24:43.775214989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fpgpj,Uid:93716d33-580c-4cc6-a4e6-074492c5ede3,Namespace:calico-system,Attempt:0,}" Mar 11 02:24:43.980760 systemd-networkd[1248]: cali26edd5cf585: Link UP Mar 11 02:24:43.981962 systemd-networkd[1248]: cali26edd5cf585: Gained carrier Mar 11 02:24:44.003388 containerd[1560]: 2026-03-11 02:24:43.825 [ERROR][3833] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 11 02:24:44.003388 containerd[1560]: 2026-03-11 
02:24:43.840 [INFO][3833] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--fpgpj-eth0 csi-node-driver- calico-system 93716d33-580c-4cc6-a4e6-074492c5ede3 648 0 2026-03-11 02:24:32 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-fpgpj eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali26edd5cf585 [] [] }} ContainerID="97fae4c7aabc458bc4aa34532334fde86ee92f90855040d6b8233ad3eeec387c" Namespace="calico-system" Pod="csi-node-driver-fpgpj" WorkloadEndpoint="localhost-k8s-csi--node--driver--fpgpj-" Mar 11 02:24:44.003388 containerd[1560]: 2026-03-11 02:24:43.841 [INFO][3833] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="97fae4c7aabc458bc4aa34532334fde86ee92f90855040d6b8233ad3eeec387c" Namespace="calico-system" Pod="csi-node-driver-fpgpj" WorkloadEndpoint="localhost-k8s-csi--node--driver--fpgpj-eth0" Mar 11 02:24:44.003388 containerd[1560]: 2026-03-11 02:24:43.890 [INFO][3848] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="97fae4c7aabc458bc4aa34532334fde86ee92f90855040d6b8233ad3eeec387c" HandleID="k8s-pod-network.97fae4c7aabc458bc4aa34532334fde86ee92f90855040d6b8233ad3eeec387c" Workload="localhost-k8s-csi--node--driver--fpgpj-eth0" Mar 11 02:24:44.003388 containerd[1560]: 2026-03-11 02:24:43.900 [INFO][3848] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="97fae4c7aabc458bc4aa34532334fde86ee92f90855040d6b8233ad3eeec387c" HandleID="k8s-pod-network.97fae4c7aabc458bc4aa34532334fde86ee92f90855040d6b8233ad3eeec387c" Workload="localhost-k8s-csi--node--driver--fpgpj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f920), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-fpgpj", "timestamp":"2026-03-11 02:24:43.890162194 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001962c0)} Mar 11 02:24:44.003388 containerd[1560]: 2026-03-11 02:24:43.901 [INFO][3848] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:24:44.003388 containerd[1560]: 2026-03-11 02:24:43.901 [INFO][3848] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 11 02:24:44.003388 containerd[1560]: 2026-03-11 02:24:43.901 [INFO][3848] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 11 02:24:44.003388 containerd[1560]: 2026-03-11 02:24:43.905 [INFO][3848] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.97fae4c7aabc458bc4aa34532334fde86ee92f90855040d6b8233ad3eeec387c" host="localhost" Mar 11 02:24:44.003388 containerd[1560]: 2026-03-11 02:24:43.927 [INFO][3848] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 11 02:24:44.003388 containerd[1560]: 2026-03-11 02:24:43.936 [INFO][3848] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 11 02:24:44.003388 containerd[1560]: 2026-03-11 02:24:43.939 [INFO][3848] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 11 02:24:44.003388 containerd[1560]: 2026-03-11 02:24:43.942 [INFO][3848] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 11 02:24:44.003388 containerd[1560]: 2026-03-11 02:24:43.942 [INFO][3848] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.97fae4c7aabc458bc4aa34532334fde86ee92f90855040d6b8233ad3eeec387c" host="localhost" Mar 11 02:24:44.003388 containerd[1560]: 2026-03-11 02:24:43.946 [INFO][3848] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.97fae4c7aabc458bc4aa34532334fde86ee92f90855040d6b8233ad3eeec387c Mar 11 02:24:44.003388 containerd[1560]: 2026-03-11 02:24:43.950 [INFO][3848] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.97fae4c7aabc458bc4aa34532334fde86ee92f90855040d6b8233ad3eeec387c" host="localhost" Mar 11 02:24:44.003388 containerd[1560]: 2026-03-11 02:24:43.958 [INFO][3848] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.97fae4c7aabc458bc4aa34532334fde86ee92f90855040d6b8233ad3eeec387c" host="localhost" Mar 11 02:24:44.003388 containerd[1560]: 2026-03-11 02:24:43.958 [INFO][3848] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.97fae4c7aabc458bc4aa34532334fde86ee92f90855040d6b8233ad3eeec387c" host="localhost" Mar 11 02:24:44.003388 containerd[1560]: 2026-03-11 02:24:43.958 [INFO][3848] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 11 02:24:44.003388 containerd[1560]: 2026-03-11 02:24:43.958 [INFO][3848] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="97fae4c7aabc458bc4aa34532334fde86ee92f90855040d6b8233ad3eeec387c" HandleID="k8s-pod-network.97fae4c7aabc458bc4aa34532334fde86ee92f90855040d6b8233ad3eeec387c" Workload="localhost-k8s-csi--node--driver--fpgpj-eth0" Mar 11 02:24:44.005514 containerd[1560]: 2026-03-11 02:24:43.961 [INFO][3833] cni-plugin/k8s.go 418: Populated endpoint ContainerID="97fae4c7aabc458bc4aa34532334fde86ee92f90855040d6b8233ad3eeec387c" Namespace="calico-system" Pod="csi-node-driver-fpgpj" WorkloadEndpoint="localhost-k8s-csi--node--driver--fpgpj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--fpgpj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"93716d33-580c-4cc6-a4e6-074492c5ede3", ResourceVersion:"648", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 24, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-fpgpj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali26edd5cf585", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:24:44.005514 containerd[1560]: 2026-03-11 02:24:43.961 [INFO][3833] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="97fae4c7aabc458bc4aa34532334fde86ee92f90855040d6b8233ad3eeec387c" Namespace="calico-system" Pod="csi-node-driver-fpgpj" WorkloadEndpoint="localhost-k8s-csi--node--driver--fpgpj-eth0" Mar 11 02:24:44.005514 containerd[1560]: 2026-03-11 02:24:43.961 [INFO][3833] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali26edd5cf585 ContainerID="97fae4c7aabc458bc4aa34532334fde86ee92f90855040d6b8233ad3eeec387c" Namespace="calico-system" Pod="csi-node-driver-fpgpj" WorkloadEndpoint="localhost-k8s-csi--node--driver--fpgpj-eth0" Mar 11 02:24:44.005514 containerd[1560]: 2026-03-11 02:24:43.982 [INFO][3833] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="97fae4c7aabc458bc4aa34532334fde86ee92f90855040d6b8233ad3eeec387c" Namespace="calico-system" Pod="csi-node-driver-fpgpj" WorkloadEndpoint="localhost-k8s-csi--node--driver--fpgpj-eth0" Mar 11 02:24:44.005514 containerd[1560]: 2026-03-11 02:24:43.983 [INFO][3833] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="97fae4c7aabc458bc4aa34532334fde86ee92f90855040d6b8233ad3eeec387c" Namespace="calico-system" Pod="csi-node-driver-fpgpj" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--fpgpj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--fpgpj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"93716d33-580c-4cc6-a4e6-074492c5ede3", ResourceVersion:"648", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 24, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"97fae4c7aabc458bc4aa34532334fde86ee92f90855040d6b8233ad3eeec387c", Pod:"csi-node-driver-fpgpj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali26edd5cf585", MAC:"9e:0c:b1:fb:b5:36", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:24:44.005514 containerd[1560]: 2026-03-11 02:24:43.998 [INFO][3833] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="97fae4c7aabc458bc4aa34532334fde86ee92f90855040d6b8233ad3eeec387c" Namespace="calico-system" Pod="csi-node-driver-fpgpj" WorkloadEndpoint="localhost-k8s-csi--node--driver--fpgpj-eth0" Mar 11 02:24:44.005946 kubelet[2645]: I0311 02:24:44.005922 2645 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a670c6dad354fc88271d6488a6afd386dbd914482c8eb8e5b6930ef935e62103" Mar 11 02:24:44.010445 kubelet[2645]: I0311 02:24:44.010259 2645 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5f5965ec9af727e817b44f1f16fd431c8de950dc13565a45465818f87b98049" Mar 11 02:24:44.014043 kubelet[2645]: I0311 02:24:44.013495 2645 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="839fcdfad110871483910cdc9df019dbe70ff975c2c430a06420acbcad4d86c5" Mar 11 02:24:44.018543 containerd[1560]: time="2026-03-11T02:24:44.018505820Z" level=info msg="StopPodSandbox for \"839fcdfad110871483910cdc9df019dbe70ff975c2c430a06420acbcad4d86c5\"" Mar 11 02:24:44.019138 kubelet[2645]: I0311 02:24:44.019048 2645 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fad13240f312e319ccfa1b4f87abed74a707ec85c630be810aa7f04322bedf11" Mar 11 02:24:44.019386 containerd[1560]: time="2026-03-11T02:24:44.019254014Z" level=info msg="StopPodSandbox for \"a670c6dad354fc88271d6488a6afd386dbd914482c8eb8e5b6930ef935e62103\"" Mar 11 02:24:44.021170 containerd[1560]: time="2026-03-11T02:24:44.019863749Z" level=info msg="StopPodSandbox for \"b5f5965ec9af727e817b44f1f16fd431c8de950dc13565a45465818f87b98049\"" Mar 11 02:24:44.021170 containerd[1560]: time="2026-03-11T02:24:44.021130635Z" level=info msg="Ensure that sandbox a670c6dad354fc88271d6488a6afd386dbd914482c8eb8e5b6930ef935e62103 in 
task-service has been cleanup successfully" Mar 11 02:24:44.021290 containerd[1560]: time="2026-03-11T02:24:44.021177633Z" level=info msg="Ensure that sandbox b5f5965ec9af727e817b44f1f16fd431c8de950dc13565a45465818f87b98049 in task-service has been cleanup successfully" Mar 11 02:24:44.022490 kubelet[2645]: E0311 02:24:44.021496 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:24:44.022490 kubelet[2645]: E0311 02:24:44.022180 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:24:44.025933 containerd[1560]: time="2026-03-11T02:24:44.025896061Z" level=info msg="Ensure that sandbox 839fcdfad110871483910cdc9df019dbe70ff975c2c430a06420acbcad4d86c5 in task-service has been cleanup successfully" Mar 11 02:24:44.031934 containerd[1560]: time="2026-03-11T02:24:44.026195268Z" level=info msg="StopPodSandbox for \"fad13240f312e319ccfa1b4f87abed74a707ec85c630be810aa7f04322bedf11\"" Mar 11 02:24:44.031934 containerd[1560]: time="2026-03-11T02:24:44.031779719Z" level=info msg="Ensure that sandbox fad13240f312e319ccfa1b4f87abed74a707ec85c630be810aa7f04322bedf11 in task-service has been cleanup successfully" Mar 11 02:24:44.033515 containerd[1560]: time="2026-03-11T02:24:44.026259436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-m74p4,Uid:d34f0a4a-6b4f-4253-99ed-b4cbdf239525,Namespace:kube-system,Attempt:0,}" Mar 11 02:24:44.034676 containerd[1560]: time="2026-03-11T02:24:44.026284513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nrfsp,Uid:36343753-ac2b-410a-b28c-082e5d46c12d,Namespace:kube-system,Attempt:0,}" Mar 11 02:24:44.035896 containerd[1560]: time="2026-03-11T02:24:44.026367678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-mht4v,Uid:bc195321-fa68-41a6-b9ce-01e15b82c109,Namespace:calico-system,Attempt:0,}" Mar 11 02:24:44.067767 containerd[1560]: time="2026-03-11T02:24:44.067406613Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 02:24:44.067767 containerd[1560]: time="2026-03-11T02:24:44.067470090Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 02:24:44.067767 containerd[1560]: time="2026-03-11T02:24:44.067497692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:24:44.067767 containerd[1560]: time="2026-03-11T02:24:44.067602366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:24:44.172928 systemd-resolved[1458]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 11 02:24:44.193215 kubelet[2645]: I0311 02:24:44.192498 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-bb67r" podStartSLOduration=3.220473868 podStartE2EDuration="12.192476607s" podCreationTimestamp="2026-03-11 02:24:32 +0000 UTC" firstStartedPulling="2026-03-11 02:24:32.811734781 +0000 UTC m=+15.228320234" lastFinishedPulling="2026-03-11 02:24:41.78373752 +0000 UTC m=+24.200322973" observedRunningTime="2026-03-11 02:24:44.030525494 +0000 UTC m=+26.447110947" watchObservedRunningTime="2026-03-11 02:24:44.192476607 +0000 UTC m=+26.609062090" Mar 11 02:24:44.236550 containerd[1560]: time="2026-03-11T02:24:44.233654918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fpgpj,Uid:93716d33-580c-4cc6-a4e6-074492c5ede3,Namespace:calico-system,Attempt:0,} returns sandbox id \"97fae4c7aabc458bc4aa34532334fde86ee92f90855040d6b8233ad3eeec387c\"" Mar 11 02:24:44.243105 containerd[1560]: time="2026-03-11T02:24:44.242770202Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 11 02:24:44.354799 containerd[1560]: 2026-03-11 02:24:44.190 [INFO][3937] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fad13240f312e319ccfa1b4f87abed74a707ec85c630be810aa7f04322bedf11" Mar 11 02:24:44.354799 containerd[1560]: 2026-03-11 02:24:44.192 [INFO][3937] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fad13240f312e319ccfa1b4f87abed74a707ec85c630be810aa7f04322bedf11" iface="eth0" netns="/var/run/netns/cni-47a8be5a-189d-8e16-1f46-8b7422b459b3" Mar 11 02:24:44.354799 containerd[1560]: 2026-03-11 02:24:44.193 [INFO][3937] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fad13240f312e319ccfa1b4f87abed74a707ec85c630be810aa7f04322bedf11" iface="eth0" netns="/var/run/netns/cni-47a8be5a-189d-8e16-1f46-8b7422b459b3" Mar 11 02:24:44.354799 containerd[1560]: 2026-03-11 02:24:44.193 [INFO][3937] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fad13240f312e319ccfa1b4f87abed74a707ec85c630be810aa7f04322bedf11" iface="eth0" netns="/var/run/netns/cni-47a8be5a-189d-8e16-1f46-8b7422b459b3" Mar 11 02:24:44.354799 containerd[1560]: 2026-03-11 02:24:44.193 [INFO][3937] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fad13240f312e319ccfa1b4f87abed74a707ec85c630be810aa7f04322bedf11" Mar 11 02:24:44.354799 containerd[1560]: 2026-03-11 02:24:44.193 [INFO][3937] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fad13240f312e319ccfa1b4f87abed74a707ec85c630be810aa7f04322bedf11" Mar 11 02:24:44.354799 containerd[1560]: 2026-03-11 02:24:44.321 [INFO][4024] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fad13240f312e319ccfa1b4f87abed74a707ec85c630be810aa7f04322bedf11" HandleID="k8s-pod-network.fad13240f312e319ccfa1b4f87abed74a707ec85c630be810aa7f04322bedf11" Workload="localhost-k8s-calico--apiserver--7f6f69c4f8--jnmsd-eth0" Mar 11 02:24:44.354799 containerd[1560]: 2026-03-11 02:24:44.322 [INFO][4024] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:24:44.354799 containerd[1560]: 2026-03-11 02:24:44.322 [INFO][4024] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 11 02:24:44.354799 containerd[1560]: 2026-03-11 02:24:44.332 [WARNING][4024] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="fad13240f312e319ccfa1b4f87abed74a707ec85c630be810aa7f04322bedf11" HandleID="k8s-pod-network.fad13240f312e319ccfa1b4f87abed74a707ec85c630be810aa7f04322bedf11" Workload="localhost-k8s-calico--apiserver--7f6f69c4f8--jnmsd-eth0" Mar 11 02:24:44.354799 containerd[1560]: 2026-03-11 02:24:44.332 [INFO][4024] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fad13240f312e319ccfa1b4f87abed74a707ec85c630be810aa7f04322bedf11" HandleID="k8s-pod-network.fad13240f312e319ccfa1b4f87abed74a707ec85c630be810aa7f04322bedf11" Workload="localhost-k8s-calico--apiserver--7f6f69c4f8--jnmsd-eth0" Mar 11 02:24:44.354799 containerd[1560]: 2026-03-11 02:24:44.337 [INFO][4024] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 02:24:44.354799 containerd[1560]: 2026-03-11 02:24:44.351 [INFO][3937] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fad13240f312e319ccfa1b4f87abed74a707ec85c630be810aa7f04322bedf11" Mar 11 02:24:44.356858 containerd[1560]: time="2026-03-11T02:24:44.355775093Z" level=info msg="TearDown network for sandbox \"fad13240f312e319ccfa1b4f87abed74a707ec85c630be810aa7f04322bedf11\" successfully" Mar 11 02:24:44.356858 containerd[1560]: time="2026-03-11T02:24:44.355812923Z" level=info msg="StopPodSandbox for \"fad13240f312e319ccfa1b4f87abed74a707ec85c630be810aa7f04322bedf11\" returns successfully" Mar 11 02:24:44.363420 containerd[1560]: time="2026-03-11T02:24:44.363293674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f6f69c4f8-jnmsd,Uid:aa5ed09c-658b-4389-902b-dc4b31e7e361,Namespace:calico-system,Attempt:1,}" Mar 11 02:24:44.455111 systemd-networkd[1248]: cali6a343b61fcc: Link UP Mar 11 02:24:44.458198 systemd-networkd[1248]: cali6a343b61fcc: Gained carrier Mar 11 02:24:44.471391 containerd[1560]: 2026-03-11 02:24:44.235 [INFO][3929] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b5f5965ec9af727e817b44f1f16fd431c8de950dc13565a45465818f87b98049" Mar 11 02:24:44.471391 containerd[1560]: 2026-03-11 02:24:44.237 [INFO][3929] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b5f5965ec9af727e817b44f1f16fd431c8de950dc13565a45465818f87b98049" iface="eth0" netns="/var/run/netns/cni-87765698-0fd8-676d-ec75-de9793f3a2d2" Mar 11 02:24:44.471391 containerd[1560]: 2026-03-11 02:24:44.237 [INFO][3929] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b5f5965ec9af727e817b44f1f16fd431c8de950dc13565a45465818f87b98049" iface="eth0" netns="/var/run/netns/cni-87765698-0fd8-676d-ec75-de9793f3a2d2" Mar 11 02:24:44.471391 containerd[1560]: 2026-03-11 02:24:44.238 [INFO][3929] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b5f5965ec9af727e817b44f1f16fd431c8de950dc13565a45465818f87b98049" iface="eth0" netns="/var/run/netns/cni-87765698-0fd8-676d-ec75-de9793f3a2d2" Mar 11 02:24:44.471391 containerd[1560]: 2026-03-11 02:24:44.238 [INFO][3929] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b5f5965ec9af727e817b44f1f16fd431c8de950dc13565a45465818f87b98049" Mar 11 02:24:44.471391 containerd[1560]: 2026-03-11 02:24:44.241 [INFO][3929] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b5f5965ec9af727e817b44f1f16fd431c8de950dc13565a45465818f87b98049" Mar 11 02:24:44.471391 containerd[1560]: 2026-03-11 02:24:44.372 [INFO][4044] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b5f5965ec9af727e817b44f1f16fd431c8de950dc13565a45465818f87b98049" HandleID="k8s-pod-network.b5f5965ec9af727e817b44f1f16fd431c8de950dc13565a45465818f87b98049" Workload="localhost-k8s-whisker--75c5945784--vvc5c-eth0" Mar 11 02:24:44.471391 containerd[1560]: 2026-03-11 02:24:44.373 [INFO][4044] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:24:44.471391 containerd[1560]: 2026-03-11 02:24:44.441 [INFO][4044] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 02:24:44.471391 containerd[1560]: 2026-03-11 02:24:44.452 [WARNING][4044] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="b5f5965ec9af727e817b44f1f16fd431c8de950dc13565a45465818f87b98049" HandleID="k8s-pod-network.b5f5965ec9af727e817b44f1f16fd431c8de950dc13565a45465818f87b98049" Workload="localhost-k8s-whisker--75c5945784--vvc5c-eth0" Mar 11 02:24:44.471391 containerd[1560]: 2026-03-11 02:24:44.452 [INFO][4044] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b5f5965ec9af727e817b44f1f16fd431c8de950dc13565a45465818f87b98049" HandleID="k8s-pod-network.b5f5965ec9af727e817b44f1f16fd431c8de950dc13565a45465818f87b98049" Workload="localhost-k8s-whisker--75c5945784--vvc5c-eth0" Mar 11 02:24:44.471391 containerd[1560]: 2026-03-11 02:24:44.454 [INFO][4044] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 02:24:44.471391 containerd[1560]: 2026-03-11 02:24:44.462 [INFO][3929] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b5f5965ec9af727e817b44f1f16fd431c8de950dc13565a45465818f87b98049" Mar 11 02:24:44.478423 containerd[1560]: time="2026-03-11T02:24:44.471494528Z" level=info msg="TearDown network for sandbox \"b5f5965ec9af727e817b44f1f16fd431c8de950dc13565a45465818f87b98049\" successfully" Mar 11 02:24:44.478423 containerd[1560]: time="2026-03-11T02:24:44.471524314Z" level=info msg="StopPodSandbox for \"b5f5965ec9af727e817b44f1f16fd431c8de950dc13565a45465818f87b98049\" returns successfully" Mar 11 02:24:44.485482 containerd[1560]: 2026-03-11 02:24:44.293 [INFO][3943] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="839fcdfad110871483910cdc9df019dbe70ff975c2c430a06420acbcad4d86c5" Mar 11 02:24:44.485482 containerd[1560]: 2026-03-11 02:24:44.294 [INFO][3943] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="839fcdfad110871483910cdc9df019dbe70ff975c2c430a06420acbcad4d86c5" iface="eth0" netns="/var/run/netns/cni-e9236271-a931-591e-53b3-ab92a61b9929" Mar 11 02:24:44.485482 containerd[1560]: 2026-03-11 02:24:44.295 [INFO][3943] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="839fcdfad110871483910cdc9df019dbe70ff975c2c430a06420acbcad4d86c5" iface="eth0" netns="/var/run/netns/cni-e9236271-a931-591e-53b3-ab92a61b9929" Mar 11 02:24:44.485482 containerd[1560]: 2026-03-11 02:24:44.295 [INFO][3943] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="839fcdfad110871483910cdc9df019dbe70ff975c2c430a06420acbcad4d86c5" iface="eth0" netns="/var/run/netns/cni-e9236271-a931-591e-53b3-ab92a61b9929" Mar 11 02:24:44.485482 containerd[1560]: 2026-03-11 02:24:44.295 [INFO][3943] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="839fcdfad110871483910cdc9df019dbe70ff975c2c430a06420acbcad4d86c5" Mar 11 02:24:44.485482 containerd[1560]: 2026-03-11 02:24:44.296 [INFO][3943] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="839fcdfad110871483910cdc9df019dbe70ff975c2c430a06420acbcad4d86c5" Mar 11 02:24:44.485482 containerd[1560]: 2026-03-11 02:24:44.382 [INFO][4076] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="839fcdfad110871483910cdc9df019dbe70ff975c2c430a06420acbcad4d86c5" HandleID="k8s-pod-network.839fcdfad110871483910cdc9df019dbe70ff975c2c430a06420acbcad4d86c5" Workload="localhost-k8s-calico--kube--controllers--54b7f5d88d--jg49r-eth0" Mar 11 02:24:44.485482 containerd[1560]: 2026-03-11 02:24:44.382 [INFO][4076] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:24:44.485482 containerd[1560]: 2026-03-11 02:24:44.454 [INFO][4076] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 02:24:44.485482 containerd[1560]: 2026-03-11 02:24:44.462 [WARNING][4076] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="839fcdfad110871483910cdc9df019dbe70ff975c2c430a06420acbcad4d86c5" HandleID="k8s-pod-network.839fcdfad110871483910cdc9df019dbe70ff975c2c430a06420acbcad4d86c5" Workload="localhost-k8s-calico--kube--controllers--54b7f5d88d--jg49r-eth0" Mar 11 02:24:44.485482 containerd[1560]: 2026-03-11 02:24:44.462 [INFO][4076] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="839fcdfad110871483910cdc9df019dbe70ff975c2c430a06420acbcad4d86c5" HandleID="k8s-pod-network.839fcdfad110871483910cdc9df019dbe70ff975c2c430a06420acbcad4d86c5" Workload="localhost-k8s-calico--kube--controllers--54b7f5d88d--jg49r-eth0" Mar 11 02:24:44.485482 containerd[1560]: 2026-03-11 02:24:44.466 [INFO][4076] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 02:24:44.485482 containerd[1560]: 2026-03-11 02:24:44.471 [INFO][3943] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="839fcdfad110871483910cdc9df019dbe70ff975c2c430a06420acbcad4d86c5" Mar 11 02:24:44.487875 containerd[1560]: time="2026-03-11T02:24:44.485977479Z" level=info msg="TearDown network for sandbox \"839fcdfad110871483910cdc9df019dbe70ff975c2c430a06420acbcad4d86c5\" successfully" Mar 11 02:24:44.487875 containerd[1560]: time="2026-03-11T02:24:44.486088175Z" level=info msg="StopPodSandbox for \"839fcdfad110871483910cdc9df019dbe70ff975c2c430a06420acbcad4d86c5\" returns successfully" Mar 11 02:24:44.490607 containerd[1560]: time="2026-03-11T02:24:44.490306364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54b7f5d88d-jg49r,Uid:5cfc5544-cb19-46e9-98ab-95d03c16b97a,Namespace:calico-system,Attempt:1,}" Mar 11 02:24:44.505642 containerd[1560]: 2026-03-11 02:24:44.197 [ERROR][3966] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 11 02:24:44.505642 containerd[1560]: 2026-03-11 02:24:44.222 [INFO][3966] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--m74p4-eth0 coredns-674b8bbfcf- kube-system d34f0a4a-6b4f-4253-99ed-b4cbdf239525 903 0 2026-03-11 02:24:23 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-m74p4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6a343b61fcc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9f74c07a16ad0f849610dbc996d23b17b763eca4a2cefa6a6cea424dc5f1c09b" Namespace="kube-system" Pod="coredns-674b8bbfcf-m74p4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--m74p4-" Mar 11 02:24:44.505642 containerd[1560]: 2026-03-11 02:24:44.222 [INFO][3966] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9f74c07a16ad0f849610dbc996d23b17b763eca4a2cefa6a6cea424dc5f1c09b" Namespace="kube-system" Pod="coredns-674b8bbfcf-m74p4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--m74p4-eth0" Mar 11 02:24:44.505642 containerd[1560]: 2026-03-11 02:24:44.339 [INFO][4041] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9f74c07a16ad0f849610dbc996d23b17b763eca4a2cefa6a6cea424dc5f1c09b" HandleID="k8s-pod-network.9f74c07a16ad0f849610dbc996d23b17b763eca4a2cefa6a6cea424dc5f1c09b" Workload="localhost-k8s-coredns--674b8bbfcf--m74p4-eth0" Mar 11 02:24:44.505642 containerd[1560]: 2026-03-11 02:24:44.366 [INFO][4041] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="9f74c07a16ad0f849610dbc996d23b17b763eca4a2cefa6a6cea424dc5f1c09b" HandleID="k8s-pod-network.9f74c07a16ad0f849610dbc996d23b17b763eca4a2cefa6a6cea424dc5f1c09b" Workload="localhost-k8s-coredns--674b8bbfcf--m74p4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000668340), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-m74p4", "timestamp":"2026-03-11 02:24:44.339918489 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000297ce0)} Mar 11 02:24:44.505642 containerd[1560]: 2026-03-11 02:24:44.366 [INFO][4041] 
ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:24:44.505642 containerd[1560]: 2026-03-11 02:24:44.366 [INFO][4041] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 02:24:44.505642 containerd[1560]: 2026-03-11 02:24:44.367 [INFO][4041] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 11 02:24:44.505642 containerd[1560]: 2026-03-11 02:24:44.373 [INFO][4041] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.9f74c07a16ad0f849610dbc996d23b17b763eca4a2cefa6a6cea424dc5f1c09b" host="localhost" Mar 11 02:24:44.505642 containerd[1560]: 2026-03-11 02:24:44.387 [INFO][4041] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 11 02:24:44.505642 containerd[1560]: 2026-03-11 02:24:44.400 [INFO][4041] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 11 02:24:44.505642 containerd[1560]: 2026-03-11 02:24:44.405 [INFO][4041] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 11 02:24:44.505642 containerd[1560]: 2026-03-11 02:24:44.408 [INFO][4041] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 11 02:24:44.505642 containerd[1560]: 2026-03-11 02:24:44.408 [INFO][4041] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9f74c07a16ad0f849610dbc996d23b17b763eca4a2cefa6a6cea424dc5f1c09b" host="localhost" Mar 11 02:24:44.505642 containerd[1560]: 2026-03-11 02:24:44.412 [INFO][4041] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.9f74c07a16ad0f849610dbc996d23b17b763eca4a2cefa6a6cea424dc5f1c09b Mar 11 02:24:44.505642 containerd[1560]: 2026-03-11 02:24:44.422 [INFO][4041] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9f74c07a16ad0f849610dbc996d23b17b763eca4a2cefa6a6cea424dc5f1c09b" host="localhost" Mar 11 02:24:44.505642 containerd[1560]: 2026-03-11 02:24:44.441 [INFO][4041] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.9f74c07a16ad0f849610dbc996d23b17b763eca4a2cefa6a6cea424dc5f1c09b" host="localhost" Mar 11 02:24:44.505642 containerd[1560]: 2026-03-11 02:24:44.441 [INFO][4041] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.9f74c07a16ad0f849610dbc996d23b17b763eca4a2cefa6a6cea424dc5f1c09b" host="localhost" Mar 11 02:24:44.505642 containerd[1560]: 2026-03-11 02:24:44.441 [INFO][4041] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 11 02:24:44.505642 containerd[1560]: 2026-03-11 02:24:44.441 [INFO][4041] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="9f74c07a16ad0f849610dbc996d23b17b763eca4a2cefa6a6cea424dc5f1c09b" HandleID="k8s-pod-network.9f74c07a16ad0f849610dbc996d23b17b763eca4a2cefa6a6cea424dc5f1c09b" Workload="localhost-k8s-coredns--674b8bbfcf--m74p4-eth0" Mar 11 02:24:44.506617 containerd[1560]: 2026-03-11 02:24:44.446 [INFO][3966] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9f74c07a16ad0f849610dbc996d23b17b763eca4a2cefa6a6cea424dc5f1c09b" Namespace="kube-system" Pod="coredns-674b8bbfcf-m74p4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--m74p4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--m74p4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d34f0a4a-6b4f-4253-99ed-b4cbdf239525", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 24, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-m74p4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6a343b61fcc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:24:44.506617 containerd[1560]: 2026-03-11 02:24:44.447 [INFO][3966] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="9f74c07a16ad0f849610dbc996d23b17b763eca4a2cefa6a6cea424dc5f1c09b" Namespace="kube-system" Pod="coredns-674b8bbfcf-m74p4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--m74p4-eth0" Mar 11 02:24:44.506617 containerd[1560]: 2026-03-11 02:24:44.447 [INFO][3966] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6a343b61fcc ContainerID="9f74c07a16ad0f849610dbc996d23b17b763eca4a2cefa6a6cea424dc5f1c09b" Namespace="kube-system" Pod="coredns-674b8bbfcf-m74p4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--m74p4-eth0" Mar 11 02:24:44.506617 containerd[1560]: 2026-03-11 02:24:44.462 [INFO][3966] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9f74c07a16ad0f849610dbc996d23b17b763eca4a2cefa6a6cea424dc5f1c09b" Namespace="kube-system" Pod="coredns-674b8bbfcf-m74p4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--m74p4-eth0" Mar 11 02:24:44.506617 
containerd[1560]: 2026-03-11 02:24:44.463 [INFO][3966] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9f74c07a16ad0f849610dbc996d23b17b763eca4a2cefa6a6cea424dc5f1c09b" Namespace="kube-system" Pod="coredns-674b8bbfcf-m74p4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--m74p4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--m74p4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d34f0a4a-6b4f-4253-99ed-b4cbdf239525", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 24, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9f74c07a16ad0f849610dbc996d23b17b763eca4a2cefa6a6cea424dc5f1c09b", Pod:"coredns-674b8bbfcf-m74p4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6a343b61fcc", MAC:"9a:96:5e:fb:63:40", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:24:44.506617 containerd[1560]: 2026-03-11 02:24:44.497 [INFO][3966] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9f74c07a16ad0f849610dbc996d23b17b763eca4a2cefa6a6cea424dc5f1c09b" Namespace="kube-system" Pod="coredns-674b8bbfcf-m74p4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--m74p4-eth0" Mar 11 02:24:44.512056 containerd[1560]: 2026-03-11 02:24:44.257 [INFO][3924] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a670c6dad354fc88271d6488a6afd386dbd914482c8eb8e5b6930ef935e62103" Mar 11 02:24:44.512056 containerd[1560]: 2026-03-11 02:24:44.257 [INFO][3924] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a670c6dad354fc88271d6488a6afd386dbd914482c8eb8e5b6930ef935e62103" iface="eth0" netns="/var/run/netns/cni-a863d912-1b20-70bd-f284-c8d4880e5227" Mar 11 02:24:44.512056 containerd[1560]: 2026-03-11 02:24:44.259 [INFO][3924] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a670c6dad354fc88271d6488a6afd386dbd914482c8eb8e5b6930ef935e62103" iface="eth0" netns="/var/run/netns/cni-a863d912-1b20-70bd-f284-c8d4880e5227" Mar 11 02:24:44.512056 containerd[1560]: 2026-03-11 02:24:44.260 [INFO][3924] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a670c6dad354fc88271d6488a6afd386dbd914482c8eb8e5b6930ef935e62103" iface="eth0" netns="/var/run/netns/cni-a863d912-1b20-70bd-f284-c8d4880e5227" Mar 11 02:24:44.512056 containerd[1560]: 2026-03-11 02:24:44.260 [INFO][3924] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a670c6dad354fc88271d6488a6afd386dbd914482c8eb8e5b6930ef935e62103" Mar 11 02:24:44.512056 containerd[1560]: 2026-03-11 02:24:44.260 [INFO][3924] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a670c6dad354fc88271d6488a6afd386dbd914482c8eb8e5b6930ef935e62103" Mar 11 02:24:44.512056 containerd[1560]: 2026-03-11 02:24:44.387 [INFO][4058] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a670c6dad354fc88271d6488a6afd386dbd914482c8eb8e5b6930ef935e62103" HandleID="k8s-pod-network.a670c6dad354fc88271d6488a6afd386dbd914482c8eb8e5b6930ef935e62103" Workload="localhost-k8s-calico--apiserver--7f6f69c4f8--rjhrp-eth0" Mar 11 02:24:44.512056 containerd[1560]: 2026-03-11 02:24:44.387 [INFO][4058] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:24:44.512056 containerd[1560]: 2026-03-11 02:24:44.467 [INFO][4058] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 02:24:44.512056 containerd[1560]: 2026-03-11 02:24:44.492 [WARNING][4058] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="a670c6dad354fc88271d6488a6afd386dbd914482c8eb8e5b6930ef935e62103" HandleID="k8s-pod-network.a670c6dad354fc88271d6488a6afd386dbd914482c8eb8e5b6930ef935e62103" Workload="localhost-k8s-calico--apiserver--7f6f69c4f8--rjhrp-eth0" Mar 11 02:24:44.512056 containerd[1560]: 2026-03-11 02:24:44.492 [INFO][4058] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a670c6dad354fc88271d6488a6afd386dbd914482c8eb8e5b6930ef935e62103" HandleID="k8s-pod-network.a670c6dad354fc88271d6488a6afd386dbd914482c8eb8e5b6930ef935e62103" Workload="localhost-k8s-calico--apiserver--7f6f69c4f8--rjhrp-eth0" Mar 11 02:24:44.512056 containerd[1560]: 2026-03-11 02:24:44.495 [INFO][4058] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 02:24:44.512056 containerd[1560]: 2026-03-11 02:24:44.502 [INFO][3924] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a670c6dad354fc88271d6488a6afd386dbd914482c8eb8e5b6930ef935e62103" Mar 11 02:24:44.514450 containerd[1560]: time="2026-03-11T02:24:44.513792142Z" level=info msg="TearDown network for sandbox \"a670c6dad354fc88271d6488a6afd386dbd914482c8eb8e5b6930ef935e62103\" successfully" Mar 11 02:24:44.514523 containerd[1560]: time="2026-03-11T02:24:44.514465129Z" level=info msg="StopPodSandbox for \"a670c6dad354fc88271d6488a6afd386dbd914482c8eb8e5b6930ef935e62103\" returns successfully" Mar 11 02:24:44.515784 containerd[1560]: time="2026-03-11T02:24:44.515684758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f6f69c4f8-rjhrp,Uid:a125c27c-9122-4fdb-a210-781209ab1769,Namespace:calico-system,Attempt:1,}" Mar 11 02:24:44.564439 containerd[1560]: time="2026-03-11T02:24:44.564179787Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 02:24:44.564439 containerd[1560]: time="2026-03-11T02:24:44.564292015Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 02:24:44.564623 kubelet[2645]: I0311 02:24:44.564489 2645 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-db4d7\" (UniqueName: \"kubernetes.io/projected/32818100-783d-4e9a-8ab2-cad80d846e18-kube-api-access-db4d7\") pod \"32818100-783d-4e9a-8ab2-cad80d846e18\" (UID: \"32818100-783d-4e9a-8ab2-cad80d846e18\") " Mar 11 02:24:44.564623 kubelet[2645]: I0311 02:24:44.564530 2645 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/32818100-783d-4e9a-8ab2-cad80d846e18-whisker-backend-key-pair\") pod \"32818100-783d-4e9a-8ab2-cad80d846e18\" (UID: \"32818100-783d-4e9a-8ab2-cad80d846e18\") " Mar 11 02:24:44.564623 kubelet[2645]: I0311 02:24:44.564560 2645 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/32818100-783d-4e9a-8ab2-cad80d846e18-nginx-config\") pod \"32818100-783d-4e9a-8ab2-cad80d846e18\" (UID: \"32818100-783d-4e9a-8ab2-cad80d846e18\") " Mar 11 02:24:44.564623 kubelet[2645]: I0311 02:24:44.564579 2645 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/32818100-783d-4e9a-8ab2-cad80d846e18-whisker-ca-bundle\") pod \"32818100-783d-4e9a-8ab2-cad80d846e18\" (UID: \"32818100-783d-4e9a-8ab2-cad80d846e18\") " Mar 11 02:24:44.565206 kubelet[2645]: I0311 02:24:44.565129 2645 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32818100-783d-4e9a-8ab2-cad80d846e18-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "32818100-783d-4e9a-8ab2-cad80d846e18" (UID: "32818100-783d-4e9a-8ab2-cad80d846e18"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 11 02:24:44.569383 kubelet[2645]: I0311 02:24:44.568394 2645 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32818100-783d-4e9a-8ab2-cad80d846e18-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "32818100-783d-4e9a-8ab2-cad80d846e18" (UID: "32818100-783d-4e9a-8ab2-cad80d846e18"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 11 02:24:44.573526 kubelet[2645]: I0311 02:24:44.573449 2645 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32818100-783d-4e9a-8ab2-cad80d846e18-kube-api-access-db4d7" (OuterVolumeSpecName: "kube-api-access-db4d7") pod "32818100-783d-4e9a-8ab2-cad80d846e18" (UID: "32818100-783d-4e9a-8ab2-cad80d846e18"). InnerVolumeSpecName "kube-api-access-db4d7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 11 02:24:44.574521 containerd[1560]: time="2026-03-11T02:24:44.566275213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:24:44.574521 containerd[1560]: time="2026-03-11T02:24:44.566491345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:24:44.575177 kubelet[2645]: I0311 02:24:44.575110 2645 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32818100-783d-4e9a-8ab2-cad80d846e18-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "32818100-783d-4e9a-8ab2-cad80d846e18" (UID: "32818100-783d-4e9a-8ab2-cad80d846e18"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 11 02:24:44.580005 systemd-networkd[1248]: cali48bb12270a0: Link UP Mar 11 02:24:44.582580 systemd-networkd[1248]: cali48bb12270a0: Gained carrier Mar 11 02:24:44.666220 kubelet[2645]: I0311 02:24:44.665948 2645 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/32818100-783d-4e9a-8ab2-cad80d846e18-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Mar 11 02:24:44.666220 kubelet[2645]: I0311 02:24:44.666026 2645 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-db4d7\" (UniqueName: \"kubernetes.io/projected/32818100-783d-4e9a-8ab2-cad80d846e18-kube-api-access-db4d7\") on node \"localhost\" DevicePath \"\"" Mar 11 02:24:44.666220 kubelet[2645]: I0311 02:24:44.666047 2645 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/32818100-783d-4e9a-8ab2-cad80d846e18-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Mar 11 02:24:44.666220 kubelet[2645]: I0311 02:24:44.666096 2645 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/32818100-783d-4e9a-8ab2-cad80d846e18-nginx-config\") on node \"localhost\" DevicePath \"\"" Mar 11 02:24:44.682938 containerd[1560]: 2026-03-11 02:24:44.212 [ERROR][3965] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 11 02:24:44.682938 containerd[1560]: 2026-03-11 02:24:44.238 [INFO][3965] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--nrfsp-eth0 coredns-674b8bbfcf- kube-system 36343753-ac2b-410a-b28c-082e5d46c12d 905 0 2026-03-11 02:24:23 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-nrfsp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali48bb12270a0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3348a5713206a5ed4142f1e4a2bd63b9a4415e3b10ed667888f81368e32a6696" Namespace="kube-system" Pod="coredns-674b8bbfcf-nrfsp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nrfsp-" Mar 11 02:24:44.682938 containerd[1560]: 2026-03-11 02:24:44.241 [INFO][3965] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3348a5713206a5ed4142f1e4a2bd63b9a4415e3b10ed667888f81368e32a6696" Namespace="kube-system" Pod="coredns-674b8bbfcf-nrfsp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nrfsp-eth0" Mar 11 02:24:44.682938 containerd[1560]: 2026-03-11 02:24:44.386 [INFO][4059] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3348a5713206a5ed4142f1e4a2bd63b9a4415e3b10ed667888f81368e32a6696" 
HandleID="k8s-pod-network.3348a5713206a5ed4142f1e4a2bd63b9a4415e3b10ed667888f81368e32a6696" Workload="localhost-k8s-coredns--674b8bbfcf--nrfsp-eth0" Mar 11 02:24:44.682938 containerd[1560]: 2026-03-11 02:24:44.403 [INFO][4059] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="3348a5713206a5ed4142f1e4a2bd63b9a4415e3b10ed667888f81368e32a6696" HandleID="k8s-pod-network.3348a5713206a5ed4142f1e4a2bd63b9a4415e3b10ed667888f81368e32a6696" Workload="localhost-k8s-coredns--674b8bbfcf--nrfsp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00059fe40), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-nrfsp", "timestamp":"2026-03-11 02:24:44.386579981 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000b14a0)} Mar 11 02:24:44.682938 containerd[1560]: 2026-03-11 02:24:44.403 [INFO][4059] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:24:44.682938 containerd[1560]: 2026-03-11 02:24:44.495 [INFO][4059] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 02:24:44.682938 containerd[1560]: 2026-03-11 02:24:44.496 [INFO][4059] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 11 02:24:44.682938 containerd[1560]: 2026-03-11 02:24:44.504 [INFO][4059] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.3348a5713206a5ed4142f1e4a2bd63b9a4415e3b10ed667888f81368e32a6696" host="localhost" Mar 11 02:24:44.682938 containerd[1560]: 2026-03-11 02:24:44.515 [INFO][4059] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 11 02:24:44.682938 containerd[1560]: 2026-03-11 02:24:44.522 [INFO][4059] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 11 02:24:44.682938 containerd[1560]: 2026-03-11 02:24:44.527 [INFO][4059] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 11 02:24:44.682938 containerd[1560]: 2026-03-11 02:24:44.531 [INFO][4059] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 11 02:24:44.682938 containerd[1560]: 2026-03-11 02:24:44.531 [INFO][4059] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3348a5713206a5ed4142f1e4a2bd63b9a4415e3b10ed667888f81368e32a6696" host="localhost" Mar 11 02:24:44.682938 containerd[1560]: 2026-03-11 02:24:44.534 [INFO][4059] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.3348a5713206a5ed4142f1e4a2bd63b9a4415e3b10ed667888f81368e32a6696 Mar 11 02:24:44.682938 containerd[1560]: 2026-03-11 02:24:44.545 [INFO][4059] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3348a5713206a5ed4142f1e4a2bd63b9a4415e3b10ed667888f81368e32a6696" host="localhost" Mar 11 02:24:44.682938 containerd[1560]: 2026-03-11 02:24:44.558 [INFO][4059] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.3348a5713206a5ed4142f1e4a2bd63b9a4415e3b10ed667888f81368e32a6696" host="localhost" Mar 11 02:24:44.682938 containerd[1560]: 2026-03-11 02:24:44.558 [INFO][4059] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] 
handle="k8s-pod-network.3348a5713206a5ed4142f1e4a2bd63b9a4415e3b10ed667888f81368e32a6696" host="localhost" Mar 11 02:24:44.682938 containerd[1560]: 2026-03-11 02:24:44.558 [INFO][4059] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 02:24:44.682938 containerd[1560]: 2026-03-11 02:24:44.558 [INFO][4059] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="3348a5713206a5ed4142f1e4a2bd63b9a4415e3b10ed667888f81368e32a6696" HandleID="k8s-pod-network.3348a5713206a5ed4142f1e4a2bd63b9a4415e3b10ed667888f81368e32a6696" Workload="localhost-k8s-coredns--674b8bbfcf--nrfsp-eth0" Mar 11 02:24:44.684667 containerd[1560]: 2026-03-11 02:24:44.563 [INFO][3965] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3348a5713206a5ed4142f1e4a2bd63b9a4415e3b10ed667888f81368e32a6696" Namespace="kube-system" Pod="coredns-674b8bbfcf-nrfsp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nrfsp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--nrfsp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"36343753-ac2b-410a-b28c-082e5d46c12d", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 24, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-nrfsp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali48bb12270a0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:24:44.684667 containerd[1560]: 2026-03-11 02:24:44.563 [INFO][3965] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="3348a5713206a5ed4142f1e4a2bd63b9a4415e3b10ed667888f81368e32a6696" Namespace="kube-system" Pod="coredns-674b8bbfcf-nrfsp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nrfsp-eth0" Mar 11 02:24:44.684667 containerd[1560]: 2026-03-11 02:24:44.563 [INFO][3965] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali48bb12270a0 ContainerID="3348a5713206a5ed4142f1e4a2bd63b9a4415e3b10ed667888f81368e32a6696" Namespace="kube-system" Pod="coredns-674b8bbfcf-nrfsp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nrfsp-eth0" Mar 11 02:24:44.684667 containerd[1560]: 2026-03-11 02:24:44.585 [INFO][3965] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="3348a5713206a5ed4142f1e4a2bd63b9a4415e3b10ed667888f81368e32a6696" Namespace="kube-system" Pod="coredns-674b8bbfcf-nrfsp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nrfsp-eth0" Mar 11 02:24:44.684667 containerd[1560]: 2026-03-11 02:24:44.586 [INFO][3965] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3348a5713206a5ed4142f1e4a2bd63b9a4415e3b10ed667888f81368e32a6696" Namespace="kube-system" Pod="coredns-674b8bbfcf-nrfsp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nrfsp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--nrfsp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"36343753-ac2b-410a-b28c-082e5d46c12d", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 24, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3348a5713206a5ed4142f1e4a2bd63b9a4415e3b10ed667888f81368e32a6696", Pod:"coredns-674b8bbfcf-nrfsp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali48bb12270a0", MAC:"82:09:a0:13:80:cd", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:24:44.684667 containerd[1560]: 2026-03-11 02:24:44.622 [INFO][3965] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3348a5713206a5ed4142f1e4a2bd63b9a4415e3b10ed667888f81368e32a6696" Namespace="kube-system" Pod="coredns-674b8bbfcf-nrfsp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nrfsp-eth0" Mar 11 02:24:44.683576 systemd-resolved[1458]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 11 02:24:44.737527 systemd-networkd[1248]: cali20d4f2e1c64: Link UP Mar 11 02:24:44.737851 systemd-networkd[1248]: cali20d4f2e1c64: Gained carrier Mar 11 02:24:44.771050 containerd[1560]: time="2026-03-11T02:24:44.766162418Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 02:24:44.771050 containerd[1560]: time="2026-03-11T02:24:44.767915962Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 02:24:44.771050 containerd[1560]: time="2026-03-11T02:24:44.767944274Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:24:44.771050 containerd[1560]: time="2026-03-11T02:24:44.768114411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:24:44.783949 containerd[1560]: 2026-03-11 02:24:44.198 [ERROR][3955] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 11 02:24:44.783949 containerd[1560]: 2026-03-11 02:24:44.239 [INFO][3955] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--5b85766d88--mht4v-eth0 goldmane-5b85766d88- calico-system bc195321-fa68-41a6-b9ce-01e15b82c109 904 0 2026-03-11 02:24:33 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-5b85766d88-mht4v eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali20d4f2e1c64 [] [] }} ContainerID="a891bcd16512517da72639ab820fb99b712742648a53dc488d0c9be5e92d6d82" Namespace="calico-system" Pod="goldmane-5b85766d88-mht4v" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--mht4v-" Mar 11 02:24:44.783949 containerd[1560]: 2026-03-11 02:24:44.239 [INFO][3955] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a891bcd16512517da72639ab820fb99b712742648a53dc488d0c9be5e92d6d82" Namespace="calico-system" Pod="goldmane-5b85766d88-mht4v" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--mht4v-eth0" Mar 11 02:24:44.783949 containerd[1560]: 2026-03-11 02:24:44.390 [INFO][4065] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a891bcd16512517da72639ab820fb99b712742648a53dc488d0c9be5e92d6d82" HandleID="k8s-pod-network.a891bcd16512517da72639ab820fb99b712742648a53dc488d0c9be5e92d6d82" Workload="localhost-k8s-goldmane--5b85766d88--mht4v-eth0" Mar 11 02:24:44.783949 containerd[1560]: 2026-03-11 02:24:44.406 [INFO][4065] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="a891bcd16512517da72639ab820fb99b712742648a53dc488d0c9be5e92d6d82" HandleID="k8s-pod-network.a891bcd16512517da72639ab820fb99b712742648a53dc488d0c9be5e92d6d82" Workload="localhost-k8s-goldmane--5b85766d88--mht4v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139880), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-5b85766d88-mht4v", "timestamp":"2026-03-11 02:24:44.390617526 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002f9b80)} Mar 11 02:24:44.783949 containerd[1560]: 2026-03-11 02:24:44.407 [INFO][4065] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:24:44.783949 containerd[1560]: 2026-03-11 02:24:44.559 [INFO][4065] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
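The WorkloadEndpoint dumps in this excerpt print port numbers as Go hex literals (Port:0x35, Port:0x23c1); these are the same DNS and metrics ports that the plugin.go 342 lines list in decimal. A one-line check, for reference only:

    # 0x35 and 0x23c1 in the endpoint structs are hexadecimal for the usual CoreDNS ports
    print(0x35, 0x23c1)   # 53 9153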
Mar 11 02:24:44.783949 containerd[1560]: 2026-03-11 02:24:44.559 [INFO][4065] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 11 02:24:44.783949 containerd[1560]: 2026-03-11 02:24:44.628 [INFO][4065] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.a891bcd16512517da72639ab820fb99b712742648a53dc488d0c9be5e92d6d82" host="localhost" Mar 11 02:24:44.783949 containerd[1560]: 2026-03-11 02:24:44.648 [INFO][4065] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 11 02:24:44.783949 containerd[1560]: 2026-03-11 02:24:44.684 [INFO][4065] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 11 02:24:44.783949 containerd[1560]: 2026-03-11 02:24:44.689 [INFO][4065] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 11 02:24:44.783949 containerd[1560]: 2026-03-11 02:24:44.695 [INFO][4065] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 11 02:24:44.783949 containerd[1560]: 2026-03-11 02:24:44.695 [INFO][4065] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a891bcd16512517da72639ab820fb99b712742648a53dc488d0c9be5e92d6d82" host="localhost" Mar 11 02:24:44.783949 containerd[1560]: 2026-03-11 02:24:44.698 [INFO][4065] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.a891bcd16512517da72639ab820fb99b712742648a53dc488d0c9be5e92d6d82 Mar 11 02:24:44.783949 containerd[1560]: 2026-03-11 02:24:44.706 [INFO][4065] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a891bcd16512517da72639ab820fb99b712742648a53dc488d0c9be5e92d6d82" host="localhost" Mar 11 02:24:44.783949 containerd[1560]: 2026-03-11 02:24:44.719 [INFO][4065] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.a891bcd16512517da72639ab820fb99b712742648a53dc488d0c9be5e92d6d82" host="localhost" Mar 11 02:24:44.783949 containerd[1560]: 2026-03-11 02:24:44.720 [INFO][4065] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.a891bcd16512517da72639ab820fb99b712742648a53dc488d0c9be5e92d6d82" host="localhost" Mar 11 02:24:44.783949 containerd[1560]: 2026-03-11 02:24:44.721 [INFO][4065] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
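Through this stretch of the journal Calico hands out consecutive addresses (.130, .131, .132, and so on) from the host-affine block 192.168.88.128/26 named in the "Trying affinity" / "Attempting to load block" messages; the IPAM result reports each address with the block's /26 prefix, while the WorkloadEndpoint records it as a /32. A minimal Python sketch of the block arithmetic (illustrative only, not Calico's allocator, which tracks allocations in the datastore):

    import ipaddress

    # Host-affine block from the "Trying affinity for 192.168.88.128/26" messages.
    block = ipaddress.ip_network("192.168.88.128/26")
    print(block.num_addresses)                         # 64 addresses, .128 through .191

    # Every address assigned in this excerpt falls inside that block.
    for ip in ("192.168.88.130", "192.168.88.131", "192.168.88.132",
               "192.168.88.133", "192.168.88.134", "192.168.88.135"):
        assert ipaddress.ip_address(ip) in block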
Mar 11 02:24:44.783949 containerd[1560]: 2026-03-11 02:24:44.721 [INFO][4065] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="a891bcd16512517da72639ab820fb99b712742648a53dc488d0c9be5e92d6d82" HandleID="k8s-pod-network.a891bcd16512517da72639ab820fb99b712742648a53dc488d0c9be5e92d6d82" Workload="localhost-k8s-goldmane--5b85766d88--mht4v-eth0" Mar 11 02:24:44.784694 containerd[1560]: 2026-03-11 02:24:44.728 [INFO][3955] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a891bcd16512517da72639ab820fb99b712742648a53dc488d0c9be5e92d6d82" Namespace="calico-system" Pod="goldmane-5b85766d88-mht4v" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--mht4v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--mht4v-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"bc195321-fa68-41a6-b9ce-01e15b82c109", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 24, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-5b85766d88-mht4v", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali20d4f2e1c64", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:24:44.784694 containerd[1560]: 2026-03-11 02:24:44.728 [INFO][3955] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="a891bcd16512517da72639ab820fb99b712742648a53dc488d0c9be5e92d6d82" Namespace="calico-system" Pod="goldmane-5b85766d88-mht4v" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--mht4v-eth0" Mar 11 02:24:44.784694 containerd[1560]: 2026-03-11 02:24:44.728 [INFO][3955] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali20d4f2e1c64 ContainerID="a891bcd16512517da72639ab820fb99b712742648a53dc488d0c9be5e92d6d82" Namespace="calico-system" Pod="goldmane-5b85766d88-mht4v" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--mht4v-eth0" Mar 11 02:24:44.784694 containerd[1560]: 2026-03-11 02:24:44.736 [INFO][3955] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a891bcd16512517da72639ab820fb99b712742648a53dc488d0c9be5e92d6d82" Namespace="calico-system" Pod="goldmane-5b85766d88-mht4v" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--mht4v-eth0" Mar 11 02:24:44.784694 containerd[1560]: 2026-03-11 02:24:44.750 [INFO][3955] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a891bcd16512517da72639ab820fb99b712742648a53dc488d0c9be5e92d6d82" Namespace="calico-system" Pod="goldmane-5b85766d88-mht4v" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--mht4v-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--mht4v-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"bc195321-fa68-41a6-b9ce-01e15b82c109", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 24, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a891bcd16512517da72639ab820fb99b712742648a53dc488d0c9be5e92d6d82", Pod:"goldmane-5b85766d88-mht4v", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali20d4f2e1c64", MAC:"6e:31:4f:00:9d:a6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:24:44.784694 containerd[1560]: 2026-03-11 02:24:44.772 [INFO][3955] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a891bcd16512517da72639ab820fb99b712742648a53dc488d0c9be5e92d6d82" Namespace="calico-system" Pod="goldmane-5b85766d88-mht4v" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--mht4v-eth0" Mar 11 02:24:44.788882 containerd[1560]: time="2026-03-11T02:24:44.788220853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-m74p4,Uid:d34f0a4a-6b4f-4253-99ed-b4cbdf239525,Namespace:kube-system,Attempt:0,} returns sandbox id \"9f74c07a16ad0f849610dbc996d23b17b763eca4a2cefa6a6cea424dc5f1c09b\"" Mar 11 02:24:44.793902 kubelet[2645]: E0311 02:24:44.793251 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:24:44.801515 containerd[1560]: time="2026-03-11T02:24:44.801114324Z" level=info msg="CreateContainer within sandbox \"9f74c07a16ad0f849610dbc996d23b17b763eca4a2cefa6a6cea424dc5f1c09b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 11 02:24:44.853888 containerd[1560]: time="2026-03-11T02:24:44.853832643Z" level=info msg="CreateContainer within sandbox \"9f74c07a16ad0f849610dbc996d23b17b763eca4a2cefa6a6cea424dc5f1c09b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"80e5963f061327cb4161a9fa71c45cd8dc089c9ded202c7be135e75ea7a8822d\"" Mar 11 02:24:44.857464 systemd-networkd[1248]: cali005e303f3e5: Link UP Mar 11 02:24:44.857875 systemd-networkd[1248]: cali005e303f3e5: Gained carrier Mar 11 02:24:44.866414 containerd[1560]: time="2026-03-11T02:24:44.862620504Z" level=info msg="StartContainer for \"80e5963f061327cb4161a9fa71c45cd8dc089c9ded202c7be135e75ea7a8822d\"" Mar 11 02:24:44.921405 containerd[1560]: 2026-03-11 02:24:44.438 [ERROR][4095] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory 
filename="/var/lib/calico/mtu" Mar 11 02:24:44.921405 containerd[1560]: 2026-03-11 02:24:44.466 [INFO][4095] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7f6f69c4f8--jnmsd-eth0 calico-apiserver-7f6f69c4f8- calico-system aa5ed09c-658b-4389-902b-dc4b31e7e361 925 0 2026-03-11 02:24:32 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7f6f69c4f8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7f6f69c4f8-jnmsd eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali005e303f3e5 [] [] }} ContainerID="122afe1edb2e70455fcbbdac8601ca20dd11054c24880c2c2881cb4e44e8c18c" Namespace="calico-system" Pod="calico-apiserver-7f6f69c4f8-jnmsd" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f6f69c4f8--jnmsd-" Mar 11 02:24:44.921405 containerd[1560]: 2026-03-11 02:24:44.466 [INFO][4095] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="122afe1edb2e70455fcbbdac8601ca20dd11054c24880c2c2881cb4e44e8c18c" Namespace="calico-system" Pod="calico-apiserver-7f6f69c4f8-jnmsd" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f6f69c4f8--jnmsd-eth0" Mar 11 02:24:44.921405 containerd[1560]: 2026-03-11 02:24:44.542 [INFO][4113] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="122afe1edb2e70455fcbbdac8601ca20dd11054c24880c2c2881cb4e44e8c18c" HandleID="k8s-pod-network.122afe1edb2e70455fcbbdac8601ca20dd11054c24880c2c2881cb4e44e8c18c" Workload="localhost-k8s-calico--apiserver--7f6f69c4f8--jnmsd-eth0" Mar 11 02:24:44.921405 containerd[1560]: 2026-03-11 02:24:44.554 [INFO][4113] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="122afe1edb2e70455fcbbdac8601ca20dd11054c24880c2c2881cb4e44e8c18c" HandleID="k8s-pod-network.122afe1edb2e70455fcbbdac8601ca20dd11054c24880c2c2881cb4e44e8c18c" Workload="localhost-k8s-calico--apiserver--7f6f69c4f8--jnmsd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00051cae0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-7f6f69c4f8-jnmsd", "timestamp":"2026-03-11 02:24:44.542461216 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0005666e0)} Mar 11 02:24:44.921405 containerd[1560]: 2026-03-11 02:24:44.554 [INFO][4113] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:24:44.921405 containerd[1560]: 2026-03-11 02:24:44.721 [INFO][4113] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 11 02:24:44.921405 containerd[1560]: 2026-03-11 02:24:44.722 [INFO][4113] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 11 02:24:44.921405 containerd[1560]: 2026-03-11 02:24:44.730 [INFO][4113] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.122afe1edb2e70455fcbbdac8601ca20dd11054c24880c2c2881cb4e44e8c18c" host="localhost" Mar 11 02:24:44.921405 containerd[1560]: 2026-03-11 02:24:44.743 [INFO][4113] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 11 02:24:44.921405 containerd[1560]: 2026-03-11 02:24:44.764 [INFO][4113] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 11 02:24:44.921405 containerd[1560]: 2026-03-11 02:24:44.771 [INFO][4113] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 11 02:24:44.921405 containerd[1560]: 2026-03-11 02:24:44.780 [INFO][4113] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 11 02:24:44.921405 containerd[1560]: 2026-03-11 02:24:44.793 [INFO][4113] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.122afe1edb2e70455fcbbdac8601ca20dd11054c24880c2c2881cb4e44e8c18c" host="localhost" Mar 11 02:24:44.921405 containerd[1560]: 2026-03-11 02:24:44.808 [INFO][4113] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.122afe1edb2e70455fcbbdac8601ca20dd11054c24880c2c2881cb4e44e8c18c Mar 11 02:24:44.921405 containerd[1560]: 2026-03-11 02:24:44.819 [INFO][4113] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.122afe1edb2e70455fcbbdac8601ca20dd11054c24880c2c2881cb4e44e8c18c" host="localhost" Mar 11 02:24:44.921405 containerd[1560]: 2026-03-11 02:24:44.829 [INFO][4113] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.122afe1edb2e70455fcbbdac8601ca20dd11054c24880c2c2881cb4e44e8c18c" host="localhost" Mar 11 02:24:44.921405 containerd[1560]: 2026-03-11 02:24:44.829 [INFO][4113] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.122afe1edb2e70455fcbbdac8601ca20dd11054c24880c2c2881cb4e44e8c18c" host="localhost" Mar 11 02:24:44.921405 containerd[1560]: 2026-03-11 02:24:44.829 [INFO][4113] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
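The interleaved IPAM handlers here ([4058], [4059], [4065], [4113]) each log "About to acquire host-wide IPAM lock" and then proceed one at a time, which is why the address assignments come out strictly sequential even though several CNI ADD/DEL requests are in flight at once. A toy model of that serialization (a sketch only; Calico's real allocator persists its allocation state rather than holding it in memory):

    import threading
    from concurrent.futures import ThreadPoolExecutor

    ipam_lock = threading.Lock()        # stands in for the host-wide IPAM lock
    allocated = set()

    def assign(handle_id, block):
        # Each concurrent request must hold the lock while it reads and updates
        # the allocation state, so no two pods can be handed the same address.
        with ipam_lock:
            ip = next(a for a in block if a not in allocated)
            allocated.add(ip)
            return handle_id, ip

    block = [f"192.168.88.{n}" for n in range(129, 192)]
    with ThreadPoolExecutor() as pool:
        print(list(pool.map(lambda h: assign(h, block), ["m74p4", "nrfsp", "mht4v", "jnmsd"])))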
Mar 11 02:24:44.921405 containerd[1560]: 2026-03-11 02:24:44.829 [INFO][4113] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="122afe1edb2e70455fcbbdac8601ca20dd11054c24880c2c2881cb4e44e8c18c" HandleID="k8s-pod-network.122afe1edb2e70455fcbbdac8601ca20dd11054c24880c2c2881cb4e44e8c18c" Workload="localhost-k8s-calico--apiserver--7f6f69c4f8--jnmsd-eth0" Mar 11 02:24:44.921930 containerd[1560]: 2026-03-11 02:24:44.833 [INFO][4095] cni-plugin/k8s.go 418: Populated endpoint ContainerID="122afe1edb2e70455fcbbdac8601ca20dd11054c24880c2c2881cb4e44e8c18c" Namespace="calico-system" Pod="calico-apiserver-7f6f69c4f8-jnmsd" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f6f69c4f8--jnmsd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f6f69c4f8--jnmsd-eth0", GenerateName:"calico-apiserver-7f6f69c4f8-", Namespace:"calico-system", SelfLink:"", UID:"aa5ed09c-658b-4389-902b-dc4b31e7e361", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 24, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f6f69c4f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7f6f69c4f8-jnmsd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali005e303f3e5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:24:44.921930 containerd[1560]: 2026-03-11 02:24:44.833 [INFO][4095] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="122afe1edb2e70455fcbbdac8601ca20dd11054c24880c2c2881cb4e44e8c18c" Namespace="calico-system" Pod="calico-apiserver-7f6f69c4f8-jnmsd" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f6f69c4f8--jnmsd-eth0" Mar 11 02:24:44.921930 containerd[1560]: 2026-03-11 02:24:44.833 [INFO][4095] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali005e303f3e5 ContainerID="122afe1edb2e70455fcbbdac8601ca20dd11054c24880c2c2881cb4e44e8c18c" Namespace="calico-system" Pod="calico-apiserver-7f6f69c4f8-jnmsd" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f6f69c4f8--jnmsd-eth0" Mar 11 02:24:44.921930 containerd[1560]: 2026-03-11 02:24:44.868 [INFO][4095] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="122afe1edb2e70455fcbbdac8601ca20dd11054c24880c2c2881cb4e44e8c18c" Namespace="calico-system" Pod="calico-apiserver-7f6f69c4f8-jnmsd" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f6f69c4f8--jnmsd-eth0" Mar 11 02:24:44.921930 containerd[1560]: 2026-03-11 02:24:44.870 [INFO][4095] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="122afe1edb2e70455fcbbdac8601ca20dd11054c24880c2c2881cb4e44e8c18c" Namespace="calico-system" Pod="calico-apiserver-7f6f69c4f8-jnmsd" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f6f69c4f8--jnmsd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f6f69c4f8--jnmsd-eth0", GenerateName:"calico-apiserver-7f6f69c4f8-", Namespace:"calico-system", SelfLink:"", UID:"aa5ed09c-658b-4389-902b-dc4b31e7e361", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 24, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f6f69c4f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"122afe1edb2e70455fcbbdac8601ca20dd11054c24880c2c2881cb4e44e8c18c", Pod:"calico-apiserver-7f6f69c4f8-jnmsd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali005e303f3e5", MAC:"1a:ab:cd:34:91:95", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:24:44.921930 containerd[1560]: 2026-03-11 02:24:44.902 [INFO][4095] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="122afe1edb2e70455fcbbdac8601ca20dd11054c24880c2c2881cb4e44e8c18c" Namespace="calico-system" Pod="calico-apiserver-7f6f69c4f8-jnmsd" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f6f69c4f8--jnmsd-eth0" Mar 11 02:24:44.925723 systemd-resolved[1458]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 11 02:24:44.936229 containerd[1560]: time="2026-03-11T02:24:44.935910057Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 02:24:44.936229 containerd[1560]: time="2026-03-11T02:24:44.936020241Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 02:24:44.936229 containerd[1560]: time="2026-03-11T02:24:44.936095100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:24:44.936831 containerd[1560]: time="2026-03-11T02:24:44.936697321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:24:44.969727 systemd[1]: run-netns-cni\x2da863d912\x2d1b20\x2d70bd\x2df284\x2dc8d4880e5227.mount: Deactivated successfully. Mar 11 02:24:44.969975 systemd[1]: run-netns-cni\x2d87765698\x2d0fd8\x2d676d\x2dec75\x2dde9793f3a2d2.mount: Deactivated successfully. Mar 11 02:24:44.970219 systemd[1]: run-netns-cni\x2de9236271\x2da931\x2d591e\x2d53b3\x2dab92a61b9929.mount: Deactivated successfully. 
Mar 11 02:24:44.971145 systemd[1]: run-netns-cni\x2d47a8be5a\x2d189d\x2d8e16\x2d1f46\x2d8b7422b459b3.mount: Deactivated successfully. Mar 11 02:24:44.971415 systemd[1]: var-lib-kubelet-pods-32818100\x2d783d\x2d4e9a\x2d8ab2\x2dcad80d846e18-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddb4d7.mount: Deactivated successfully. Mar 11 02:24:44.971623 systemd[1]: var-lib-kubelet-pods-32818100\x2d783d\x2d4e9a\x2d8ab2\x2dcad80d846e18-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 11 02:24:44.990033 systemd-networkd[1248]: cali7cea4491072: Link UP Mar 11 02:24:44.996701 systemd-networkd[1248]: cali7cea4491072: Gained carrier Mar 11 02:24:45.035527 kubelet[2645]: I0311 02:24:45.035421 2645 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 11 02:24:45.083382 containerd[1560]: time="2026-03-11T02:24:45.069638907Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 02:24:45.083382 containerd[1560]: time="2026-03-11T02:24:45.069716211Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 02:24:45.083382 containerd[1560]: time="2026-03-11T02:24:45.069734585Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:24:45.083382 containerd[1560]: time="2026-03-11T02:24:45.069846263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:24:45.092385 containerd[1560]: 2026-03-11 02:24:44.653 [ERROR][4170] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 11 02:24:45.092385 containerd[1560]: 2026-03-11 02:24:44.694 [INFO][4170] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7f6f69c4f8--rjhrp-eth0 calico-apiserver-7f6f69c4f8- calico-system a125c27c-9122-4fdb-a210-781209ab1769 928 0 2026-03-11 02:24:32 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7f6f69c4f8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7f6f69c4f8-rjhrp eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali7cea4491072 [] [] }} ContainerID="e493704f63c866707ac7bf928162d17dd0d4c92488a2812cc6ee8c3392ec2063" Namespace="calico-system" Pod="calico-apiserver-7f6f69c4f8-rjhrp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f6f69c4f8--rjhrp-" Mar 11 02:24:45.092385 containerd[1560]: 2026-03-11 02:24:44.694 [INFO][4170] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e493704f63c866707ac7bf928162d17dd0d4c92488a2812cc6ee8c3392ec2063" Namespace="calico-system" Pod="calico-apiserver-7f6f69c4f8-rjhrp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f6f69c4f8--rjhrp-eth0" Mar 11 02:24:45.092385 containerd[1560]: 2026-03-11 02:24:44.782 [INFO][4195] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e493704f63c866707ac7bf928162d17dd0d4c92488a2812cc6ee8c3392ec2063" 
HandleID="k8s-pod-network.e493704f63c866707ac7bf928162d17dd0d4c92488a2812cc6ee8c3392ec2063" Workload="localhost-k8s-calico--apiserver--7f6f69c4f8--rjhrp-eth0" Mar 11 02:24:45.092385 containerd[1560]: 2026-03-11 02:24:44.833 [INFO][4195] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="e493704f63c866707ac7bf928162d17dd0d4c92488a2812cc6ee8c3392ec2063" HandleID="k8s-pod-network.e493704f63c866707ac7bf928162d17dd0d4c92488a2812cc6ee8c3392ec2063" Workload="localhost-k8s-calico--apiserver--7f6f69c4f8--rjhrp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000361a70), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-7f6f69c4f8-rjhrp", "timestamp":"2026-03-11 02:24:44.782869999 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00042ac60)} Mar 11 02:24:45.092385 containerd[1560]: 2026-03-11 02:24:44.833 [INFO][4195] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:24:45.092385 containerd[1560]: 2026-03-11 02:24:44.833 [INFO][4195] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 02:24:45.092385 containerd[1560]: 2026-03-11 02:24:44.833 [INFO][4195] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 11 02:24:45.092385 containerd[1560]: 2026-03-11 02:24:44.842 [INFO][4195] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.e493704f63c866707ac7bf928162d17dd0d4c92488a2812cc6ee8c3392ec2063" host="localhost" Mar 11 02:24:45.092385 containerd[1560]: 2026-03-11 02:24:44.866 [INFO][4195] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 11 02:24:45.092385 containerd[1560]: 2026-03-11 02:24:44.901 [INFO][4195] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 11 02:24:45.092385 containerd[1560]: 2026-03-11 02:24:44.904 [INFO][4195] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 11 02:24:45.092385 containerd[1560]: 2026-03-11 02:24:44.910 [INFO][4195] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 11 02:24:45.092385 containerd[1560]: 2026-03-11 02:24:44.910 [INFO][4195] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e493704f63c866707ac7bf928162d17dd0d4c92488a2812cc6ee8c3392ec2063" host="localhost" Mar 11 02:24:45.092385 containerd[1560]: 2026-03-11 02:24:44.914 [INFO][4195] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.e493704f63c866707ac7bf928162d17dd0d4c92488a2812cc6ee8c3392ec2063 Mar 11 02:24:45.092385 containerd[1560]: 2026-03-11 02:24:44.927 [INFO][4195] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e493704f63c866707ac7bf928162d17dd0d4c92488a2812cc6ee8c3392ec2063" host="localhost" Mar 11 02:24:45.092385 containerd[1560]: 2026-03-11 02:24:44.940 [INFO][4195] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.e493704f63c866707ac7bf928162d17dd0d4c92488a2812cc6ee8c3392ec2063" host="localhost" Mar 11 02:24:45.092385 containerd[1560]: 2026-03-11 02:24:44.941 [INFO][4195] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] 
handle="k8s-pod-network.e493704f63c866707ac7bf928162d17dd0d4c92488a2812cc6ee8c3392ec2063" host="localhost" Mar 11 02:24:45.092385 containerd[1560]: 2026-03-11 02:24:44.942 [INFO][4195] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 02:24:45.092385 containerd[1560]: 2026-03-11 02:24:44.943 [INFO][4195] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="e493704f63c866707ac7bf928162d17dd0d4c92488a2812cc6ee8c3392ec2063" HandleID="k8s-pod-network.e493704f63c866707ac7bf928162d17dd0d4c92488a2812cc6ee8c3392ec2063" Workload="localhost-k8s-calico--apiserver--7f6f69c4f8--rjhrp-eth0" Mar 11 02:24:45.095291 containerd[1560]: 2026-03-11 02:24:44.961 [INFO][4170] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e493704f63c866707ac7bf928162d17dd0d4c92488a2812cc6ee8c3392ec2063" Namespace="calico-system" Pod="calico-apiserver-7f6f69c4f8-rjhrp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f6f69c4f8--rjhrp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f6f69c4f8--rjhrp-eth0", GenerateName:"calico-apiserver-7f6f69c4f8-", Namespace:"calico-system", SelfLink:"", UID:"a125c27c-9122-4fdb-a210-781209ab1769", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 24, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f6f69c4f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7f6f69c4f8-rjhrp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali7cea4491072", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:24:45.095291 containerd[1560]: 2026-03-11 02:24:44.962 [INFO][4170] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="e493704f63c866707ac7bf928162d17dd0d4c92488a2812cc6ee8c3392ec2063" Namespace="calico-system" Pod="calico-apiserver-7f6f69c4f8-rjhrp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f6f69c4f8--rjhrp-eth0" Mar 11 02:24:45.095291 containerd[1560]: 2026-03-11 02:24:44.962 [INFO][4170] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7cea4491072 ContainerID="e493704f63c866707ac7bf928162d17dd0d4c92488a2812cc6ee8c3392ec2063" Namespace="calico-system" Pod="calico-apiserver-7f6f69c4f8-rjhrp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f6f69c4f8--rjhrp-eth0" Mar 11 02:24:45.095291 containerd[1560]: 2026-03-11 02:24:44.998 [INFO][4170] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e493704f63c866707ac7bf928162d17dd0d4c92488a2812cc6ee8c3392ec2063" Namespace="calico-system" Pod="calico-apiserver-7f6f69c4f8-rjhrp" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--7f6f69c4f8--rjhrp-eth0" Mar 11 02:24:45.095291 containerd[1560]: 2026-03-11 02:24:45.020 [INFO][4170] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e493704f63c866707ac7bf928162d17dd0d4c92488a2812cc6ee8c3392ec2063" Namespace="calico-system" Pod="calico-apiserver-7f6f69c4f8-rjhrp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f6f69c4f8--rjhrp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f6f69c4f8--rjhrp-eth0", GenerateName:"calico-apiserver-7f6f69c4f8-", Namespace:"calico-system", SelfLink:"", UID:"a125c27c-9122-4fdb-a210-781209ab1769", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 24, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f6f69c4f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e493704f63c866707ac7bf928162d17dd0d4c92488a2812cc6ee8c3392ec2063", Pod:"calico-apiserver-7f6f69c4f8-rjhrp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali7cea4491072", MAC:"7a:25:80:f2:df:8a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:24:45.095291 containerd[1560]: 2026-03-11 02:24:45.048 [INFO][4170] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e493704f63c866707ac7bf928162d17dd0d4c92488a2812cc6ee8c3392ec2063" Namespace="calico-system" Pod="calico-apiserver-7f6f69c4f8-rjhrp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f6f69c4f8--rjhrp-eth0" Mar 11 02:24:45.175165 systemd-networkd[1248]: cali7de88ae98f0: Link UP Mar 11 02:24:45.180902 containerd[1560]: time="2026-03-11T02:24:45.180275965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nrfsp,Uid:36343753-ac2b-410a-b28c-082e5d46c12d,Namespace:kube-system,Attempt:0,} returns sandbox id \"3348a5713206a5ed4142f1e4a2bd63b9a4415e3b10ed667888f81368e32a6696\"" Mar 11 02:24:45.180119 systemd-networkd[1248]: cali7de88ae98f0: Gained carrier Mar 11 02:24:45.182501 kubelet[2645]: E0311 02:24:45.181607 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:24:45.192410 containerd[1560]: time="2026-03-11T02:24:45.192366065Z" level=info msg="CreateContainer within sandbox \"3348a5713206a5ed4142f1e4a2bd63b9a4415e3b10ed667888f81368e32a6696\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 11 02:24:45.275392 containerd[1560]: 2026-03-11 02:24:44.676 [ERROR][4139] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no 
such file or directory filename="/var/lib/calico/mtu" Mar 11 02:24:45.275392 containerd[1560]: 2026-03-11 02:24:44.702 [INFO][4139] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--54b7f5d88d--jg49r-eth0 calico-kube-controllers-54b7f5d88d- calico-system 5cfc5544-cb19-46e9-98ab-95d03c16b97a 929 0 2026-03-11 02:24:32 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:54b7f5d88d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-54b7f5d88d-jg49r eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7de88ae98f0 [] [] }} ContainerID="70232b4af2b82f177469b7f3eebcce210554a144d91e5784bdd518b2e0388748" Namespace="calico-system" Pod="calico-kube-controllers-54b7f5d88d-jg49r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54b7f5d88d--jg49r-" Mar 11 02:24:45.275392 containerd[1560]: 2026-03-11 02:24:44.702 [INFO][4139] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="70232b4af2b82f177469b7f3eebcce210554a144d91e5784bdd518b2e0388748" Namespace="calico-system" Pod="calico-kube-controllers-54b7f5d88d-jg49r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54b7f5d88d--jg49r-eth0" Mar 11 02:24:45.275392 containerd[1560]: 2026-03-11 02:24:44.863 [INFO][4207] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="70232b4af2b82f177469b7f3eebcce210554a144d91e5784bdd518b2e0388748" HandleID="k8s-pod-network.70232b4af2b82f177469b7f3eebcce210554a144d91e5784bdd518b2e0388748" Workload="localhost-k8s-calico--kube--controllers--54b7f5d88d--jg49r-eth0" Mar 11 02:24:45.275392 containerd[1560]: 2026-03-11 02:24:44.900 [INFO][4207] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="70232b4af2b82f177469b7f3eebcce210554a144d91e5784bdd518b2e0388748" HandleID="k8s-pod-network.70232b4af2b82f177469b7f3eebcce210554a144d91e5784bdd518b2e0388748" Workload="localhost-k8s-calico--kube--controllers--54b7f5d88d--jg49r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e3e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-54b7f5d88d-jg49r", "timestamp":"2026-03-11 02:24:44.863271096 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003f18c0)} Mar 11 02:24:45.275392 containerd[1560]: 2026-03-11 02:24:44.901 [INFO][4207] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:24:45.275392 containerd[1560]: 2026-03-11 02:24:44.941 [INFO][4207] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 11 02:24:45.275392 containerd[1560]: 2026-03-11 02:24:44.942 [INFO][4207] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 11 02:24:45.275392 containerd[1560]: 2026-03-11 02:24:44.946 [INFO][4207] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.70232b4af2b82f177469b7f3eebcce210554a144d91e5784bdd518b2e0388748" host="localhost" Mar 11 02:24:45.275392 containerd[1560]: 2026-03-11 02:24:44.961 [INFO][4207] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 11 02:24:45.275392 containerd[1560]: 2026-03-11 02:24:45.004 [INFO][4207] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 11 02:24:45.275392 containerd[1560]: 2026-03-11 02:24:45.012 [INFO][4207] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 11 02:24:45.275392 containerd[1560]: 2026-03-11 02:24:45.017 [INFO][4207] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 11 02:24:45.275392 containerd[1560]: 2026-03-11 02:24:45.017 [INFO][4207] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.70232b4af2b82f177469b7f3eebcce210554a144d91e5784bdd518b2e0388748" host="localhost" Mar 11 02:24:45.275392 containerd[1560]: 2026-03-11 02:24:45.021 [INFO][4207] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.70232b4af2b82f177469b7f3eebcce210554a144d91e5784bdd518b2e0388748 Mar 11 02:24:45.275392 containerd[1560]: 2026-03-11 02:24:45.045 [INFO][4207] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.70232b4af2b82f177469b7f3eebcce210554a144d91e5784bdd518b2e0388748" host="localhost" Mar 11 02:24:45.275392 containerd[1560]: 2026-03-11 02:24:45.062 [INFO][4207] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.70232b4af2b82f177469b7f3eebcce210554a144d91e5784bdd518b2e0388748" host="localhost" Mar 11 02:24:45.275392 containerd[1560]: 2026-03-11 02:24:45.069 [INFO][4207] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.70232b4af2b82f177469b7f3eebcce210554a144d91e5784bdd518b2e0388748" host="localhost" Mar 11 02:24:45.275392 containerd[1560]: 2026-03-11 02:24:45.069 [INFO][4207] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 11 02:24:45.275392 containerd[1560]: 2026-03-11 02:24:45.069 [INFO][4207] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="70232b4af2b82f177469b7f3eebcce210554a144d91e5784bdd518b2e0388748" HandleID="k8s-pod-network.70232b4af2b82f177469b7f3eebcce210554a144d91e5784bdd518b2e0388748" Workload="localhost-k8s-calico--kube--controllers--54b7f5d88d--jg49r-eth0" Mar 11 02:24:45.279817 containerd[1560]: 2026-03-11 02:24:45.108 [INFO][4139] cni-plugin/k8s.go 418: Populated endpoint ContainerID="70232b4af2b82f177469b7f3eebcce210554a144d91e5784bdd518b2e0388748" Namespace="calico-system" Pod="calico-kube-controllers-54b7f5d88d-jg49r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54b7f5d88d--jg49r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--54b7f5d88d--jg49r-eth0", GenerateName:"calico-kube-controllers-54b7f5d88d-", Namespace:"calico-system", SelfLink:"", UID:"5cfc5544-cb19-46e9-98ab-95d03c16b97a", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 24, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54b7f5d88d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-54b7f5d88d-jg49r", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7de88ae98f0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:24:45.279817 containerd[1560]: 2026-03-11 02:24:45.108 [INFO][4139] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="70232b4af2b82f177469b7f3eebcce210554a144d91e5784bdd518b2e0388748" Namespace="calico-system" Pod="calico-kube-controllers-54b7f5d88d-jg49r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54b7f5d88d--jg49r-eth0" Mar 11 02:24:45.279817 containerd[1560]: 2026-03-11 02:24:45.129 [INFO][4139] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7de88ae98f0 ContainerID="70232b4af2b82f177469b7f3eebcce210554a144d91e5784bdd518b2e0388748" Namespace="calico-system" Pod="calico-kube-controllers-54b7f5d88d-jg49r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54b7f5d88d--jg49r-eth0" Mar 11 02:24:45.279817 containerd[1560]: 2026-03-11 02:24:45.181 [INFO][4139] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="70232b4af2b82f177469b7f3eebcce210554a144d91e5784bdd518b2e0388748" Namespace="calico-system" Pod="calico-kube-controllers-54b7f5d88d-jg49r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54b7f5d88d--jg49r-eth0" Mar 11 02:24:45.279817 containerd[1560]: 2026-03-11 02:24:45.188 [INFO][4139] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="70232b4af2b82f177469b7f3eebcce210554a144d91e5784bdd518b2e0388748" Namespace="calico-system" Pod="calico-kube-controllers-54b7f5d88d-jg49r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54b7f5d88d--jg49r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--54b7f5d88d--jg49r-eth0", GenerateName:"calico-kube-controllers-54b7f5d88d-", Namespace:"calico-system", SelfLink:"", UID:"5cfc5544-cb19-46e9-98ab-95d03c16b97a", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 24, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54b7f5d88d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"70232b4af2b82f177469b7f3eebcce210554a144d91e5784bdd518b2e0388748", Pod:"calico-kube-controllers-54b7f5d88d-jg49r", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7de88ae98f0", MAC:"be:ec:39:6e:06:9a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:24:45.279817 containerd[1560]: 2026-03-11 02:24:45.218 [INFO][4139] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="70232b4af2b82f177469b7f3eebcce210554a144d91e5784bdd518b2e0388748" Namespace="calico-system" Pod="calico-kube-controllers-54b7f5d88d-jg49r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54b7f5d88d--jg49r-eth0" Mar 11 02:24:45.285448 kubelet[2645]: I0311 02:24:45.283504 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/ab84ffa0-b845-45a8-a2dc-2ebdfe239270-nginx-config\") pod \"whisker-77cb6cc59c-rplkw\" (UID: \"ab84ffa0-b845-45a8-a2dc-2ebdfe239270\") " pod="calico-system/whisker-77cb6cc59c-rplkw" Mar 11 02:24:45.285448 kubelet[2645]: I0311 02:24:45.283565 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab84ffa0-b845-45a8-a2dc-2ebdfe239270-whisker-ca-bundle\") pod \"whisker-77cb6cc59c-rplkw\" (UID: \"ab84ffa0-b845-45a8-a2dc-2ebdfe239270\") " pod="calico-system/whisker-77cb6cc59c-rplkw" Mar 11 02:24:45.285448 kubelet[2645]: I0311 02:24:45.283599 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk4c6\" (UniqueName: \"kubernetes.io/projected/ab84ffa0-b845-45a8-a2dc-2ebdfe239270-kube-api-access-xk4c6\") pod \"whisker-77cb6cc59c-rplkw\" (UID: \"ab84ffa0-b845-45a8-a2dc-2ebdfe239270\") " pod="calico-system/whisker-77cb6cc59c-rplkw" Mar 11 02:24:45.285448 kubelet[2645]: I0311 02:24:45.283634 2645 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ab84ffa0-b845-45a8-a2dc-2ebdfe239270-whisker-backend-key-pair\") pod \"whisker-77cb6cc59c-rplkw\" (UID: \"ab84ffa0-b845-45a8-a2dc-2ebdfe239270\") " pod="calico-system/whisker-77cb6cc59c-rplkw" Mar 11 02:24:45.293911 systemd-networkd[1248]: cali26edd5cf585: Gained IPv6LL Mar 11 02:24:45.369297 systemd-resolved[1458]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 11 02:24:45.369785 containerd[1560]: time="2026-03-11T02:24:45.369677562Z" level=info msg="CreateContainer within sandbox \"3348a5713206a5ed4142f1e4a2bd63b9a4415e3b10ed667888f81368e32a6696\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e526175acf0c5fa4bf48b37c3fbfe1555ed839863a236a49fa4dfbe3e4adc35a\"" Mar 11 02:24:45.374297 containerd[1560]: time="2026-03-11T02:24:45.373378801Z" level=info msg="StartContainer for \"e526175acf0c5fa4bf48b37c3fbfe1555ed839863a236a49fa4dfbe3e4adc35a\"" Mar 11 02:24:45.405458 containerd[1560]: time="2026-03-11T02:24:45.405393663Z" level=info msg="StartContainer for \"80e5963f061327cb4161a9fa71c45cd8dc089c9ded202c7be135e75ea7a8822d\" returns successfully" Mar 11 02:24:45.427523 containerd[1560]: time="2026-03-11T02:24:45.423850378Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 02:24:45.427523 containerd[1560]: time="2026-03-11T02:24:45.423917694Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 02:24:45.427523 containerd[1560]: time="2026-03-11T02:24:45.423963509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:24:45.427523 containerd[1560]: time="2026-03-11T02:24:45.424162750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:24:45.519402 systemd-resolved[1458]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 11 02:24:45.524506 containerd[1560]: time="2026-03-11T02:24:45.523629367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77cb6cc59c-rplkw,Uid:ab84ffa0-b845-45a8-a2dc-2ebdfe239270,Namespace:calico-system,Attempt:0,}" Mar 11 02:24:45.574823 containerd[1560]: time="2026-03-11T02:24:45.572646465Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 02:24:45.574823 containerd[1560]: time="2026-03-11T02:24:45.572731072Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 02:24:45.574823 containerd[1560]: time="2026-03-11T02:24:45.572819446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:24:45.574823 containerd[1560]: time="2026-03-11T02:24:45.572958385Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:24:45.592722 containerd[1560]: time="2026-03-11T02:24:45.592670957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-mht4v,Uid:bc195321-fa68-41a6-b9ce-01e15b82c109,Namespace:calico-system,Attempt:0,} returns sandbox id \"a891bcd16512517da72639ab820fb99b712742648a53dc488d0c9be5e92d6d82\"" Mar 11 02:24:45.630156 containerd[1560]: time="2026-03-11T02:24:45.629056498Z" level=info msg="StartContainer for \"e526175acf0c5fa4bf48b37c3fbfe1555ed839863a236a49fa4dfbe3e4adc35a\" returns successfully" Mar 11 02:24:45.630156 containerd[1560]: time="2026-03-11T02:24:45.629217698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f6f69c4f8-jnmsd,Uid:aa5ed09c-658b-4389-902b-dc4b31e7e361,Namespace:calico-system,Attempt:1,} returns sandbox id \"122afe1edb2e70455fcbbdac8601ca20dd11054c24880c2c2881cb4e44e8c18c\"" Mar 11 02:24:45.672866 systemd-resolved[1458]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 11 02:24:45.691136 systemd-resolved[1458]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 11 02:24:45.733783 containerd[1560]: time="2026-03-11T02:24:45.733704644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f6f69c4f8-rjhrp,Uid:a125c27c-9122-4fdb-a210-781209ab1769,Namespace:calico-system,Attempt:1,} returns sandbox id \"e493704f63c866707ac7bf928162d17dd0d4c92488a2812cc6ee8c3392ec2063\"" Mar 11 02:24:45.735043 systemd-networkd[1248]: cali6a343b61fcc: Gained IPv6LL Mar 11 02:24:45.736207 systemd-networkd[1248]: cali48bb12270a0: Gained IPv6LL Mar 11 02:24:45.756394 containerd[1560]: time="2026-03-11T02:24:45.756209344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54b7f5d88d-jg49r,Uid:5cfc5544-cb19-46e9-98ab-95d03c16b97a,Namespace:calico-system,Attempt:1,} returns sandbox id \"70232b4af2b82f177469b7f3eebcce210554a144d91e5784bdd518b2e0388748\"" Mar 11 02:24:45.776668 kubelet[2645]: I0311 02:24:45.776553 2645 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32818100-783d-4e9a-8ab2-cad80d846e18" path="/var/lib/kubelet/pods/32818100-783d-4e9a-8ab2-cad80d846e18/volumes" Mar 11 02:24:45.868245 systemd-networkd[1248]: cali7f8ec77fb85: Link UP Mar 11 02:24:45.868781 systemd-networkd[1248]: cali7f8ec77fb85: Gained carrier Mar 11 02:24:45.874217 containerd[1560]: time="2026-03-11T02:24:45.873872365Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:45.875965 containerd[1560]: time="2026-03-11T02:24:45.875872998Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Mar 11 02:24:45.879478 containerd[1560]: time="2026-03-11T02:24:45.879398560Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:45.889425 containerd[1560]: time="2026-03-11T02:24:45.889009660Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:45.891168 containerd[1560]: time="2026-03-11T02:24:45.890457130Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image 
id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.647628618s" Mar 11 02:24:45.891168 containerd[1560]: time="2026-03-11T02:24:45.890508286Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Mar 11 02:24:45.891168 containerd[1560]: 2026-03-11 02:24:45.698 [ERROR][4591] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 11 02:24:45.891168 containerd[1560]: 2026-03-11 02:24:45.723 [INFO][4591] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--77cb6cc59c--rplkw-eth0 whisker-77cb6cc59c- calico-system ab84ffa0-b845-45a8-a2dc-2ebdfe239270 967 0 2026-03-11 02:24:45 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:77cb6cc59c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-77cb6cc59c-rplkw eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali7f8ec77fb85 [] [] }} ContainerID="286c0b028edfa2b79851ad3a8477399b3048f0e37ddbe24e32643fd1e53a83ed" Namespace="calico-system" Pod="whisker-77cb6cc59c-rplkw" WorkloadEndpoint="localhost-k8s-whisker--77cb6cc59c--rplkw-" Mar 11 02:24:45.891168 containerd[1560]: 2026-03-11 02:24:45.724 [INFO][4591] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="286c0b028edfa2b79851ad3a8477399b3048f0e37ddbe24e32643fd1e53a83ed" Namespace="calico-system" Pod="whisker-77cb6cc59c-rplkw" WorkloadEndpoint="localhost-k8s-whisker--77cb6cc59c--rplkw-eth0" Mar 11 02:24:45.891168 containerd[1560]: 2026-03-11 02:24:45.786 [INFO][4644] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="286c0b028edfa2b79851ad3a8477399b3048f0e37ddbe24e32643fd1e53a83ed" HandleID="k8s-pod-network.286c0b028edfa2b79851ad3a8477399b3048f0e37ddbe24e32643fd1e53a83ed" Workload="localhost-k8s-whisker--77cb6cc59c--rplkw-eth0" Mar 11 02:24:45.891168 containerd[1560]: 2026-03-11 02:24:45.802 [INFO][4644] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="286c0b028edfa2b79851ad3a8477399b3048f0e37ddbe24e32643fd1e53a83ed" HandleID="k8s-pod-network.286c0b028edfa2b79851ad3a8477399b3048f0e37ddbe24e32643fd1e53a83ed" Workload="localhost-k8s-whisker--77cb6cc59c--rplkw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ee0c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-77cb6cc59c-rplkw", "timestamp":"2026-03-11 02:24:45.786906748 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0005e71e0)} Mar 11 02:24:45.891168 containerd[1560]: 2026-03-11 02:24:45.802 [INFO][4644] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:24:45.891168 containerd[1560]: 2026-03-11 02:24:45.803 [INFO][4644] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 11 02:24:45.891168 containerd[1560]: 2026-03-11 02:24:45.803 [INFO][4644] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 11 02:24:45.891168 containerd[1560]: 2026-03-11 02:24:45.807 [INFO][4644] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.286c0b028edfa2b79851ad3a8477399b3048f0e37ddbe24e32643fd1e53a83ed" host="localhost" Mar 11 02:24:45.891168 containerd[1560]: 2026-03-11 02:24:45.815 [INFO][4644] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 11 02:24:45.891168 containerd[1560]: 2026-03-11 02:24:45.824 [INFO][4644] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 11 02:24:45.891168 containerd[1560]: 2026-03-11 02:24:45.828 [INFO][4644] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 11 02:24:45.891168 containerd[1560]: 2026-03-11 02:24:45.833 [INFO][4644] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 11 02:24:45.891168 containerd[1560]: 2026-03-11 02:24:45.834 [INFO][4644] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.286c0b028edfa2b79851ad3a8477399b3048f0e37ddbe24e32643fd1e53a83ed" host="localhost" Mar 11 02:24:45.891168 containerd[1560]: 2026-03-11 02:24:45.837 [INFO][4644] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.286c0b028edfa2b79851ad3a8477399b3048f0e37ddbe24e32643fd1e53a83ed Mar 11 02:24:45.891168 containerd[1560]: 2026-03-11 02:24:45.847 [INFO][4644] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.286c0b028edfa2b79851ad3a8477399b3048f0e37ddbe24e32643fd1e53a83ed" host="localhost" Mar 11 02:24:45.891168 containerd[1560]: 2026-03-11 02:24:45.858 [INFO][4644] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.286c0b028edfa2b79851ad3a8477399b3048f0e37ddbe24e32643fd1e53a83ed" host="localhost" Mar 11 02:24:45.891168 containerd[1560]: 2026-03-11 02:24:45.858 [INFO][4644] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.286c0b028edfa2b79851ad3a8477399b3048f0e37ddbe24e32643fd1e53a83ed" host="localhost" Mar 11 02:24:45.891168 containerd[1560]: 2026-03-11 02:24:45.859 [INFO][4644] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 11 02:24:45.891168 containerd[1560]: 2026-03-11 02:24:45.859 [INFO][4644] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="286c0b028edfa2b79851ad3a8477399b3048f0e37ddbe24e32643fd1e53a83ed" HandleID="k8s-pod-network.286c0b028edfa2b79851ad3a8477399b3048f0e37ddbe24e32643fd1e53a83ed" Workload="localhost-k8s-whisker--77cb6cc59c--rplkw-eth0" Mar 11 02:24:45.892865 containerd[1560]: 2026-03-11 02:24:45.862 [INFO][4591] cni-plugin/k8s.go 418: Populated endpoint ContainerID="286c0b028edfa2b79851ad3a8477399b3048f0e37ddbe24e32643fd1e53a83ed" Namespace="calico-system" Pod="whisker-77cb6cc59c-rplkw" WorkloadEndpoint="localhost-k8s-whisker--77cb6cc59c--rplkw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--77cb6cc59c--rplkw-eth0", GenerateName:"whisker-77cb6cc59c-", Namespace:"calico-system", SelfLink:"", UID:"ab84ffa0-b845-45a8-a2dc-2ebdfe239270", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 24, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"77cb6cc59c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-77cb6cc59c-rplkw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7f8ec77fb85", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:24:45.892865 containerd[1560]: 2026-03-11 02:24:45.863 [INFO][4591] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="286c0b028edfa2b79851ad3a8477399b3048f0e37ddbe24e32643fd1e53a83ed" Namespace="calico-system" Pod="whisker-77cb6cc59c-rplkw" WorkloadEndpoint="localhost-k8s-whisker--77cb6cc59c--rplkw-eth0" Mar 11 02:24:45.892865 containerd[1560]: 2026-03-11 02:24:45.863 [INFO][4591] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7f8ec77fb85 ContainerID="286c0b028edfa2b79851ad3a8477399b3048f0e37ddbe24e32643fd1e53a83ed" Namespace="calico-system" Pod="whisker-77cb6cc59c-rplkw" WorkloadEndpoint="localhost-k8s-whisker--77cb6cc59c--rplkw-eth0" Mar 11 02:24:45.892865 containerd[1560]: 2026-03-11 02:24:45.870 [INFO][4591] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="286c0b028edfa2b79851ad3a8477399b3048f0e37ddbe24e32643fd1e53a83ed" Namespace="calico-system" Pod="whisker-77cb6cc59c-rplkw" WorkloadEndpoint="localhost-k8s-whisker--77cb6cc59c--rplkw-eth0" Mar 11 02:24:45.892865 containerd[1560]: 2026-03-11 02:24:45.870 [INFO][4591] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="286c0b028edfa2b79851ad3a8477399b3048f0e37ddbe24e32643fd1e53a83ed" Namespace="calico-system" Pod="whisker-77cb6cc59c-rplkw" WorkloadEndpoint="localhost-k8s-whisker--77cb6cc59c--rplkw-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--77cb6cc59c--rplkw-eth0", GenerateName:"whisker-77cb6cc59c-", Namespace:"calico-system", SelfLink:"", UID:"ab84ffa0-b845-45a8-a2dc-2ebdfe239270", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 24, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"77cb6cc59c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"286c0b028edfa2b79851ad3a8477399b3048f0e37ddbe24e32643fd1e53a83ed", Pod:"whisker-77cb6cc59c-rplkw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7f8ec77fb85", MAC:"02:65:aa:48:50:8f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:24:45.892865 containerd[1560]: 2026-03-11 02:24:45.884 [INFO][4591] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="286c0b028edfa2b79851ad3a8477399b3048f0e37ddbe24e32643fd1e53a83ed" Namespace="calico-system" Pod="whisker-77cb6cc59c-rplkw" WorkloadEndpoint="localhost-k8s-whisker--77cb6cc59c--rplkw-eth0" Mar 11 02:24:45.894863 containerd[1560]: time="2026-03-11T02:24:45.893687543Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 11 02:24:45.898125 containerd[1560]: time="2026-03-11T02:24:45.898012415Z" level=info msg="CreateContainer within sandbox \"97fae4c7aabc458bc4aa34532334fde86ee92f90855040d6b8233ad3eeec387c\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 11 02:24:45.930810 containerd[1560]: time="2026-03-11T02:24:45.930757601Z" level=info msg="CreateContainer within sandbox \"97fae4c7aabc458bc4aa34532334fde86ee92f90855040d6b8233ad3eeec387c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"2556def1c6b4089995f2f0d8a9452ddefedd424bdd66a2766974c7d506bfcd13\"" Mar 11 02:24:45.935041 containerd[1560]: time="2026-03-11T02:24:45.932985407Z" level=info msg="StartContainer for \"2556def1c6b4089995f2f0d8a9452ddefedd424bdd66a2766974c7d506bfcd13\"" Mar 11 02:24:45.944953 containerd[1560]: time="2026-03-11T02:24:45.943943300Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 02:24:45.944953 containerd[1560]: time="2026-03-11T02:24:45.944116913Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 02:24:45.944953 containerd[1560]: time="2026-03-11T02:24:45.944137502Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:24:45.944953 containerd[1560]: time="2026-03-11T02:24:45.944409138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:24:45.965945 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3364657420.mount: Deactivated successfully. Mar 11 02:24:45.991590 systemd-networkd[1248]: cali005e303f3e5: Gained IPv6LL Mar 11 02:24:46.017363 systemd-resolved[1458]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 11 02:24:46.058292 kubelet[2645]: E0311 02:24:46.057881 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:24:46.090219 kubelet[2645]: E0311 02:24:46.089020 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:24:46.106264 containerd[1560]: time="2026-03-11T02:24:46.106229611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77cb6cc59c-rplkw,Uid:ab84ffa0-b845-45a8-a2dc-2ebdfe239270,Namespace:calico-system,Attempt:0,} returns sandbox id \"286c0b028edfa2b79851ad3a8477399b3048f0e37ddbe24e32643fd1e53a83ed\"" Mar 11 02:24:46.121278 containerd[1560]: time="2026-03-11T02:24:46.121049432Z" level=info msg="StartContainer for \"2556def1c6b4089995f2f0d8a9452ddefedd424bdd66a2766974c7d506bfcd13\" returns successfully" Mar 11 02:24:46.122902 kubelet[2645]: I0311 02:24:46.122741 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-nrfsp" podStartSLOduration=23.122691497 podStartE2EDuration="23.122691497s" podCreationTimestamp="2026-03-11 02:24:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 02:24:46.097773711 +0000 UTC m=+28.514359164" watchObservedRunningTime="2026-03-11 02:24:46.122691497 +0000 UTC m=+28.539276950" Mar 11 02:24:46.162648 kubelet[2645]: I0311 02:24:46.161015 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-m74p4" podStartSLOduration=23.160993014 podStartE2EDuration="23.160993014s" podCreationTimestamp="2026-03-11 02:24:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 02:24:46.129716746 +0000 UTC m=+28.546302229" watchObservedRunningTime="2026-03-11 02:24:46.160993014 +0000 UTC m=+28.577578467" Mar 11 02:24:46.439761 systemd-networkd[1248]: cali7de88ae98f0: Gained IPv6LL Mar 11 02:24:46.567614 systemd-networkd[1248]: cali20d4f2e1c64: Gained IPv6LL Mar 11 02:24:46.696554 systemd-networkd[1248]: cali7cea4491072: Gained IPv6LL Mar 11 02:24:47.111912 kubelet[2645]: E0311 02:24:47.111806 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:24:47.112512 kubelet[2645]: E0311 02:24:47.111959 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:24:47.142665 systemd-networkd[1248]: cali7f8ec77fb85: Gained IPv6LL Mar 11 02:24:47.186700 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount630824954.mount: Deactivated successfully. 
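The kubelet pod_startup_latency_tracker entries above carry everything needed to check the reported durations. For coredns-674b8bbfcf-nrfsp, the gap between podCreationTimestamp (02:24:23) and watchObservedRunningTime (02:24:46.122691497) comes out to exactly the reported podStartE2EDuration of 23.122691497s, and because firstStartedPulling/lastFinishedPulling are zero values here (no image pull was recorded), podStartSLOduration equals the end-to-end duration. A minimal check, assuming only the standard library time package, the timestamp layout that Go's Time.String() emits, and variable names of my own choosing:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the coredns-674b8bbfcf-nrfsp entry above.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2026-03-11 02:24:23 +0000 UTC")
	running, _ := time.Parse(layout, "2026-03-11 02:24:46.122691497 +0000 UTC")
	fmt.Println(running.Sub(created)) // 23.122691497s, matching podStartE2EDuration
}

For pods that did pull images (goldmane-5b85766d88-mht4v a little further down, for instance), the reported podStartSLOduration is smaller than the end-to-end duration by roughly the pull window (lastFinishedPulling minus firstStartedPulling), which matches the metric's intent of measuring startup latency excluding image pulls.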
Mar 11 02:24:47.821434 containerd[1560]: time="2026-03-11T02:24:47.821302930Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:47.823110 containerd[1560]: time="2026-03-11T02:24:47.822891989Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Mar 11 02:24:47.832375 containerd[1560]: time="2026-03-11T02:24:47.830218784Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:47.833711 containerd[1560]: time="2026-03-11T02:24:47.833583043Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:47.835689 containerd[1560]: time="2026-03-11T02:24:47.835255486Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 1.941533771s" Mar 11 02:24:47.835689 containerd[1560]: time="2026-03-11T02:24:47.835305610Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Mar 11 02:24:47.839364 containerd[1560]: time="2026-03-11T02:24:47.839264350Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 11 02:24:47.843276 containerd[1560]: time="2026-03-11T02:24:47.843225593Z" level=info msg="CreateContainer within sandbox \"a891bcd16512517da72639ab820fb99b712742648a53dc488d0c9be5e92d6d82\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 11 02:24:47.862664 containerd[1560]: time="2026-03-11T02:24:47.862531508Z" level=info msg="CreateContainer within sandbox \"a891bcd16512517da72639ab820fb99b712742648a53dc488d0c9be5e92d6d82\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"e0504611250231cbef3a3e4b446b17ba26bb9077d00d57d2ddc53a7cf5949a97\"" Mar 11 02:24:47.867211 containerd[1560]: time="2026-03-11T02:24:47.865700417Z" level=info msg="StartContainer for \"e0504611250231cbef3a3e4b446b17ba26bb9077d00d57d2ddc53a7cf5949a97\"" Mar 11 02:24:47.992504 containerd[1560]: time="2026-03-11T02:24:47.992410832Z" level=info msg="StartContainer for \"e0504611250231cbef3a3e4b446b17ba26bb9077d00d57d2ddc53a7cf5949a97\" returns successfully" Mar 11 02:24:48.122245 kubelet[2645]: E0311 02:24:48.120937 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:24:48.123989 kubelet[2645]: E0311 02:24:48.123889 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:24:48.144644 kubelet[2645]: I0311 02:24:48.143752 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-mht4v" podStartSLOduration=12.917391601 podStartE2EDuration="15.14373822s" podCreationTimestamp="2026-03-11 02:24:33 
+0000 UTC" firstStartedPulling="2026-03-11 02:24:45.610802391 +0000 UTC m=+28.027387854" lastFinishedPulling="2026-03-11 02:24:47.83714902 +0000 UTC m=+30.253734473" observedRunningTime="2026-03-11 02:24:48.143483545 +0000 UTC m=+30.560069008" watchObservedRunningTime="2026-03-11 02:24:48.14373822 +0000 UTC m=+30.560323673" Mar 11 02:24:49.129009 kubelet[2645]: I0311 02:24:49.126840 2645 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 11 02:24:49.344421 containerd[1560]: time="2026-03-11T02:24:49.344260576Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:49.345992 containerd[1560]: time="2026-03-11T02:24:49.345938894Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Mar 11 02:24:49.352274 containerd[1560]: time="2026-03-11T02:24:49.352201993Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:49.355446 containerd[1560]: time="2026-03-11T02:24:49.355404267Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:49.356736 containerd[1560]: time="2026-03-11T02:24:49.356652050Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 1.517244043s" Mar 11 02:24:49.356736 containerd[1560]: time="2026-03-11T02:24:49.356706282Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 11 02:24:49.358281 containerd[1560]: time="2026-03-11T02:24:49.358201106Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 11 02:24:49.363908 containerd[1560]: time="2026-03-11T02:24:49.363827991Z" level=info msg="CreateContainer within sandbox \"122afe1edb2e70455fcbbdac8601ca20dd11054c24880c2c2881cb4e44e8c18c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 11 02:24:49.387371 containerd[1560]: time="2026-03-11T02:24:49.387107676Z" level=info msg="CreateContainer within sandbox \"122afe1edb2e70455fcbbdac8601ca20dd11054c24880c2c2881cb4e44e8c18c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6e7219e4761bbb22f12db212665380b91f5e064903cb836ea158a5ac9faa9f6f\"" Mar 11 02:24:49.387909 containerd[1560]: time="2026-03-11T02:24:49.387881317Z" level=info msg="StartContainer for \"6e7219e4761bbb22f12db212665380b91f5e064903cb836ea158a5ac9faa9f6f\"" Mar 11 02:24:49.472033 containerd[1560]: time="2026-03-11T02:24:49.471148068Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:49.473569 containerd[1560]: time="2026-03-11T02:24:49.473489136Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 11 02:24:49.477692 containerd[1560]: 
time="2026-03-11T02:24:49.477593452Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 119.358454ms" Mar 11 02:24:49.477692 containerd[1560]: time="2026-03-11T02:24:49.477662862Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 11 02:24:49.482551 containerd[1560]: time="2026-03-11T02:24:49.482461839Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 11 02:24:49.487665 containerd[1560]: time="2026-03-11T02:24:49.487582106Z" level=info msg="CreateContainer within sandbox \"e493704f63c866707ac7bf928162d17dd0d4c92488a2812cc6ee8c3392ec2063\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 11 02:24:49.506686 containerd[1560]: time="2026-03-11T02:24:49.506579150Z" level=info msg="StartContainer for \"6e7219e4761bbb22f12db212665380b91f5e064903cb836ea158a5ac9faa9f6f\" returns successfully" Mar 11 02:24:49.517274 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4272798792.mount: Deactivated successfully. Mar 11 02:24:49.525057 containerd[1560]: time="2026-03-11T02:24:49.525012609Z" level=info msg="CreateContainer within sandbox \"e493704f63c866707ac7bf928162d17dd0d4c92488a2812cc6ee8c3392ec2063\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d3a130cefb6fa6dc0cccc987a88c3f68bdb47ed1e8fcdbef6270d286507403c7\"" Mar 11 02:24:49.528150 containerd[1560]: time="2026-03-11T02:24:49.527939329Z" level=info msg="StartContainer for \"d3a130cefb6fa6dc0cccc987a88c3f68bdb47ed1e8fcdbef6270d286507403c7\"" Mar 11 02:24:49.666729 containerd[1560]: time="2026-03-11T02:24:49.666642720Z" level=info msg="StartContainer for \"d3a130cefb6fa6dc0cccc987a88c3f68bdb47ed1e8fcdbef6270d286507403c7\" returns successfully" Mar 11 02:24:50.197123 kubelet[2645]: I0311 02:24:50.195865 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-7f6f69c4f8-rjhrp" podStartSLOduration=14.454939676 podStartE2EDuration="18.195847818s" podCreationTimestamp="2026-03-11 02:24:32 +0000 UTC" firstStartedPulling="2026-03-11 02:24:45.738227733 +0000 UTC m=+28.154813186" lastFinishedPulling="2026-03-11 02:24:49.479135875 +0000 UTC m=+31.895721328" observedRunningTime="2026-03-11 02:24:50.173013804 +0000 UTC m=+32.589599257" watchObservedRunningTime="2026-03-11 02:24:50.195847818 +0000 UTC m=+32.612433271" Mar 11 02:24:51.176364 kubelet[2645]: I0311 02:24:51.175610 2645 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 11 02:24:51.176364 kubelet[2645]: I0311 02:24:51.176028 2645 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 11 02:24:51.724958 containerd[1560]: time="2026-03-11T02:24:51.724544606Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:51.733445 containerd[1560]: time="2026-03-11T02:24:51.733351185Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Mar 11 02:24:51.735410 containerd[1560]: 
time="2026-03-11T02:24:51.735279544Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:51.741909 containerd[1560]: time="2026-03-11T02:24:51.741814898Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:51.743864 containerd[1560]: time="2026-03-11T02:24:51.743608754Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 2.26098855s" Mar 11 02:24:51.743864 containerd[1560]: time="2026-03-11T02:24:51.743653497Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Mar 11 02:24:51.748655 containerd[1560]: time="2026-03-11T02:24:51.748601460Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 11 02:24:51.792848 containerd[1560]: time="2026-03-11T02:24:51.792732831Z" level=info msg="CreateContainer within sandbox \"70232b4af2b82f177469b7f3eebcce210554a144d91e5784bdd518b2e0388748\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 11 02:24:51.813564 containerd[1560]: time="2026-03-11T02:24:51.813450069Z" level=info msg="CreateContainer within sandbox \"70232b4af2b82f177469b7f3eebcce210554a144d91e5784bdd518b2e0388748\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"e55a9b01ee2ae657b6c72b648bc4d30b6346d2ab830c6cb6cf4c01b8bf725966\"" Mar 11 02:24:51.816589 containerd[1560]: time="2026-03-11T02:24:51.816493578Z" level=info msg="StartContainer for \"e55a9b01ee2ae657b6c72b648bc4d30b6346d2ab830c6cb6cf4c01b8bf725966\"" Mar 11 02:24:51.971659 containerd[1560]: time="2026-03-11T02:24:51.971544955Z" level=info msg="StartContainer for \"e55a9b01ee2ae657b6c72b648bc4d30b6346d2ab830c6cb6cf4c01b8bf725966\" returns successfully" Mar 11 02:24:52.219410 kubelet[2645]: I0311 02:24:52.219062 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-7f6f69c4f8-jnmsd" podStartSLOduration=16.496164347 podStartE2EDuration="20.218953056s" podCreationTimestamp="2026-03-11 02:24:32 +0000 UTC" firstStartedPulling="2026-03-11 02:24:45.635243111 +0000 UTC m=+28.051828575" lastFinishedPulling="2026-03-11 02:24:49.358031821 +0000 UTC m=+31.774617284" observedRunningTime="2026-03-11 02:24:50.197069393 +0000 UTC m=+32.613654846" watchObservedRunningTime="2026-03-11 02:24:52.218953056 +0000 UTC m=+34.635538509" Mar 11 02:24:52.226758 kubelet[2645]: I0311 02:24:52.221751 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-54b7f5d88d-jg49r" podStartSLOduration=14.233131917 podStartE2EDuration="20.221652314s" podCreationTimestamp="2026-03-11 02:24:32 +0000 UTC" firstStartedPulling="2026-03-11 02:24:45.757976968 +0000 UTC m=+28.174562421" lastFinishedPulling="2026-03-11 02:24:51.746497345 +0000 UTC m=+34.163082818" observedRunningTime="2026-03-11 02:24:52.215600019 +0000 
UTC m=+34.632185482" watchObservedRunningTime="2026-03-11 02:24:52.221652314 +0000 UTC m=+34.638237767" Mar 11 02:24:52.499004 containerd[1560]: time="2026-03-11T02:24:52.498839117Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:52.504188 containerd[1560]: time="2026-03-11T02:24:52.503992533Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Mar 11 02:24:52.511881 containerd[1560]: time="2026-03-11T02:24:52.511771672Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:52.518761 containerd[1560]: time="2026-03-11T02:24:52.518490008Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:52.534180 containerd[1560]: time="2026-03-11T02:24:52.529718237Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 781.049681ms" Mar 11 02:24:52.534180 containerd[1560]: time="2026-03-11T02:24:52.529777527Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Mar 11 02:24:52.549865 containerd[1560]: time="2026-03-11T02:24:52.549623832Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 11 02:24:52.627852 containerd[1560]: time="2026-03-11T02:24:52.627582123Z" level=info msg="CreateContainer within sandbox \"286c0b028edfa2b79851ad3a8477399b3048f0e37ddbe24e32643fd1e53a83ed\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 11 02:24:52.720040 containerd[1560]: time="2026-03-11T02:24:52.719952154Z" level=info msg="CreateContainer within sandbox \"286c0b028edfa2b79851ad3a8477399b3048f0e37ddbe24e32643fd1e53a83ed\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"a6a7470a5f795962720b0b7346634a73cda4ac9b6e9a554f7d9aa169acb2f095\"" Mar 11 02:24:52.720852 containerd[1560]: time="2026-03-11T02:24:52.720782714Z" level=info msg="StartContainer for \"a6a7470a5f795962720b0b7346634a73cda4ac9b6e9a554f7d9aa169acb2f095\"" Mar 11 02:24:52.867888 containerd[1560]: time="2026-03-11T02:24:52.867839197Z" level=info msg="StartContainer for \"a6a7470a5f795962720b0b7346634a73cda4ac9b6e9a554f7d9aa169acb2f095\" returns successfully" Mar 11 02:24:53.195991 kubelet[2645]: I0311 02:24:53.195837 2645 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 11 02:24:53.516930 containerd[1560]: time="2026-03-11T02:24:53.516874624Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:53.518016 containerd[1560]: time="2026-03-11T02:24:53.517924421Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Mar 11 02:24:53.519422 containerd[1560]: 
time="2026-03-11T02:24:53.519303351Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:53.522457 containerd[1560]: time="2026-03-11T02:24:53.522298431Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:53.523789 containerd[1560]: time="2026-03-11T02:24:53.523658224Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 973.966705ms" Mar 11 02:24:53.523849 containerd[1560]: time="2026-03-11T02:24:53.523799978Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Mar 11 02:24:53.527153 containerd[1560]: time="2026-03-11T02:24:53.526712215Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 11 02:24:53.531586 containerd[1560]: time="2026-03-11T02:24:53.531461025Z" level=info msg="CreateContainer within sandbox \"97fae4c7aabc458bc4aa34532334fde86ee92f90855040d6b8233ad3eeec387c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 11 02:24:53.554934 containerd[1560]: time="2026-03-11T02:24:53.554870776Z" level=info msg="CreateContainer within sandbox \"97fae4c7aabc458bc4aa34532334fde86ee92f90855040d6b8233ad3eeec387c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"d6b38c3ff5068e89b6595630b817e54d929a8af585b94dad26ce282c5f1bdd23\"" Mar 11 02:24:53.555749 containerd[1560]: time="2026-03-11T02:24:53.555669853Z" level=info msg="StartContainer for \"d6b38c3ff5068e89b6595630b817e54d929a8af585b94dad26ce282c5f1bdd23\"" Mar 11 02:24:53.699427 containerd[1560]: time="2026-03-11T02:24:53.699277246Z" level=info msg="StartContainer for \"d6b38c3ff5068e89b6595630b817e54d929a8af585b94dad26ce282c5f1bdd23\" returns successfully" Mar 11 02:24:53.861551 kubelet[2645]: I0311 02:24:53.861180 2645 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 11 02:24:53.862672 kubelet[2645]: I0311 02:24:53.862628 2645 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 11 02:24:54.232237 kubelet[2645]: I0311 02:24:54.232025 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-fpgpj" podStartSLOduration=12.946449936 podStartE2EDuration="22.232004802s" podCreationTimestamp="2026-03-11 02:24:32 +0000 UTC" firstStartedPulling="2026-03-11 02:24:44.240228483 +0000 UTC m=+26.656813936" lastFinishedPulling="2026-03-11 02:24:53.525783348 +0000 UTC m=+35.942368802" observedRunningTime="2026-03-11 02:24:54.226869463 +0000 UTC m=+36.643454936" watchObservedRunningTime="2026-03-11 02:24:54.232004802 +0000 UTC m=+36.648590256" Mar 11 02:24:54.750223 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2632491203.mount: Deactivated successfully. Mar 11 02:24:54.807299 containerd[1560]: time="2026-03-11T02:24:54.807147623Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:54.808931 containerd[1560]: time="2026-03-11T02:24:54.808784718Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Mar 11 02:24:54.819160 containerd[1560]: time="2026-03-11T02:24:54.816968388Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:54.821883 containerd[1560]: time="2026-03-11T02:24:54.821245423Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:24:54.822963 containerd[1560]: time="2026-03-11T02:24:54.822854768Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 1.296100284s" Mar 11 02:24:54.822963 containerd[1560]: time="2026-03-11T02:24:54.822943241Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Mar 11 02:24:54.830933 containerd[1560]: time="2026-03-11T02:24:54.830839346Z" level=info msg="CreateContainer within sandbox \"286c0b028edfa2b79851ad3a8477399b3048f0e37ddbe24e32643fd1e53a83ed\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 11 02:24:54.855777 containerd[1560]: time="2026-03-11T02:24:54.855710310Z" level=info msg="CreateContainer within sandbox \"286c0b028edfa2b79851ad3a8477399b3048f0e37ddbe24e32643fd1e53a83ed\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"240600d6e0eef3f87c27c81753993ff868f8751ddd7550b3fe9855e5aa4a0692\"" Mar 11 02:24:54.857251 containerd[1560]: time="2026-03-11T02:24:54.857149283Z" level=info msg="StartContainer for \"240600d6e0eef3f87c27c81753993ff868f8751ddd7550b3fe9855e5aa4a0692\"" Mar 11 02:24:55.176081 kubelet[2645]: I0311 02:24:55.175958 2645 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 11 02:24:55.208973 containerd[1560]: time="2026-03-11T02:24:55.206068523Z" level=info msg="StartContainer for \"240600d6e0eef3f87c27c81753993ff868f8751ddd7550b3fe9855e5aa4a0692\" returns successfully" Mar 11 02:24:55.250754 kubelet[2645]: I0311 02:24:55.250656 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-77cb6cc59c-rplkw" podStartSLOduration=1.535375332 podStartE2EDuration="10.250628728s" podCreationTimestamp="2026-03-11 02:24:45 +0000 UTC" firstStartedPulling="2026-03-11 02:24:46.109589957 +0000 UTC m=+28.526175420" lastFinishedPulling="2026-03-11 02:24:54.824843353 +0000 UTC m=+37.241428816" observedRunningTime="2026-03-11 02:24:55.24941711 +0000 UTC m=+37.666002563" watchObservedRunningTime="2026-03-11 02:24:55.250628728 +0000 UTC m=+37.667214181" Mar 11 
02:24:55.388076 systemd[1]: run-containerd-runc-k8s.io-e55a9b01ee2ae657b6c72b648bc4d30b6346d2ab830c6cb6cf4c01b8bf725966-runc.5HL4rf.mount: Deactivated successfully. Mar 11 02:24:55.878578 kubelet[2645]: I0311 02:24:55.878490 2645 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 11 02:24:56.987886 kubelet[2645]: I0311 02:24:56.987809 2645 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 11 02:24:57.507216 kubelet[2645]: I0311 02:24:57.507150 2645 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 11 02:24:57.510026 kubelet[2645]: E0311 02:24:57.507672 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:24:58.223061 kubelet[2645]: E0311 02:24:58.222837 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:24:58.239406 kernel: calico-node[5456]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 11 02:24:58.980528 systemd-networkd[1248]: vxlan.calico: Link UP Mar 11 02:24:58.980542 systemd-networkd[1248]: vxlan.calico: Gained carrier Mar 11 02:25:00.198707 systemd-networkd[1248]: vxlan.calico: Gained IPv6LL Mar 11 02:25:05.295538 systemd[1]: run-containerd-runc-k8s.io-e55a9b01ee2ae657b6c72b648bc4d30b6346d2ab830c6cb6cf4c01b8bf725966-runc.PJA6SN.mount: Deactivated successfully. Mar 11 02:25:06.417057 kubelet[2645]: I0311 02:25:06.416980 2645 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 11 02:25:08.454657 systemd[1]: Started sshd@7-10.0.0.81:22-10.0.0.1:42012.service - OpenSSH per-connection server daemon (10.0.0.1:42012). Mar 11 02:25:08.524501 sshd[5623]: Accepted publickey for core from 10.0.0.1 port 42012 ssh2: RSA SHA256:CCKsrvYJZx5/gL+R4PqiGPMUodsOwVHZ8ifP8/vDZKQ Mar 11 02:25:08.527762 sshd[5623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:25:08.536426 systemd-logind[1545]: New session 8 of user core. Mar 11 02:25:08.553791 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 11 02:25:09.027850 sshd[5623]: pam_unix(sshd:session): session closed for user core Mar 11 02:25:09.032486 systemd[1]: sshd@7-10.0.0.81:22-10.0.0.1:42012.service: Deactivated successfully. Mar 11 02:25:09.035280 systemd[1]: session-8.scope: Deactivated successfully. Mar 11 02:25:09.035284 systemd-logind[1545]: Session 8 logged out. Waiting for processes to exit. Mar 11 02:25:09.037096 systemd-logind[1545]: Removed session 8. Mar 11 02:25:12.201853 kubelet[2645]: I0311 02:25:12.201719 2645 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 11 02:25:14.044685 systemd[1]: Started sshd@8-10.0.0.81:22-10.0.0.1:51270.service - OpenSSH per-connection server daemon (10.0.0.1:51270). Mar 11 02:25:14.075453 sshd[5696]: Accepted publickey for core from 10.0.0.1 port 51270 ssh2: RSA SHA256:CCKsrvYJZx5/gL+R4PqiGPMUodsOwVHZ8ifP8/vDZKQ Mar 11 02:25:14.077003 sshd[5696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:25:14.082770 systemd-logind[1545]: New session 9 of user core. Mar 11 02:25:14.097759 systemd[1]: Started session-9.scope - Session 9 of User core. 
Mar 11 02:25:14.236508 sshd[5696]: pam_unix(sshd:session): session closed for user core Mar 11 02:25:14.241547 systemd[1]: sshd@8-10.0.0.81:22-10.0.0.1:51270.service: Deactivated successfully. Mar 11 02:25:14.244524 systemd[1]: session-9.scope: Deactivated successfully. Mar 11 02:25:14.244617 systemd-logind[1545]: Session 9 logged out. Waiting for processes to exit. Mar 11 02:25:14.246653 systemd-logind[1545]: Removed session 9. Mar 11 02:25:15.732035 kernel: hrtimer: interrupt took 2664402 ns Mar 11 02:25:17.748792 containerd[1560]: time="2026-03-11T02:25:17.748528444Z" level=info msg="StopPodSandbox for \"839fcdfad110871483910cdc9df019dbe70ff975c2c430a06420acbcad4d86c5\"" Mar 11 02:25:17.984665 containerd[1560]: 2026-03-11 02:25:17.837 [WARNING][5722] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="839fcdfad110871483910cdc9df019dbe70ff975c2c430a06420acbcad4d86c5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--54b7f5d88d--jg49r-eth0", GenerateName:"calico-kube-controllers-54b7f5d88d-", Namespace:"calico-system", SelfLink:"", UID:"5cfc5544-cb19-46e9-98ab-95d03c16b97a", ResourceVersion:"1067", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 24, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54b7f5d88d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"70232b4af2b82f177469b7f3eebcce210554a144d91e5784bdd518b2e0388748", Pod:"calico-kube-controllers-54b7f5d88d-jg49r", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7de88ae98f0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:25:17.984665 containerd[1560]: 2026-03-11 02:25:17.840 [INFO][5722] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="839fcdfad110871483910cdc9df019dbe70ff975c2c430a06420acbcad4d86c5" Mar 11 02:25:17.984665 containerd[1560]: 2026-03-11 02:25:17.840 [INFO][5722] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="839fcdfad110871483910cdc9df019dbe70ff975c2c430a06420acbcad4d86c5" iface="eth0" netns="" Mar 11 02:25:17.984665 containerd[1560]: 2026-03-11 02:25:17.840 [INFO][5722] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="839fcdfad110871483910cdc9df019dbe70ff975c2c430a06420acbcad4d86c5" Mar 11 02:25:17.984665 containerd[1560]: 2026-03-11 02:25:17.840 [INFO][5722] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="839fcdfad110871483910cdc9df019dbe70ff975c2c430a06420acbcad4d86c5" Mar 11 02:25:17.984665 containerd[1560]: 2026-03-11 02:25:17.963 [INFO][5732] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="839fcdfad110871483910cdc9df019dbe70ff975c2c430a06420acbcad4d86c5" HandleID="k8s-pod-network.839fcdfad110871483910cdc9df019dbe70ff975c2c430a06420acbcad4d86c5" Workload="localhost-k8s-calico--kube--controllers--54b7f5d88d--jg49r-eth0" Mar 11 02:25:17.984665 containerd[1560]: 2026-03-11 02:25:17.963 [INFO][5732] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:25:17.984665 containerd[1560]: 2026-03-11 02:25:17.964 [INFO][5732] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 02:25:17.984665 containerd[1560]: 2026-03-11 02:25:17.973 [WARNING][5732] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="839fcdfad110871483910cdc9df019dbe70ff975c2c430a06420acbcad4d86c5" HandleID="k8s-pod-network.839fcdfad110871483910cdc9df019dbe70ff975c2c430a06420acbcad4d86c5" Workload="localhost-k8s-calico--kube--controllers--54b7f5d88d--jg49r-eth0" Mar 11 02:25:17.984665 containerd[1560]: 2026-03-11 02:25:17.974 [INFO][5732] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="839fcdfad110871483910cdc9df019dbe70ff975c2c430a06420acbcad4d86c5" HandleID="k8s-pod-network.839fcdfad110871483910cdc9df019dbe70ff975c2c430a06420acbcad4d86c5" Workload="localhost-k8s-calico--kube--controllers--54b7f5d88d--jg49r-eth0" Mar 11 02:25:17.984665 containerd[1560]: 2026-03-11 02:25:17.977 [INFO][5732] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 02:25:17.984665 containerd[1560]: 2026-03-11 02:25:17.980 [INFO][5722] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="839fcdfad110871483910cdc9df019dbe70ff975c2c430a06420acbcad4d86c5" Mar 11 02:25:18.000095 containerd[1560]: time="2026-03-11T02:25:17.999980555Z" level=info msg="TearDown network for sandbox \"839fcdfad110871483910cdc9df019dbe70ff975c2c430a06420acbcad4d86c5\" successfully" Mar 11 02:25:18.000095 containerd[1560]: time="2026-03-11T02:25:18.000080632Z" level=info msg="StopPodSandbox for \"839fcdfad110871483910cdc9df019dbe70ff975c2c430a06420acbcad4d86c5\" returns successfully" Mar 11 02:25:18.005280 containerd[1560]: time="2026-03-11T02:25:18.005193847Z" level=info msg="RemovePodSandbox for \"839fcdfad110871483910cdc9df019dbe70ff975c2c430a06420acbcad4d86c5\"" Mar 11 02:25:18.007428 containerd[1560]: time="2026-03-11T02:25:18.007380888Z" level=info msg="Forcibly stopping sandbox \"839fcdfad110871483910cdc9df019dbe70ff975c2c430a06420acbcad4d86c5\"" Mar 11 02:25:18.137611 containerd[1560]: 2026-03-11 02:25:18.073 [WARNING][5749] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="839fcdfad110871483910cdc9df019dbe70ff975c2c430a06420acbcad4d86c5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--54b7f5d88d--jg49r-eth0", GenerateName:"calico-kube-controllers-54b7f5d88d-", Namespace:"calico-system", SelfLink:"", UID:"5cfc5544-cb19-46e9-98ab-95d03c16b97a", ResourceVersion:"1067", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 24, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54b7f5d88d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"70232b4af2b82f177469b7f3eebcce210554a144d91e5784bdd518b2e0388748", Pod:"calico-kube-controllers-54b7f5d88d-jg49r", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7de88ae98f0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:25:18.137611 containerd[1560]: 2026-03-11 02:25:18.073 [INFO][5749] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="839fcdfad110871483910cdc9df019dbe70ff975c2c430a06420acbcad4d86c5" Mar 11 02:25:18.137611 containerd[1560]: 2026-03-11 02:25:18.073 [INFO][5749] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="839fcdfad110871483910cdc9df019dbe70ff975c2c430a06420acbcad4d86c5" iface="eth0" netns="" Mar 11 02:25:18.137611 containerd[1560]: 2026-03-11 02:25:18.074 [INFO][5749] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="839fcdfad110871483910cdc9df019dbe70ff975c2c430a06420acbcad4d86c5" Mar 11 02:25:18.137611 containerd[1560]: 2026-03-11 02:25:18.074 [INFO][5749] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="839fcdfad110871483910cdc9df019dbe70ff975c2c430a06420acbcad4d86c5" Mar 11 02:25:18.137611 containerd[1560]: 2026-03-11 02:25:18.111 [INFO][5757] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="839fcdfad110871483910cdc9df019dbe70ff975c2c430a06420acbcad4d86c5" HandleID="k8s-pod-network.839fcdfad110871483910cdc9df019dbe70ff975c2c430a06420acbcad4d86c5" Workload="localhost-k8s-calico--kube--controllers--54b7f5d88d--jg49r-eth0" Mar 11 02:25:18.137611 containerd[1560]: 2026-03-11 02:25:18.111 [INFO][5757] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:25:18.137611 containerd[1560]: 2026-03-11 02:25:18.111 [INFO][5757] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 02:25:18.137611 containerd[1560]: 2026-03-11 02:25:18.127 [WARNING][5757] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="839fcdfad110871483910cdc9df019dbe70ff975c2c430a06420acbcad4d86c5" HandleID="k8s-pod-network.839fcdfad110871483910cdc9df019dbe70ff975c2c430a06420acbcad4d86c5" Workload="localhost-k8s-calico--kube--controllers--54b7f5d88d--jg49r-eth0" Mar 11 02:25:18.137611 containerd[1560]: 2026-03-11 02:25:18.127 [INFO][5757] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="839fcdfad110871483910cdc9df019dbe70ff975c2c430a06420acbcad4d86c5" HandleID="k8s-pod-network.839fcdfad110871483910cdc9df019dbe70ff975c2c430a06420acbcad4d86c5" Workload="localhost-k8s-calico--kube--controllers--54b7f5d88d--jg49r-eth0" Mar 11 02:25:18.137611 containerd[1560]: 2026-03-11 02:25:18.130 [INFO][5757] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 02:25:18.137611 containerd[1560]: 2026-03-11 02:25:18.133 [INFO][5749] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="839fcdfad110871483910cdc9df019dbe70ff975c2c430a06420acbcad4d86c5" Mar 11 02:25:18.138075 containerd[1560]: time="2026-03-11T02:25:18.137659237Z" level=info msg="TearDown network for sandbox \"839fcdfad110871483910cdc9df019dbe70ff975c2c430a06420acbcad4d86c5\" successfully" Mar 11 02:25:18.150274 containerd[1560]: time="2026-03-11T02:25:18.150160267Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"839fcdfad110871483910cdc9df019dbe70ff975c2c430a06420acbcad4d86c5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 11 02:25:18.150594 containerd[1560]: time="2026-03-11T02:25:18.150432957Z" level=info msg="RemovePodSandbox \"839fcdfad110871483910cdc9df019dbe70ff975c2c430a06420acbcad4d86c5\" returns successfully" Mar 11 02:25:18.162947 containerd[1560]: time="2026-03-11T02:25:18.162888317Z" level=info msg="StopPodSandbox for \"b5f5965ec9af727e817b44f1f16fd431c8de950dc13565a45465818f87b98049\"" Mar 11 02:25:18.282139 containerd[1560]: 2026-03-11 02:25:18.217 [WARNING][5774] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="b5f5965ec9af727e817b44f1f16fd431c8de950dc13565a45465818f87b98049" WorkloadEndpoint="localhost-k8s-whisker--75c5945784--vvc5c-eth0" Mar 11 02:25:18.282139 containerd[1560]: 2026-03-11 02:25:18.217 [INFO][5774] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b5f5965ec9af727e817b44f1f16fd431c8de950dc13565a45465818f87b98049" Mar 11 02:25:18.282139 containerd[1560]: 2026-03-11 02:25:18.217 [INFO][5774] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b5f5965ec9af727e817b44f1f16fd431c8de950dc13565a45465818f87b98049" iface="eth0" netns="" Mar 11 02:25:18.282139 containerd[1560]: 2026-03-11 02:25:18.217 [INFO][5774] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b5f5965ec9af727e817b44f1f16fd431c8de950dc13565a45465818f87b98049" Mar 11 02:25:18.282139 containerd[1560]: 2026-03-11 02:25:18.217 [INFO][5774] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b5f5965ec9af727e817b44f1f16fd431c8de950dc13565a45465818f87b98049" Mar 11 02:25:18.282139 containerd[1560]: 2026-03-11 02:25:18.258 [INFO][5783] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b5f5965ec9af727e817b44f1f16fd431c8de950dc13565a45465818f87b98049" HandleID="k8s-pod-network.b5f5965ec9af727e817b44f1f16fd431c8de950dc13565a45465818f87b98049" Workload="localhost-k8s-whisker--75c5945784--vvc5c-eth0" Mar 11 02:25:18.282139 containerd[1560]: 2026-03-11 02:25:18.258 [INFO][5783] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:25:18.282139 containerd[1560]: 2026-03-11 02:25:18.258 [INFO][5783] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 02:25:18.282139 containerd[1560]: 2026-03-11 02:25:18.272 [WARNING][5783] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="b5f5965ec9af727e817b44f1f16fd431c8de950dc13565a45465818f87b98049" HandleID="k8s-pod-network.b5f5965ec9af727e817b44f1f16fd431c8de950dc13565a45465818f87b98049" Workload="localhost-k8s-whisker--75c5945784--vvc5c-eth0" Mar 11 02:25:18.282139 containerd[1560]: 2026-03-11 02:25:18.272 [INFO][5783] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b5f5965ec9af727e817b44f1f16fd431c8de950dc13565a45465818f87b98049" HandleID="k8s-pod-network.b5f5965ec9af727e817b44f1f16fd431c8de950dc13565a45465818f87b98049" Workload="localhost-k8s-whisker--75c5945784--vvc5c-eth0" Mar 11 02:25:18.282139 containerd[1560]: 2026-03-11 02:25:18.275 [INFO][5783] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 02:25:18.282139 containerd[1560]: 2026-03-11 02:25:18.279 [INFO][5774] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="b5f5965ec9af727e817b44f1f16fd431c8de950dc13565a45465818f87b98049" Mar 11 02:25:18.282139 containerd[1560]: time="2026-03-11T02:25:18.281781055Z" level=info msg="TearDown network for sandbox \"b5f5965ec9af727e817b44f1f16fd431c8de950dc13565a45465818f87b98049\" successfully" Mar 11 02:25:18.282139 containerd[1560]: time="2026-03-11T02:25:18.281812443Z" level=info msg="StopPodSandbox for \"b5f5965ec9af727e817b44f1f16fd431c8de950dc13565a45465818f87b98049\" returns successfully" Mar 11 02:25:18.282986 containerd[1560]: time="2026-03-11T02:25:18.282687285Z" level=info msg="RemovePodSandbox for \"b5f5965ec9af727e817b44f1f16fd431c8de950dc13565a45465818f87b98049\"" Mar 11 02:25:18.282986 containerd[1560]: time="2026-03-11T02:25:18.282722401Z" level=info msg="Forcibly stopping sandbox \"b5f5965ec9af727e817b44f1f16fd431c8de950dc13565a45465818f87b98049\"" Mar 11 02:25:18.387187 containerd[1560]: 2026-03-11 02:25:18.338 [WARNING][5801] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="b5f5965ec9af727e817b44f1f16fd431c8de950dc13565a45465818f87b98049" WorkloadEndpoint="localhost-k8s-whisker--75c5945784--vvc5c-eth0" Mar 11 02:25:18.387187 containerd[1560]: 2026-03-11 02:25:18.338 [INFO][5801] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b5f5965ec9af727e817b44f1f16fd431c8de950dc13565a45465818f87b98049" Mar 11 02:25:18.387187 containerd[1560]: 2026-03-11 02:25:18.338 [INFO][5801] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b5f5965ec9af727e817b44f1f16fd431c8de950dc13565a45465818f87b98049" iface="eth0" netns="" Mar 11 02:25:18.387187 containerd[1560]: 2026-03-11 02:25:18.338 [INFO][5801] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b5f5965ec9af727e817b44f1f16fd431c8de950dc13565a45465818f87b98049" Mar 11 02:25:18.387187 containerd[1560]: 2026-03-11 02:25:18.338 [INFO][5801] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b5f5965ec9af727e817b44f1f16fd431c8de950dc13565a45465818f87b98049" Mar 11 02:25:18.387187 containerd[1560]: 2026-03-11 02:25:18.368 [INFO][5810] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b5f5965ec9af727e817b44f1f16fd431c8de950dc13565a45465818f87b98049" HandleID="k8s-pod-network.b5f5965ec9af727e817b44f1f16fd431c8de950dc13565a45465818f87b98049" Workload="localhost-k8s-whisker--75c5945784--vvc5c-eth0" Mar 11 02:25:18.387187 containerd[1560]: 2026-03-11 02:25:18.368 [INFO][5810] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:25:18.387187 containerd[1560]: 2026-03-11 02:25:18.369 [INFO][5810] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 02:25:18.387187 containerd[1560]: 2026-03-11 02:25:18.379 [WARNING][5810] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b5f5965ec9af727e817b44f1f16fd431c8de950dc13565a45465818f87b98049" HandleID="k8s-pod-network.b5f5965ec9af727e817b44f1f16fd431c8de950dc13565a45465818f87b98049" Workload="localhost-k8s-whisker--75c5945784--vvc5c-eth0" Mar 11 02:25:18.387187 containerd[1560]: 2026-03-11 02:25:18.379 [INFO][5810] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b5f5965ec9af727e817b44f1f16fd431c8de950dc13565a45465818f87b98049" HandleID="k8s-pod-network.b5f5965ec9af727e817b44f1f16fd431c8de950dc13565a45465818f87b98049" Workload="localhost-k8s-whisker--75c5945784--vvc5c-eth0" Mar 11 02:25:18.387187 containerd[1560]: 2026-03-11 02:25:18.381 [INFO][5810] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 02:25:18.387187 containerd[1560]: 2026-03-11 02:25:18.383 [INFO][5801] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b5f5965ec9af727e817b44f1f16fd431c8de950dc13565a45465818f87b98049" Mar 11 02:25:18.387187 containerd[1560]: time="2026-03-11T02:25:18.386286625Z" level=info msg="TearDown network for sandbox \"b5f5965ec9af727e817b44f1f16fd431c8de950dc13565a45465818f87b98049\" successfully" Mar 11 02:25:18.395768 containerd[1560]: time="2026-03-11T02:25:18.395711716Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b5f5965ec9af727e817b44f1f16fd431c8de950dc13565a45465818f87b98049\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 11 02:25:18.395840 containerd[1560]: time="2026-03-11T02:25:18.395792056Z" level=info msg="RemovePodSandbox \"b5f5965ec9af727e817b44f1f16fd431c8de950dc13565a45465818f87b98049\" returns successfully" Mar 11 02:25:18.396370 containerd[1560]: time="2026-03-11T02:25:18.396291943Z" level=info msg="StopPodSandbox for \"fad13240f312e319ccfa1b4f87abed74a707ec85c630be810aa7f04322bedf11\"" Mar 11 02:25:18.497682 containerd[1560]: 2026-03-11 02:25:18.450 [WARNING][5828] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fad13240f312e319ccfa1b4f87abed74a707ec85c630be810aa7f04322bedf11" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f6f69c4f8--jnmsd-eth0", GenerateName:"calico-apiserver-7f6f69c4f8-", Namespace:"calico-system", SelfLink:"", UID:"aa5ed09c-658b-4389-902b-dc4b31e7e361", ResourceVersion:"1115", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 24, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f6f69c4f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"122afe1edb2e70455fcbbdac8601ca20dd11054c24880c2c2881cb4e44e8c18c", Pod:"calico-apiserver-7f6f69c4f8-jnmsd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali005e303f3e5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:25:18.497682 containerd[1560]: 2026-03-11 02:25:18.450 [INFO][5828] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fad13240f312e319ccfa1b4f87abed74a707ec85c630be810aa7f04322bedf11" Mar 11 02:25:18.497682 containerd[1560]: 2026-03-11 02:25:18.450 [INFO][5828] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fad13240f312e319ccfa1b4f87abed74a707ec85c630be810aa7f04322bedf11" iface="eth0" netns="" Mar 11 02:25:18.497682 containerd[1560]: 2026-03-11 02:25:18.450 [INFO][5828] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fad13240f312e319ccfa1b4f87abed74a707ec85c630be810aa7f04322bedf11" Mar 11 02:25:18.497682 containerd[1560]: 2026-03-11 02:25:18.450 [INFO][5828] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fad13240f312e319ccfa1b4f87abed74a707ec85c630be810aa7f04322bedf11" Mar 11 02:25:18.497682 containerd[1560]: 2026-03-11 02:25:18.479 [INFO][5836] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fad13240f312e319ccfa1b4f87abed74a707ec85c630be810aa7f04322bedf11" HandleID="k8s-pod-network.fad13240f312e319ccfa1b4f87abed74a707ec85c630be810aa7f04322bedf11" Workload="localhost-k8s-calico--apiserver--7f6f69c4f8--jnmsd-eth0" Mar 11 02:25:18.497682 containerd[1560]: 2026-03-11 02:25:18.479 [INFO][5836] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:25:18.497682 containerd[1560]: 2026-03-11 02:25:18.479 [INFO][5836] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 02:25:18.497682 containerd[1560]: 2026-03-11 02:25:18.488 [WARNING][5836] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fad13240f312e319ccfa1b4f87abed74a707ec85c630be810aa7f04322bedf11" HandleID="k8s-pod-network.fad13240f312e319ccfa1b4f87abed74a707ec85c630be810aa7f04322bedf11" Workload="localhost-k8s-calico--apiserver--7f6f69c4f8--jnmsd-eth0" Mar 11 02:25:18.497682 containerd[1560]: 2026-03-11 02:25:18.488 [INFO][5836] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fad13240f312e319ccfa1b4f87abed74a707ec85c630be810aa7f04322bedf11" HandleID="k8s-pod-network.fad13240f312e319ccfa1b4f87abed74a707ec85c630be810aa7f04322bedf11" Workload="localhost-k8s-calico--apiserver--7f6f69c4f8--jnmsd-eth0" Mar 11 02:25:18.497682 containerd[1560]: 2026-03-11 02:25:18.490 [INFO][5836] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 02:25:18.497682 containerd[1560]: 2026-03-11 02:25:18.494 [INFO][5828] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fad13240f312e319ccfa1b4f87abed74a707ec85c630be810aa7f04322bedf11" Mar 11 02:25:18.498400 containerd[1560]: time="2026-03-11T02:25:18.497716444Z" level=info msg="TearDown network for sandbox \"fad13240f312e319ccfa1b4f87abed74a707ec85c630be810aa7f04322bedf11\" successfully" Mar 11 02:25:18.498400 containerd[1560]: time="2026-03-11T02:25:18.497743926Z" level=info msg="StopPodSandbox for \"fad13240f312e319ccfa1b4f87abed74a707ec85c630be810aa7f04322bedf11\" returns successfully" Mar 11 02:25:18.498489 containerd[1560]: time="2026-03-11T02:25:18.498432097Z" level=info msg="RemovePodSandbox for \"fad13240f312e319ccfa1b4f87abed74a707ec85c630be810aa7f04322bedf11\"" Mar 11 02:25:18.498489 containerd[1560]: time="2026-03-11T02:25:18.498469837Z" level=info msg="Forcibly stopping sandbox \"fad13240f312e319ccfa1b4f87abed74a707ec85c630be810aa7f04322bedf11\"" Mar 11 02:25:18.602437 containerd[1560]: 2026-03-11 02:25:18.551 [WARNING][5854] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fad13240f312e319ccfa1b4f87abed74a707ec85c630be810aa7f04322bedf11" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f6f69c4f8--jnmsd-eth0", GenerateName:"calico-apiserver-7f6f69c4f8-", Namespace:"calico-system", SelfLink:"", UID:"aa5ed09c-658b-4389-902b-dc4b31e7e361", ResourceVersion:"1115", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 24, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f6f69c4f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"122afe1edb2e70455fcbbdac8601ca20dd11054c24880c2c2881cb4e44e8c18c", Pod:"calico-apiserver-7f6f69c4f8-jnmsd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali005e303f3e5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:25:18.602437 containerd[1560]: 2026-03-11 02:25:18.551 [INFO][5854] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fad13240f312e319ccfa1b4f87abed74a707ec85c630be810aa7f04322bedf11" Mar 11 02:25:18.602437 containerd[1560]: 2026-03-11 02:25:18.551 [INFO][5854] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fad13240f312e319ccfa1b4f87abed74a707ec85c630be810aa7f04322bedf11" iface="eth0" netns="" Mar 11 02:25:18.602437 containerd[1560]: 2026-03-11 02:25:18.552 [INFO][5854] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fad13240f312e319ccfa1b4f87abed74a707ec85c630be810aa7f04322bedf11" Mar 11 02:25:18.602437 containerd[1560]: 2026-03-11 02:25:18.552 [INFO][5854] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fad13240f312e319ccfa1b4f87abed74a707ec85c630be810aa7f04322bedf11" Mar 11 02:25:18.602437 containerd[1560]: 2026-03-11 02:25:18.585 [INFO][5862] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fad13240f312e319ccfa1b4f87abed74a707ec85c630be810aa7f04322bedf11" HandleID="k8s-pod-network.fad13240f312e319ccfa1b4f87abed74a707ec85c630be810aa7f04322bedf11" Workload="localhost-k8s-calico--apiserver--7f6f69c4f8--jnmsd-eth0" Mar 11 02:25:18.602437 containerd[1560]: 2026-03-11 02:25:18.585 [INFO][5862] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:25:18.602437 containerd[1560]: 2026-03-11 02:25:18.586 [INFO][5862] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 02:25:18.602437 containerd[1560]: 2026-03-11 02:25:18.593 [WARNING][5862] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fad13240f312e319ccfa1b4f87abed74a707ec85c630be810aa7f04322bedf11" HandleID="k8s-pod-network.fad13240f312e319ccfa1b4f87abed74a707ec85c630be810aa7f04322bedf11" Workload="localhost-k8s-calico--apiserver--7f6f69c4f8--jnmsd-eth0" Mar 11 02:25:18.602437 containerd[1560]: 2026-03-11 02:25:18.593 [INFO][5862] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fad13240f312e319ccfa1b4f87abed74a707ec85c630be810aa7f04322bedf11" HandleID="k8s-pod-network.fad13240f312e319ccfa1b4f87abed74a707ec85c630be810aa7f04322bedf11" Workload="localhost-k8s-calico--apiserver--7f6f69c4f8--jnmsd-eth0" Mar 11 02:25:18.602437 containerd[1560]: 2026-03-11 02:25:18.596 [INFO][5862] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 02:25:18.602437 containerd[1560]: 2026-03-11 02:25:18.599 [INFO][5854] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fad13240f312e319ccfa1b4f87abed74a707ec85c630be810aa7f04322bedf11" Mar 11 02:25:18.602437 containerd[1560]: time="2026-03-11T02:25:18.602183026Z" level=info msg="TearDown network for sandbox \"fad13240f312e319ccfa1b4f87abed74a707ec85c630be810aa7f04322bedf11\" successfully" Mar 11 02:25:18.608884 containerd[1560]: time="2026-03-11T02:25:18.608736086Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fad13240f312e319ccfa1b4f87abed74a707ec85c630be810aa7f04322bedf11\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 11 02:25:18.608884 containerd[1560]: time="2026-03-11T02:25:18.608874293Z" level=info msg="RemovePodSandbox \"fad13240f312e319ccfa1b4f87abed74a707ec85c630be810aa7f04322bedf11\" returns successfully" Mar 11 02:25:18.609684 containerd[1560]: time="2026-03-11T02:25:18.609629944Z" level=info msg="StopPodSandbox for \"a670c6dad354fc88271d6488a6afd386dbd914482c8eb8e5b6930ef935e62103\"" Mar 11 02:25:18.719546 containerd[1560]: 2026-03-11 02:25:18.668 [WARNING][5879] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a670c6dad354fc88271d6488a6afd386dbd914482c8eb8e5b6930ef935e62103" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f6f69c4f8--rjhrp-eth0", GenerateName:"calico-apiserver-7f6f69c4f8-", Namespace:"calico-system", SelfLink:"", UID:"a125c27c-9122-4fdb-a210-781209ab1769", ResourceVersion:"1076", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 24, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f6f69c4f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e493704f63c866707ac7bf928162d17dd0d4c92488a2812cc6ee8c3392ec2063", Pod:"calico-apiserver-7f6f69c4f8-rjhrp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali7cea4491072", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:25:18.719546 containerd[1560]: 2026-03-11 02:25:18.669 [INFO][5879] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a670c6dad354fc88271d6488a6afd386dbd914482c8eb8e5b6930ef935e62103" Mar 11 02:25:18.719546 containerd[1560]: 2026-03-11 02:25:18.669 [INFO][5879] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a670c6dad354fc88271d6488a6afd386dbd914482c8eb8e5b6930ef935e62103" iface="eth0" netns="" Mar 11 02:25:18.719546 containerd[1560]: 2026-03-11 02:25:18.669 [INFO][5879] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a670c6dad354fc88271d6488a6afd386dbd914482c8eb8e5b6930ef935e62103" Mar 11 02:25:18.719546 containerd[1560]: 2026-03-11 02:25:18.669 [INFO][5879] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a670c6dad354fc88271d6488a6afd386dbd914482c8eb8e5b6930ef935e62103" Mar 11 02:25:18.719546 containerd[1560]: 2026-03-11 02:25:18.698 [INFO][5887] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a670c6dad354fc88271d6488a6afd386dbd914482c8eb8e5b6930ef935e62103" HandleID="k8s-pod-network.a670c6dad354fc88271d6488a6afd386dbd914482c8eb8e5b6930ef935e62103" Workload="localhost-k8s-calico--apiserver--7f6f69c4f8--rjhrp-eth0" Mar 11 02:25:18.719546 containerd[1560]: 2026-03-11 02:25:18.698 [INFO][5887] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:25:18.719546 containerd[1560]: 2026-03-11 02:25:18.698 [INFO][5887] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 02:25:18.719546 containerd[1560]: 2026-03-11 02:25:18.707 [WARNING][5887] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a670c6dad354fc88271d6488a6afd386dbd914482c8eb8e5b6930ef935e62103" HandleID="k8s-pod-network.a670c6dad354fc88271d6488a6afd386dbd914482c8eb8e5b6930ef935e62103" Workload="localhost-k8s-calico--apiserver--7f6f69c4f8--rjhrp-eth0" Mar 11 02:25:18.719546 containerd[1560]: 2026-03-11 02:25:18.708 [INFO][5887] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a670c6dad354fc88271d6488a6afd386dbd914482c8eb8e5b6930ef935e62103" HandleID="k8s-pod-network.a670c6dad354fc88271d6488a6afd386dbd914482c8eb8e5b6930ef935e62103" Workload="localhost-k8s-calico--apiserver--7f6f69c4f8--rjhrp-eth0" Mar 11 02:25:18.719546 containerd[1560]: 2026-03-11 02:25:18.711 [INFO][5887] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 02:25:18.719546 containerd[1560]: 2026-03-11 02:25:18.715 [INFO][5879] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a670c6dad354fc88271d6488a6afd386dbd914482c8eb8e5b6930ef935e62103" Mar 11 02:25:18.719546 containerd[1560]: time="2026-03-11T02:25:18.719520131Z" level=info msg="TearDown network for sandbox \"a670c6dad354fc88271d6488a6afd386dbd914482c8eb8e5b6930ef935e62103\" successfully" Mar 11 02:25:18.720205 containerd[1560]: time="2026-03-11T02:25:18.719560466Z" level=info msg="StopPodSandbox for \"a670c6dad354fc88271d6488a6afd386dbd914482c8eb8e5b6930ef935e62103\" returns successfully" Mar 11 02:25:18.720729 containerd[1560]: time="2026-03-11T02:25:18.720563924Z" level=info msg="RemovePodSandbox for \"a670c6dad354fc88271d6488a6afd386dbd914482c8eb8e5b6930ef935e62103\"" Mar 11 02:25:18.720729 containerd[1560]: time="2026-03-11T02:25:18.720724704Z" level=info msg="Forcibly stopping sandbox \"a670c6dad354fc88271d6488a6afd386dbd914482c8eb8e5b6930ef935e62103\"" Mar 11 02:25:18.819950 containerd[1560]: 2026-03-11 02:25:18.773 [WARNING][5905] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a670c6dad354fc88271d6488a6afd386dbd914482c8eb8e5b6930ef935e62103" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f6f69c4f8--rjhrp-eth0", GenerateName:"calico-apiserver-7f6f69c4f8-", Namespace:"calico-system", SelfLink:"", UID:"a125c27c-9122-4fdb-a210-781209ab1769", ResourceVersion:"1076", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 24, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f6f69c4f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e493704f63c866707ac7bf928162d17dd0d4c92488a2812cc6ee8c3392ec2063", Pod:"calico-apiserver-7f6f69c4f8-rjhrp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali7cea4491072", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:25:18.819950 containerd[1560]: 2026-03-11 02:25:18.773 [INFO][5905] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a670c6dad354fc88271d6488a6afd386dbd914482c8eb8e5b6930ef935e62103" Mar 11 02:25:18.819950 containerd[1560]: 2026-03-11 02:25:18.773 [INFO][5905] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a670c6dad354fc88271d6488a6afd386dbd914482c8eb8e5b6930ef935e62103" iface="eth0" netns="" Mar 11 02:25:18.819950 containerd[1560]: 2026-03-11 02:25:18.773 [INFO][5905] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a670c6dad354fc88271d6488a6afd386dbd914482c8eb8e5b6930ef935e62103" Mar 11 02:25:18.819950 containerd[1560]: 2026-03-11 02:25:18.773 [INFO][5905] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a670c6dad354fc88271d6488a6afd386dbd914482c8eb8e5b6930ef935e62103" Mar 11 02:25:18.819950 containerd[1560]: 2026-03-11 02:25:18.804 [INFO][5913] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a670c6dad354fc88271d6488a6afd386dbd914482c8eb8e5b6930ef935e62103" HandleID="k8s-pod-network.a670c6dad354fc88271d6488a6afd386dbd914482c8eb8e5b6930ef935e62103" Workload="localhost-k8s-calico--apiserver--7f6f69c4f8--rjhrp-eth0" Mar 11 02:25:18.819950 containerd[1560]: 2026-03-11 02:25:18.804 [INFO][5913] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:25:18.819950 containerd[1560]: 2026-03-11 02:25:18.804 [INFO][5913] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 02:25:18.819950 containerd[1560]: 2026-03-11 02:25:18.812 [WARNING][5913] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a670c6dad354fc88271d6488a6afd386dbd914482c8eb8e5b6930ef935e62103" HandleID="k8s-pod-network.a670c6dad354fc88271d6488a6afd386dbd914482c8eb8e5b6930ef935e62103" Workload="localhost-k8s-calico--apiserver--7f6f69c4f8--rjhrp-eth0" Mar 11 02:25:18.819950 containerd[1560]: 2026-03-11 02:25:18.812 [INFO][5913] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a670c6dad354fc88271d6488a6afd386dbd914482c8eb8e5b6930ef935e62103" HandleID="k8s-pod-network.a670c6dad354fc88271d6488a6afd386dbd914482c8eb8e5b6930ef935e62103" Workload="localhost-k8s-calico--apiserver--7f6f69c4f8--rjhrp-eth0" Mar 11 02:25:18.819950 containerd[1560]: 2026-03-11 02:25:18.814 [INFO][5913] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 02:25:18.819950 containerd[1560]: 2026-03-11 02:25:18.817 [INFO][5905] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a670c6dad354fc88271d6488a6afd386dbd914482c8eb8e5b6930ef935e62103" Mar 11 02:25:18.821054 containerd[1560]: time="2026-03-11T02:25:18.819978211Z" level=info msg="TearDown network for sandbox \"a670c6dad354fc88271d6488a6afd386dbd914482c8eb8e5b6930ef935e62103\" successfully" Mar 11 02:25:18.826210 containerd[1560]: time="2026-03-11T02:25:18.826096765Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a670c6dad354fc88271d6488a6afd386dbd914482c8eb8e5b6930ef935e62103\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 11 02:25:18.826210 containerd[1560]: time="2026-03-11T02:25:18.826196982Z" level=info msg="RemovePodSandbox \"a670c6dad354fc88271d6488a6afd386dbd914482c8eb8e5b6930ef935e62103\" returns successfully" Mar 11 02:25:19.259780 systemd[1]: Started sshd@9-10.0.0.81:22-10.0.0.1:53252.service - OpenSSH per-connection server daemon (10.0.0.1:53252). Mar 11 02:25:19.291061 sshd[5921]: Accepted publickey for core from 10.0.0.1 port 53252 ssh2: RSA SHA256:CCKsrvYJZx5/gL+R4PqiGPMUodsOwVHZ8ifP8/vDZKQ Mar 11 02:25:19.293425 sshd[5921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:25:19.300414 systemd-logind[1545]: New session 10 of user core. Mar 11 02:25:19.311691 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 11 02:25:19.472672 sshd[5921]: pam_unix(sshd:session): session closed for user core Mar 11 02:25:19.478663 systemd[1]: sshd@9-10.0.0.81:22-10.0.0.1:53252.service: Deactivated successfully. Mar 11 02:25:19.483954 systemd[1]: session-10.scope: Deactivated successfully. Mar 11 02:25:19.487438 systemd-logind[1545]: Session 10 logged out. Waiting for processes to exit. Mar 11 02:25:19.493848 systemd-logind[1545]: Removed session 10. Mar 11 02:25:24.482829 systemd[1]: Started sshd@10-10.0.0.81:22-10.0.0.1:53260.service - OpenSSH per-connection server daemon (10.0.0.1:53260). Mar 11 02:25:24.515175 sshd[5951]: Accepted publickey for core from 10.0.0.1 port 53260 ssh2: RSA SHA256:CCKsrvYJZx5/gL+R4PqiGPMUodsOwVHZ8ifP8/vDZKQ Mar 11 02:25:24.517099 sshd[5951]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:25:24.522367 systemd-logind[1545]: New session 11 of user core. Mar 11 02:25:24.529635 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 11 02:25:24.652961 sshd[5951]: pam_unix(sshd:session): session closed for user core Mar 11 02:25:24.657727 systemd[1]: sshd@10-10.0.0.81:22-10.0.0.1:53260.service: Deactivated successfully. Mar 11 02:25:24.661008 systemd[1]: session-11.scope: Deactivated successfully. 
Mar 11 02:25:24.661037 systemd-logind[1545]: Session 11 logged out. Waiting for processes to exit. Mar 11 02:25:24.662960 systemd-logind[1545]: Removed session 11. Mar 11 02:25:26.770242 kubelet[2645]: E0311 02:25:26.770129 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:25:29.669789 systemd[1]: Started sshd@11-10.0.0.81:22-10.0.0.1:43132.service - OpenSSH per-connection server daemon (10.0.0.1:43132). Mar 11 02:25:29.716170 sshd[6012]: Accepted publickey for core from 10.0.0.1 port 43132 ssh2: RSA SHA256:CCKsrvYJZx5/gL+R4PqiGPMUodsOwVHZ8ifP8/vDZKQ Mar 11 02:25:29.718672 sshd[6012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:25:29.725630 systemd-logind[1545]: New session 12 of user core. Mar 11 02:25:29.735972 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 11 02:25:29.868249 sshd[6012]: pam_unix(sshd:session): session closed for user core Mar 11 02:25:29.872948 systemd[1]: sshd@11-10.0.0.81:22-10.0.0.1:43132.service: Deactivated successfully. Mar 11 02:25:29.875853 systemd-logind[1545]: Session 12 logged out. Waiting for processes to exit. Mar 11 02:25:29.875906 systemd[1]: session-12.scope: Deactivated successfully. Mar 11 02:25:29.878073 systemd-logind[1545]: Removed session 12. Mar 11 02:25:34.878639 systemd[1]: Started sshd@12-10.0.0.81:22-10.0.0.1:43144.service - OpenSSH per-connection server daemon (10.0.0.1:43144). Mar 11 02:25:34.912461 sshd[6048]: Accepted publickey for core from 10.0.0.1 port 43144 ssh2: RSA SHA256:CCKsrvYJZx5/gL+R4PqiGPMUodsOwVHZ8ifP8/vDZKQ Mar 11 02:25:34.914826 sshd[6048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:25:34.920747 systemd-logind[1545]: New session 13 of user core. Mar 11 02:25:34.927819 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 11 02:25:35.060200 sshd[6048]: pam_unix(sshd:session): session closed for user core Mar 11 02:25:35.067740 systemd[1]: Started sshd@13-10.0.0.81:22-10.0.0.1:43146.service - OpenSSH per-connection server daemon (10.0.0.1:43146). Mar 11 02:25:35.069769 systemd[1]: sshd@12-10.0.0.81:22-10.0.0.1:43144.service: Deactivated successfully. Mar 11 02:25:35.072761 systemd[1]: session-13.scope: Deactivated successfully. Mar 11 02:25:35.076562 systemd-logind[1545]: Session 13 logged out. Waiting for processes to exit. Mar 11 02:25:35.080522 systemd-logind[1545]: Removed session 13. Mar 11 02:25:35.119149 sshd[6062]: Accepted publickey for core from 10.0.0.1 port 43146 ssh2: RSA SHA256:CCKsrvYJZx5/gL+R4PqiGPMUodsOwVHZ8ifP8/vDZKQ Mar 11 02:25:35.121540 sshd[6062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:25:35.128659 systemd-logind[1545]: New session 14 of user core. Mar 11 02:25:35.136845 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 11 02:25:35.379201 sshd[6062]: pam_unix(sshd:session): session closed for user core Mar 11 02:25:35.408667 systemd[1]: Started sshd@14-10.0.0.81:22-10.0.0.1:43148.service - OpenSSH per-connection server daemon (10.0.0.1:43148). Mar 11 02:25:35.409649 systemd[1]: sshd@13-10.0.0.81:22-10.0.0.1:43146.service: Deactivated successfully. Mar 11 02:25:35.414583 systemd[1]: session-14.scope: Deactivated successfully. Mar 11 02:25:35.417574 systemd-logind[1545]: Session 14 logged out. Waiting for processes to exit. Mar 11 02:25:35.423026 systemd-logind[1545]: Removed session 14. 
Mar 11 02:25:35.507124 sshd[6076]: Accepted publickey for core from 10.0.0.1 port 43148 ssh2: RSA SHA256:CCKsrvYJZx5/gL+R4PqiGPMUodsOwVHZ8ifP8/vDZKQ Mar 11 02:25:35.509880 sshd[6076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:25:35.517736 systemd-logind[1545]: New session 15 of user core. Mar 11 02:25:35.529010 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 11 02:25:35.675912 sshd[6076]: pam_unix(sshd:session): session closed for user core Mar 11 02:25:35.683194 systemd[1]: sshd@14-10.0.0.81:22-10.0.0.1:43148.service: Deactivated successfully. Mar 11 02:25:35.686927 systemd-logind[1545]: Session 15 logged out. Waiting for processes to exit. Mar 11 02:25:35.686997 systemd[1]: session-15.scope: Deactivated successfully. Mar 11 02:25:35.689143 systemd-logind[1545]: Removed session 15. Mar 11 02:25:35.770146 kubelet[2645]: E0311 02:25:35.769990 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:25:37.769921 kubelet[2645]: E0311 02:25:37.769876 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:25:40.690729 systemd[1]: Started sshd@15-10.0.0.81:22-10.0.0.1:49610.service - OpenSSH per-connection server daemon (10.0.0.1:49610). Mar 11 02:25:40.720441 sshd[6106]: Accepted publickey for core from 10.0.0.1 port 49610 ssh2: RSA SHA256:CCKsrvYJZx5/gL+R4PqiGPMUodsOwVHZ8ifP8/vDZKQ Mar 11 02:25:40.723051 sshd[6106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:25:40.730587 systemd-logind[1545]: New session 16 of user core. Mar 11 02:25:40.740718 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 11 02:25:40.770347 kubelet[2645]: E0311 02:25:40.770272 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:25:40.890523 sshd[6106]: pam_unix(sshd:session): session closed for user core Mar 11 02:25:40.895079 systemd[1]: sshd@15-10.0.0.81:22-10.0.0.1:49610.service: Deactivated successfully. Mar 11 02:25:40.899275 systemd-logind[1545]: Session 16 logged out. Waiting for processes to exit. Mar 11 02:25:40.899537 systemd[1]: session-16.scope: Deactivated successfully. Mar 11 02:25:40.902516 systemd-logind[1545]: Removed session 16. Mar 11 02:25:45.907405 systemd[1]: Started sshd@16-10.0.0.81:22-10.0.0.1:49612.service - OpenSSH per-connection server daemon (10.0.0.1:49612). Mar 11 02:25:45.956594 sshd[6165]: Accepted publickey for core from 10.0.0.1 port 49612 ssh2: RSA SHA256:CCKsrvYJZx5/gL+R4PqiGPMUodsOwVHZ8ifP8/vDZKQ Mar 11 02:25:45.960975 sshd[6165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:25:45.969503 systemd-logind[1545]: New session 17 of user core. Mar 11 02:25:45.978050 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 11 02:25:46.163887 sshd[6165]: pam_unix(sshd:session): session closed for user core Mar 11 02:25:46.170890 systemd[1]: sshd@16-10.0.0.81:22-10.0.0.1:49612.service: Deactivated successfully. Mar 11 02:25:46.176517 systemd-logind[1545]: Session 17 logged out. Waiting for processes to exit. Mar 11 02:25:46.176612 systemd[1]: session-17.scope: Deactivated successfully. 
Mar 11 02:25:46.182747 systemd-logind[1545]: Removed session 17. Mar 11 02:25:51.185772 systemd[1]: Started sshd@17-10.0.0.81:22-10.0.0.1:42170.service - OpenSSH per-connection server daemon (10.0.0.1:42170). Mar 11 02:25:51.217611 sshd[6180]: Accepted publickey for core from 10.0.0.1 port 42170 ssh2: RSA SHA256:CCKsrvYJZx5/gL+R4PqiGPMUodsOwVHZ8ifP8/vDZKQ Mar 11 02:25:51.219704 sshd[6180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:25:51.227714 systemd-logind[1545]: New session 18 of user core. Mar 11 02:25:51.235776 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 11 02:25:51.388798 sshd[6180]: pam_unix(sshd:session): session closed for user core Mar 11 02:25:51.395671 systemd[1]: Started sshd@18-10.0.0.81:22-10.0.0.1:42182.service - OpenSSH per-connection server daemon (10.0.0.1:42182). Mar 11 02:25:51.396434 systemd[1]: sshd@17-10.0.0.81:22-10.0.0.1:42170.service: Deactivated successfully. Mar 11 02:25:51.400997 systemd-logind[1545]: Session 18 logged out. Waiting for processes to exit. Mar 11 02:25:51.402940 systemd[1]: session-18.scope: Deactivated successfully. Mar 11 02:25:51.405262 systemd-logind[1545]: Removed session 18. Mar 11 02:25:51.442017 sshd[6193]: Accepted publickey for core from 10.0.0.1 port 42182 ssh2: RSA SHA256:CCKsrvYJZx5/gL+R4PqiGPMUodsOwVHZ8ifP8/vDZKQ Mar 11 02:25:51.444164 sshd[6193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:25:51.450864 systemd-logind[1545]: New session 19 of user core. Mar 11 02:25:51.464679 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 11 02:25:51.851825 sshd[6193]: pam_unix(sshd:session): session closed for user core Mar 11 02:25:51.859809 systemd[1]: Started sshd@19-10.0.0.81:22-10.0.0.1:42198.service - OpenSSH per-connection server daemon (10.0.0.1:42198). Mar 11 02:25:51.861740 systemd[1]: sshd@18-10.0.0.81:22-10.0.0.1:42182.service: Deactivated successfully. Mar 11 02:25:51.866279 systemd[1]: session-19.scope: Deactivated successfully. Mar 11 02:25:51.867719 systemd-logind[1545]: Session 19 logged out. Waiting for processes to exit. Mar 11 02:25:51.870503 systemd-logind[1545]: Removed session 19. Mar 11 02:25:51.935687 sshd[6207]: Accepted publickey for core from 10.0.0.1 port 42198 ssh2: RSA SHA256:CCKsrvYJZx5/gL+R4PqiGPMUodsOwVHZ8ifP8/vDZKQ Mar 11 02:25:51.938582 sshd[6207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:25:51.945730 systemd-logind[1545]: New session 20 of user core. Mar 11 02:25:51.954903 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 11 02:25:52.905723 sshd[6207]: pam_unix(sshd:session): session closed for user core Mar 11 02:25:52.923814 systemd[1]: Started sshd@20-10.0.0.81:22-10.0.0.1:42204.service - OpenSSH per-connection server daemon (10.0.0.1:42204). Mar 11 02:25:52.928112 systemd[1]: sshd@19-10.0.0.81:22-10.0.0.1:42198.service: Deactivated successfully. Mar 11 02:25:52.937275 systemd[1]: session-20.scope: Deactivated successfully. Mar 11 02:25:52.944700 systemd-logind[1545]: Session 20 logged out. Waiting for processes to exit. Mar 11 02:25:52.948294 systemd-logind[1545]: Removed session 20. Mar 11 02:25:52.992393 sshd[6235]: Accepted publickey for core from 10.0.0.1 port 42204 ssh2: RSA SHA256:CCKsrvYJZx5/gL+R4PqiGPMUodsOwVHZ8ifP8/vDZKQ Mar 11 02:25:52.994890 sshd[6235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:25:53.002384 systemd-logind[1545]: New session 21 of user core. 
Mar 11 02:25:53.013830 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 11 02:25:53.431750 sshd[6235]: pam_unix(sshd:session): session closed for user core Mar 11 02:25:53.445811 systemd[1]: Started sshd@21-10.0.0.81:22-10.0.0.1:42212.service - OpenSSH per-connection server daemon (10.0.0.1:42212). Mar 11 02:25:53.448042 systemd[1]: sshd@20-10.0.0.81:22-10.0.0.1:42204.service: Deactivated successfully. Mar 11 02:25:53.451668 systemd[1]: session-21.scope: Deactivated successfully. Mar 11 02:25:53.452858 systemd-logind[1545]: Session 21 logged out. Waiting for processes to exit. Mar 11 02:25:53.465010 systemd-logind[1545]: Removed session 21. Mar 11 02:25:53.508774 sshd[6249]: Accepted publickey for core from 10.0.0.1 port 42212 ssh2: RSA SHA256:CCKsrvYJZx5/gL+R4PqiGPMUodsOwVHZ8ifP8/vDZKQ Mar 11 02:25:53.512053 sshd[6249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:25:53.524165 systemd-logind[1545]: New session 22 of user core. Mar 11 02:25:53.539026 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 11 02:25:53.779398 sshd[6249]: pam_unix(sshd:session): session closed for user core Mar 11 02:25:53.789622 systemd[1]: sshd@21-10.0.0.81:22-10.0.0.1:42212.service: Deactivated successfully. Mar 11 02:25:53.806087 systemd[1]: session-22.scope: Deactivated successfully. Mar 11 02:25:53.809074 systemd-logind[1545]: Session 22 logged out. Waiting for processes to exit. Mar 11 02:25:53.813942 systemd-logind[1545]: Removed session 22. Mar 11 02:25:56.048120 systemd[1]: run-containerd-runc-k8s.io-0ce5d45839406e3007f1e4802e2b98d4c296a8e7277ff0cc98590cb4a0469510-runc.iG9Ebr.mount: Deactivated successfully. Mar 11 02:25:56.903503 containerd[1560]: time="2026-03-11T02:25:56.903397163Z" level=info msg="StopContainer for \"0ce5d45839406e3007f1e4802e2b98d4c296a8e7277ff0cc98590cb4a0469510\" with timeout 5 (s)" Mar 11 02:25:56.904408 containerd[1560]: time="2026-03-11T02:25:56.903883226Z" level=info msg="Stop container \"0ce5d45839406e3007f1e4802e2b98d4c296a8e7277ff0cc98590cb4a0469510\" with signal terminated" Mar 11 02:25:57.019934 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ce5d45839406e3007f1e4802e2b98d4c296a8e7277ff0cc98590cb4a0469510-rootfs.mount: Deactivated successfully. 
Mar 11 02:25:57.062498 containerd[1560]: time="2026-03-11T02:25:57.018436928Z" level=info msg="shim disconnected" id=0ce5d45839406e3007f1e4802e2b98d4c296a8e7277ff0cc98590cb4a0469510 namespace=k8s.io Mar 11 02:25:57.062498 containerd[1560]: time="2026-03-11T02:25:57.062434944Z" level=warning msg="cleaning up after shim disconnected" id=0ce5d45839406e3007f1e4802e2b98d4c296a8e7277ff0cc98590cb4a0469510 namespace=k8s.io Mar 11 02:25:57.062498 containerd[1560]: time="2026-03-11T02:25:57.062458678Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 11 02:25:57.146646 containerd[1560]: time="2026-03-11T02:25:57.146498883Z" level=info msg="StopContainer for \"0ce5d45839406e3007f1e4802e2b98d4c296a8e7277ff0cc98590cb4a0469510\" returns successfully" Mar 11 02:25:57.148363 containerd[1560]: time="2026-03-11T02:25:57.148106582Z" level=info msg="StopPodSandbox for \"3883bf9dfaf3e062979c924d3db6cacd05984034329ce0a5e4beb085cad08bbe\"" Mar 11 02:25:57.151220 containerd[1560]: time="2026-03-11T02:25:57.149973866Z" level=info msg="Container to stop \"eec319d3c8301b78efb9c2110d28c8c4c28e7bba3b61c127e3b6881b4ca9b274\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 11 02:25:57.151220 containerd[1560]: time="2026-03-11T02:25:57.150047472Z" level=info msg="Container to stop \"f0dbb5cc8bd7d5b625f3ac9dc6ce1e47b0eb02e02c7af05e40f1833e4af2d4c1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 11 02:25:57.151220 containerd[1560]: time="2026-03-11T02:25:57.150064364Z" level=info msg="Container to stop \"0ce5d45839406e3007f1e4802e2b98d4c296a8e7277ff0cc98590cb4a0469510\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 11 02:25:57.151220 containerd[1560]: time="2026-03-11T02:25:57.150077688Z" level=info msg="Container to stop \"d91679f56c73d51900b45b0d84878a3eb99f4e800d79f50a1c664fb141032847\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 11 02:25:57.158635 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3883bf9dfaf3e062979c924d3db6cacd05984034329ce0a5e4beb085cad08bbe-shm.mount: Deactivated successfully. Mar 11 02:25:57.217469 containerd[1560]: time="2026-03-11T02:25:57.217277808Z" level=info msg="shim disconnected" id=3883bf9dfaf3e062979c924d3db6cacd05984034329ce0a5e4beb085cad08bbe namespace=k8s.io Mar 11 02:25:57.217469 containerd[1560]: time="2026-03-11T02:25:57.217447814Z" level=warning msg="cleaning up after shim disconnected" id=3883bf9dfaf3e062979c924d3db6cacd05984034329ce0a5e4beb085cad08bbe namespace=k8s.io Mar 11 02:25:57.217469 containerd[1560]: time="2026-03-11T02:25:57.217463483Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 11 02:25:57.219060 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3883bf9dfaf3e062979c924d3db6cacd05984034329ce0a5e4beb085cad08bbe-rootfs.mount: Deactivated successfully. 
Mar 11 02:25:57.284569 containerd[1560]: time="2026-03-11T02:25:57.284497033Z" level=info msg="TearDown network for sandbox \"3883bf9dfaf3e062979c924d3db6cacd05984034329ce0a5e4beb085cad08bbe\" successfully" Mar 11 02:25:57.284569 containerd[1560]: time="2026-03-11T02:25:57.284554459Z" level=info msg="StopPodSandbox for \"3883bf9dfaf3e062979c924d3db6cacd05984034329ce0a5e4beb085cad08bbe\" returns successfully" Mar 11 02:25:57.397810 kubelet[2645]: I0311 02:25:57.396715 2645 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/36520cef-30c2-4403-b367-6e5ba591923f-cni-net-dir\") pod \"36520cef-30c2-4403-b367-6e5ba591923f\" (UID: \"36520cef-30c2-4403-b367-6e5ba591923f\") " Mar 11 02:25:57.397810 kubelet[2645]: I0311 02:25:57.396799 2645 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/36520cef-30c2-4403-b367-6e5ba591923f-cni-log-dir\") pod \"36520cef-30c2-4403-b367-6e5ba591923f\" (UID: \"36520cef-30c2-4403-b367-6e5ba591923f\") " Mar 11 02:25:57.397810 kubelet[2645]: I0311 02:25:57.396839 2645 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/36520cef-30c2-4403-b367-6e5ba591923f-flexvol-driver-host\") pod \"36520cef-30c2-4403-b367-6e5ba591923f\" (UID: \"36520cef-30c2-4403-b367-6e5ba591923f\") " Mar 11 02:25:57.397810 kubelet[2645]: I0311 02:25:57.396860 2645 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/36520cef-30c2-4403-b367-6e5ba591923f-tigera-ca-bundle\") pod \"36520cef-30c2-4403-b367-6e5ba591923f\" (UID: \"36520cef-30c2-4403-b367-6e5ba591923f\") " Mar 11 02:25:57.397810 kubelet[2645]: I0311 02:25:57.396881 2645 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/36520cef-30c2-4403-b367-6e5ba591923f-nodeproc\") pod \"36520cef-30c2-4403-b367-6e5ba591923f\" (UID: \"36520cef-30c2-4403-b367-6e5ba591923f\") " Mar 11 02:25:57.397810 kubelet[2645]: I0311 02:25:57.396903 2645 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/36520cef-30c2-4403-b367-6e5ba591923f-node-certs\") pod \"36520cef-30c2-4403-b367-6e5ba591923f\" (UID: \"36520cef-30c2-4403-b367-6e5ba591923f\") " Mar 11 02:25:57.399021 kubelet[2645]: I0311 02:25:57.396915 2645 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/36520cef-30c2-4403-b367-6e5ba591923f-cni-bin-dir\") pod \"36520cef-30c2-4403-b367-6e5ba591923f\" (UID: \"36520cef-30c2-4403-b367-6e5ba591923f\") " Mar 11 02:25:57.399021 kubelet[2645]: I0311 02:25:57.396930 2645 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/36520cef-30c2-4403-b367-6e5ba591923f-xtables-lock\") pod \"36520cef-30c2-4403-b367-6e5ba591923f\" (UID: \"36520cef-30c2-4403-b367-6e5ba591923f\") " Mar 11 02:25:57.399021 kubelet[2645]: I0311 02:25:57.396946 2645 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/36520cef-30c2-4403-b367-6e5ba591923f-var-run-calico\") pod \"36520cef-30c2-4403-b367-6e5ba591923f\" (UID: \"36520cef-30c2-4403-b367-6e5ba591923f\") " Mar 11 
02:25:57.399021 kubelet[2645]: I0311 02:25:57.396965 2645 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7z9bt\" (UniqueName: \"kubernetes.io/projected/36520cef-30c2-4403-b367-6e5ba591923f-kube-api-access-7z9bt\") pod \"36520cef-30c2-4403-b367-6e5ba591923f\" (UID: \"36520cef-30c2-4403-b367-6e5ba591923f\") " Mar 11 02:25:57.399021 kubelet[2645]: I0311 02:25:57.396979 2645 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/36520cef-30c2-4403-b367-6e5ba591923f-sys-fs\") pod \"36520cef-30c2-4403-b367-6e5ba591923f\" (UID: \"36520cef-30c2-4403-b367-6e5ba591923f\") " Mar 11 02:25:57.399021 kubelet[2645]: I0311 02:25:57.396991 2645 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/36520cef-30c2-4403-b367-6e5ba591923f-var-lib-calico\") pod \"36520cef-30c2-4403-b367-6e5ba591923f\" (UID: \"36520cef-30c2-4403-b367-6e5ba591923f\") " Mar 11 02:25:57.399459 kubelet[2645]: I0311 02:25:57.397243 2645 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/36520cef-30c2-4403-b367-6e5ba591923f-bpffs\") pod \"36520cef-30c2-4403-b367-6e5ba591923f\" (UID: \"36520cef-30c2-4403-b367-6e5ba591923f\") " Mar 11 02:25:57.399459 kubelet[2645]: I0311 02:25:57.397282 2645 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/36520cef-30c2-4403-b367-6e5ba591923f-lib-modules\") pod \"36520cef-30c2-4403-b367-6e5ba591923f\" (UID: \"36520cef-30c2-4403-b367-6e5ba591923f\") " Mar 11 02:25:57.399459 kubelet[2645]: I0311 02:25:57.397380 2645 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/36520cef-30c2-4403-b367-6e5ba591923f-policysync\") pod \"36520cef-30c2-4403-b367-6e5ba591923f\" (UID: \"36520cef-30c2-4403-b367-6e5ba591923f\") " Mar 11 02:25:57.399459 kubelet[2645]: I0311 02:25:57.397521 2645 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36520cef-30c2-4403-b367-6e5ba591923f-policysync" (OuterVolumeSpecName: "policysync") pod "36520cef-30c2-4403-b367-6e5ba591923f" (UID: "36520cef-30c2-4403-b367-6e5ba591923f"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 11 02:25:57.399459 kubelet[2645]: I0311 02:25:57.397576 2645 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36520cef-30c2-4403-b367-6e5ba591923f-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "36520cef-30c2-4403-b367-6e5ba591923f" (UID: "36520cef-30c2-4403-b367-6e5ba591923f"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 11 02:25:57.399677 kubelet[2645]: I0311 02:25:57.397600 2645 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36520cef-30c2-4403-b367-6e5ba591923f-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "36520cef-30c2-4403-b367-6e5ba591923f" (UID: "36520cef-30c2-4403-b367-6e5ba591923f"). InnerVolumeSpecName "flexvol-driver-host". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 11 02:25:57.399677 kubelet[2645]: I0311 02:25:57.398440 2645 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36520cef-30c2-4403-b367-6e5ba591923f-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "36520cef-30c2-4403-b367-6e5ba591923f" (UID: "36520cef-30c2-4403-b367-6e5ba591923f"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 11 02:25:57.399677 kubelet[2645]: I0311 02:25:57.398486 2645 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36520cef-30c2-4403-b367-6e5ba591923f-nodeproc" (OuterVolumeSpecName: "nodeproc") pod "36520cef-30c2-4403-b367-6e5ba591923f" (UID: "36520cef-30c2-4403-b367-6e5ba591923f"). InnerVolumeSpecName "nodeproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 11 02:25:57.399677 kubelet[2645]: I0311 02:25:57.395419 2645 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36520cef-30c2-4403-b367-6e5ba591923f-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "36520cef-30c2-4403-b367-6e5ba591923f" (UID: "36520cef-30c2-4403-b367-6e5ba591923f"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 11 02:25:57.399677 kubelet[2645]: I0311 02:25:57.399406 2645 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36520cef-30c2-4403-b367-6e5ba591923f-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "36520cef-30c2-4403-b367-6e5ba591923f" (UID: "36520cef-30c2-4403-b367-6e5ba591923f"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 11 02:25:57.399874 kubelet[2645]: I0311 02:25:57.399611 2645 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36520cef-30c2-4403-b367-6e5ba591923f-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "36520cef-30c2-4403-b367-6e5ba591923f" (UID: "36520cef-30c2-4403-b367-6e5ba591923f"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 11 02:25:57.410290 kubelet[2645]: I0311 02:25:57.409627 2645 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36520cef-30c2-4403-b367-6e5ba591923f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "36520cef-30c2-4403-b367-6e5ba591923f" (UID: "36520cef-30c2-4403-b367-6e5ba591923f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 11 02:25:57.410290 kubelet[2645]: I0311 02:25:57.409872 2645 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36520cef-30c2-4403-b367-6e5ba591923f-kube-api-access-7z9bt" (OuterVolumeSpecName: "kube-api-access-7z9bt") pod "36520cef-30c2-4403-b367-6e5ba591923f" (UID: "36520cef-30c2-4403-b367-6e5ba591923f"). InnerVolumeSpecName "kube-api-access-7z9bt". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 11 02:25:57.410290 kubelet[2645]: I0311 02:25:57.409942 2645 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36520cef-30c2-4403-b367-6e5ba591923f-sys-fs" (OuterVolumeSpecName: "sys-fs") pod "36520cef-30c2-4403-b367-6e5ba591923f" (UID: "36520cef-30c2-4403-b367-6e5ba591923f"). InnerVolumeSpecName "sys-fs". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 11 02:25:57.410290 kubelet[2645]: I0311 02:25:57.409991 2645 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36520cef-30c2-4403-b367-6e5ba591923f-bpffs" (OuterVolumeSpecName: "bpffs") pod "36520cef-30c2-4403-b367-6e5ba591923f" (UID: "36520cef-30c2-4403-b367-6e5ba591923f"). InnerVolumeSpecName "bpffs". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 11 02:25:57.410290 kubelet[2645]: I0311 02:25:57.410021 2645 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36520cef-30c2-4403-b367-6e5ba591923f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "36520cef-30c2-4403-b367-6e5ba591923f" (UID: "36520cef-30c2-4403-b367-6e5ba591923f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 11 02:25:57.411779 kubelet[2645]: I0311 02:25:57.411728 2645 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36520cef-30c2-4403-b367-6e5ba591923f-node-certs" (OuterVolumeSpecName: "node-certs") pod "36520cef-30c2-4403-b367-6e5ba591923f" (UID: "36520cef-30c2-4403-b367-6e5ba591923f"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 11 02:25:57.421416 kubelet[2645]: I0311 02:25:57.420914 2645 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36520cef-30c2-4403-b367-6e5ba591923f-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "36520cef-30c2-4403-b367-6e5ba591923f" (UID: "36520cef-30c2-4403-b367-6e5ba591923f"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 11 02:25:57.481578 kubelet[2645]: I0311 02:25:57.480053 2645 scope.go:117] "RemoveContainer" containerID="0ce5d45839406e3007f1e4802e2b98d4c296a8e7277ff0cc98590cb4a0469510" Mar 11 02:25:57.484266 containerd[1560]: time="2026-03-11T02:25:57.484151482Z" level=info msg="RemoveContainer for \"0ce5d45839406e3007f1e4802e2b98d4c296a8e7277ff0cc98590cb4a0469510\"" Mar 11 02:25:57.491468 containerd[1560]: time="2026-03-11T02:25:57.491433130Z" level=info msg="RemoveContainer for \"0ce5d45839406e3007f1e4802e2b98d4c296a8e7277ff0cc98590cb4a0469510\" returns successfully" Mar 11 02:25:57.492274 kubelet[2645]: I0311 02:25:57.492230 2645 scope.go:117] "RemoveContainer" containerID="eec319d3c8301b78efb9c2110d28c8c4c28e7bba3b61c127e3b6881b4ca9b274" Mar 11 02:25:57.495100 containerd[1560]: time="2026-03-11T02:25:57.495048680Z" level=info msg="RemoveContainer for \"eec319d3c8301b78efb9c2110d28c8c4c28e7bba3b61c127e3b6881b4ca9b274\"" Mar 11 02:25:57.503803 containerd[1560]: time="2026-03-11T02:25:57.503142999Z" level=info msg="RemoveContainer for \"eec319d3c8301b78efb9c2110d28c8c4c28e7bba3b61c127e3b6881b4ca9b274\" returns successfully" Mar 11 02:25:57.503919 kubelet[2645]: I0311 02:25:57.503538 2645 scope.go:117] "RemoveContainer" containerID="d91679f56c73d51900b45b0d84878a3eb99f4e800d79f50a1c664fb141032847" Mar 11 02:25:57.508960 containerd[1560]: time="2026-03-11T02:25:57.506602556Z" level=info msg="RemoveContainer for \"d91679f56c73d51900b45b0d84878a3eb99f4e800d79f50a1c664fb141032847\"" Mar 11 02:25:57.509171 kubelet[2645]: I0311 02:25:57.509140 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f7005cb6-5a5a-40d8-a19b-8a1876d7048a-node-certs\") pod \"calico-node-9p9sm\" (UID: 
\"f7005cb6-5a5a-40d8-a19b-8a1876d7048a\") " pod="calico-system/calico-node-9p9sm" Mar 11 02:25:57.513724 kubelet[2645]: I0311 02:25:57.513688 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f7005cb6-5a5a-40d8-a19b-8a1876d7048a-policysync\") pod \"calico-node-9p9sm\" (UID: \"f7005cb6-5a5a-40d8-a19b-8a1876d7048a\") " pod="calico-system/calico-node-9p9sm" Mar 11 02:25:57.513836 kubelet[2645]: I0311 02:25:57.513770 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f7005cb6-5a5a-40d8-a19b-8a1876d7048a-cni-net-dir\") pod \"calico-node-9p9sm\" (UID: \"f7005cb6-5a5a-40d8-a19b-8a1876d7048a\") " pod="calico-system/calico-node-9p9sm" Mar 11 02:25:57.513836 kubelet[2645]: I0311 02:25:57.513797 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/f7005cb6-5a5a-40d8-a19b-8a1876d7048a-nodeproc\") pod \"calico-node-9p9sm\" (UID: \"f7005cb6-5a5a-40d8-a19b-8a1876d7048a\") " pod="calico-system/calico-node-9p9sm" Mar 11 02:25:57.513836 kubelet[2645]: I0311 02:25:57.513820 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7005cb6-5a5a-40d8-a19b-8a1876d7048a-tigera-ca-bundle\") pod \"calico-node-9p9sm\" (UID: \"f7005cb6-5a5a-40d8-a19b-8a1876d7048a\") " pod="calico-system/calico-node-9p9sm" Mar 11 02:25:57.514033 kubelet[2645]: I0311 02:25:57.513846 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f7005cb6-5a5a-40d8-a19b-8a1876d7048a-flexvol-driver-host\") pod \"calico-node-9p9sm\" (UID: \"f7005cb6-5a5a-40d8-a19b-8a1876d7048a\") " pod="calico-system/calico-node-9p9sm" Mar 11 02:25:57.514033 kubelet[2645]: I0311 02:25:57.513873 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f7005cb6-5a5a-40d8-a19b-8a1876d7048a-lib-modules\") pod \"calico-node-9p9sm\" (UID: \"f7005cb6-5a5a-40d8-a19b-8a1876d7048a\") " pod="calico-system/calico-node-9p9sm" Mar 11 02:25:57.514033 kubelet[2645]: I0311 02:25:57.513898 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f7005cb6-5a5a-40d8-a19b-8a1876d7048a-xtables-lock\") pod \"calico-node-9p9sm\" (UID: \"f7005cb6-5a5a-40d8-a19b-8a1876d7048a\") " pod="calico-system/calico-node-9p9sm" Mar 11 02:25:57.514033 kubelet[2645]: I0311 02:25:57.513916 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/f7005cb6-5a5a-40d8-a19b-8a1876d7048a-bpffs\") pod \"calico-node-9p9sm\" (UID: \"f7005cb6-5a5a-40d8-a19b-8a1876d7048a\") " pod="calico-system/calico-node-9p9sm" Mar 11 02:25:57.514033 kubelet[2645]: I0311 02:25:57.513938 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f7005cb6-5a5a-40d8-a19b-8a1876d7048a-var-run-calico\") pod \"calico-node-9p9sm\" (UID: \"f7005cb6-5a5a-40d8-a19b-8a1876d7048a\") " pod="calico-system/calico-node-9p9sm" Mar 11 02:25:57.514462 
kubelet[2645]: I0311 02:25:57.513957 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f7005cb6-5a5a-40d8-a19b-8a1876d7048a-cni-log-dir\") pod \"calico-node-9p9sm\" (UID: \"f7005cb6-5a5a-40d8-a19b-8a1876d7048a\") " pod="calico-system/calico-node-9p9sm" Mar 11 02:25:57.514462 kubelet[2645]: I0311 02:25:57.513987 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f7005cb6-5a5a-40d8-a19b-8a1876d7048a-var-lib-calico\") pod \"calico-node-9p9sm\" (UID: \"f7005cb6-5a5a-40d8-a19b-8a1876d7048a\") " pod="calico-system/calico-node-9p9sm" Mar 11 02:25:57.514462 kubelet[2645]: I0311 02:25:57.514009 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnk64\" (UniqueName: \"kubernetes.io/projected/f7005cb6-5a5a-40d8-a19b-8a1876d7048a-kube-api-access-gnk64\") pod \"calico-node-9p9sm\" (UID: \"f7005cb6-5a5a-40d8-a19b-8a1876d7048a\") " pod="calico-system/calico-node-9p9sm" Mar 11 02:25:57.514462 kubelet[2645]: I0311 02:25:57.514029 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f7005cb6-5a5a-40d8-a19b-8a1876d7048a-cni-bin-dir\") pod \"calico-node-9p9sm\" (UID: \"f7005cb6-5a5a-40d8-a19b-8a1876d7048a\") " pod="calico-system/calico-node-9p9sm" Mar 11 02:25:57.514462 kubelet[2645]: I0311 02:25:57.514051 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/f7005cb6-5a5a-40d8-a19b-8a1876d7048a-sys-fs\") pod \"calico-node-9p9sm\" (UID: \"f7005cb6-5a5a-40d8-a19b-8a1876d7048a\") " pod="calico-system/calico-node-9p9sm" Mar 11 02:25:57.514462 kubelet[2645]: I0311 02:25:57.514085 2645 reconciler_common.go:299] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/36520cef-30c2-4403-b367-6e5ba591923f-cni-log-dir\") on node \"localhost\" DevicePath \"\"" Mar 11 02:25:57.514865 kubelet[2645]: I0311 02:25:57.514098 2645 reconciler_common.go:299] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/36520cef-30c2-4403-b367-6e5ba591923f-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\"" Mar 11 02:25:57.514865 kubelet[2645]: I0311 02:25:57.514110 2645 reconciler_common.go:299] "Volume detached for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/36520cef-30c2-4403-b367-6e5ba591923f-nodeproc\") on node \"localhost\" DevicePath \"\"" Mar 11 02:25:57.514865 kubelet[2645]: I0311 02:25:57.514121 2645 reconciler_common.go:299] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/36520cef-30c2-4403-b367-6e5ba591923f-node-certs\") on node \"localhost\" DevicePath \"\"" Mar 11 02:25:57.514865 kubelet[2645]: I0311 02:25:57.514133 2645 reconciler_common.go:299] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/36520cef-30c2-4403-b367-6e5ba591923f-cni-bin-dir\") on node \"localhost\" DevicePath \"\"" Mar 11 02:25:57.514865 kubelet[2645]: I0311 02:25:57.514145 2645 reconciler_common.go:299] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/36520cef-30c2-4403-b367-6e5ba591923f-var-run-calico\") on node \"localhost\" DevicePath \"\"" Mar 11 02:25:57.514865 kubelet[2645]: I0311 02:25:57.514157 
2645 reconciler_common.go:299] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/36520cef-30c2-4403-b367-6e5ba591923f-var-lib-calico\") on node \"localhost\" DevicePath \"\"" Mar 11 02:25:57.514865 kubelet[2645]: I0311 02:25:57.514167 2645 reconciler_common.go:299] "Volume detached for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/36520cef-30c2-4403-b367-6e5ba591923f-bpffs\") on node \"localhost\" DevicePath \"\"" Mar 11 02:25:57.514865 kubelet[2645]: I0311 02:25:57.514227 2645 reconciler_common.go:299] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/36520cef-30c2-4403-b367-6e5ba591923f-policysync\") on node \"localhost\" DevicePath \"\"" Mar 11 02:25:57.515173 kubelet[2645]: I0311 02:25:57.514240 2645 reconciler_common.go:299] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/36520cef-30c2-4403-b367-6e5ba591923f-cni-net-dir\") on node \"localhost\" DevicePath \"\"" Mar 11 02:25:57.515173 kubelet[2645]: I0311 02:25:57.514257 2645 reconciler_common.go:299] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/36520cef-30c2-4403-b367-6e5ba591923f-flexvol-driver-host\") on node \"localhost\" DevicePath \"\"" Mar 11 02:25:57.515173 kubelet[2645]: I0311 02:25:57.514271 2645 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/36520cef-30c2-4403-b367-6e5ba591923f-xtables-lock\") on node \"localhost\" DevicePath \"\"" Mar 11 02:25:57.515173 kubelet[2645]: I0311 02:25:57.514283 2645 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7z9bt\" (UniqueName: \"kubernetes.io/projected/36520cef-30c2-4403-b367-6e5ba591923f-kube-api-access-7z9bt\") on node \"localhost\" DevicePath \"\"" Mar 11 02:25:57.515173 kubelet[2645]: I0311 02:25:57.514296 2645 reconciler_common.go:299] "Volume detached for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/36520cef-30c2-4403-b367-6e5ba591923f-sys-fs\") on node \"localhost\" DevicePath \"\"" Mar 11 02:25:57.515173 kubelet[2645]: I0311 02:25:57.514374 2645 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/36520cef-30c2-4403-b367-6e5ba591923f-lib-modules\") on node \"localhost\" DevicePath \"\"" Mar 11 02:25:57.516476 containerd[1560]: time="2026-03-11T02:25:57.516432287Z" level=info msg="RemoveContainer for \"d91679f56c73d51900b45b0d84878a3eb99f4e800d79f50a1c664fb141032847\" returns successfully" Mar 11 02:25:57.516815 kubelet[2645]: I0311 02:25:57.516712 2645 scope.go:117] "RemoveContainer" containerID="f0dbb5cc8bd7d5b625f3ac9dc6ce1e47b0eb02e02c7af05e40f1833e4af2d4c1" Mar 11 02:25:57.518231 containerd[1560]: time="2026-03-11T02:25:57.518093387Z" level=info msg="RemoveContainer for \"f0dbb5cc8bd7d5b625f3ac9dc6ce1e47b0eb02e02c7af05e40f1833e4af2d4c1\"" Mar 11 02:25:57.538026 containerd[1560]: time="2026-03-11T02:25:57.537865626Z" level=info msg="RemoveContainer for \"f0dbb5cc8bd7d5b625f3ac9dc6ce1e47b0eb02e02c7af05e40f1833e4af2d4c1\" returns successfully" Mar 11 02:25:57.538562 kubelet[2645]: I0311 02:25:57.538404 2645 scope.go:117] "RemoveContainer" containerID="0ce5d45839406e3007f1e4802e2b98d4c296a8e7277ff0cc98590cb4a0469510" Mar 11 02:25:57.554626 containerd[1560]: time="2026-03-11T02:25:57.544950745Z" level=error msg="ContainerStatus for \"0ce5d45839406e3007f1e4802e2b98d4c296a8e7277ff0cc98590cb4a0469510\" failed" error="rpc error: code = NotFound desc = an error occurred when try 
to find container \"0ce5d45839406e3007f1e4802e2b98d4c296a8e7277ff0cc98590cb4a0469510\": not found" Mar 11 02:25:57.568591 kubelet[2645]: E0311 02:25:57.568478 2645 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0ce5d45839406e3007f1e4802e2b98d4c296a8e7277ff0cc98590cb4a0469510\": not found" containerID="0ce5d45839406e3007f1e4802e2b98d4c296a8e7277ff0cc98590cb4a0469510" Mar 11 02:25:57.587297 kubelet[2645]: I0311 02:25:57.569518 2645 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0ce5d45839406e3007f1e4802e2b98d4c296a8e7277ff0cc98590cb4a0469510"} err="failed to get container status \"0ce5d45839406e3007f1e4802e2b98d4c296a8e7277ff0cc98590cb4a0469510\": rpc error: code = NotFound desc = an error occurred when try to find container \"0ce5d45839406e3007f1e4802e2b98d4c296a8e7277ff0cc98590cb4a0469510\": not found" Mar 11 02:25:57.587297 kubelet[2645]: I0311 02:25:57.585980 2645 scope.go:117] "RemoveContainer" containerID="eec319d3c8301b78efb9c2110d28c8c4c28e7bba3b61c127e3b6881b4ca9b274" Mar 11 02:25:57.587297 kubelet[2645]: E0311 02:25:57.586629 2645 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"eec319d3c8301b78efb9c2110d28c8c4c28e7bba3b61c127e3b6881b4ca9b274\": not found" containerID="eec319d3c8301b78efb9c2110d28c8c4c28e7bba3b61c127e3b6881b4ca9b274" Mar 11 02:25:57.587297 kubelet[2645]: I0311 02:25:57.586664 2645 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"eec319d3c8301b78efb9c2110d28c8c4c28e7bba3b61c127e3b6881b4ca9b274"} err="failed to get container status \"eec319d3c8301b78efb9c2110d28c8c4c28e7bba3b61c127e3b6881b4ca9b274\": rpc error: code = NotFound desc = an error occurred when try to find container \"eec319d3c8301b78efb9c2110d28c8c4c28e7bba3b61c127e3b6881b4ca9b274\": not found" Mar 11 02:25:57.587297 kubelet[2645]: I0311 02:25:57.586690 2645 scope.go:117] "RemoveContainer" containerID="d91679f56c73d51900b45b0d84878a3eb99f4e800d79f50a1c664fb141032847" Mar 11 02:25:57.587297 kubelet[2645]: E0311 02:25:57.586957 2645 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d91679f56c73d51900b45b0d84878a3eb99f4e800d79f50a1c664fb141032847\": not found" containerID="d91679f56c73d51900b45b0d84878a3eb99f4e800d79f50a1c664fb141032847" Mar 11 02:25:57.587685 containerd[1560]: time="2026-03-11T02:25:57.586454070Z" level=error msg="ContainerStatus for \"eec319d3c8301b78efb9c2110d28c8c4c28e7bba3b61c127e3b6881b4ca9b274\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eec319d3c8301b78efb9c2110d28c8c4c28e7bba3b61c127e3b6881b4ca9b274\": not found" Mar 11 02:25:57.587685 containerd[1560]: time="2026-03-11T02:25:57.586863723Z" level=error msg="ContainerStatus for \"d91679f56c73d51900b45b0d84878a3eb99f4e800d79f50a1c664fb141032847\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d91679f56c73d51900b45b0d84878a3eb99f4e800d79f50a1c664fb141032847\": not found" Mar 11 02:25:57.587685 containerd[1560]: time="2026-03-11T02:25:57.587138383Z" level=error msg="ContainerStatus for \"f0dbb5cc8bd7d5b625f3ac9dc6ce1e47b0eb02e02c7af05e40f1833e4af2d4c1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"f0dbb5cc8bd7d5b625f3ac9dc6ce1e47b0eb02e02c7af05e40f1833e4af2d4c1\": not found" Mar 11 02:25:57.587792 kubelet[2645]: I0311 02:25:57.586977 2645 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d91679f56c73d51900b45b0d84878a3eb99f4e800d79f50a1c664fb141032847"} err="failed to get container status \"d91679f56c73d51900b45b0d84878a3eb99f4e800d79f50a1c664fb141032847\": rpc error: code = NotFound desc = an error occurred when try to find container \"d91679f56c73d51900b45b0d84878a3eb99f4e800d79f50a1c664fb141032847\": not found" Mar 11 02:25:57.587792 kubelet[2645]: I0311 02:25:57.586994 2645 scope.go:117] "RemoveContainer" containerID="f0dbb5cc8bd7d5b625f3ac9dc6ce1e47b0eb02e02c7af05e40f1833e4af2d4c1" Mar 11 02:25:57.587792 kubelet[2645]: E0311 02:25:57.587286 2645 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f0dbb5cc8bd7d5b625f3ac9dc6ce1e47b0eb02e02c7af05e40f1833e4af2d4c1\": not found" containerID="f0dbb5cc8bd7d5b625f3ac9dc6ce1e47b0eb02e02c7af05e40f1833e4af2d4c1" Mar 11 02:25:57.587792 kubelet[2645]: I0311 02:25:57.587371 2645 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f0dbb5cc8bd7d5b625f3ac9dc6ce1e47b0eb02e02c7af05e40f1833e4af2d4c1"} err="failed to get container status \"f0dbb5cc8bd7d5b625f3ac9dc6ce1e47b0eb02e02c7af05e40f1833e4af2d4c1\": rpc error: code = NotFound desc = an error occurred when try to find container \"f0dbb5cc8bd7d5b625f3ac9dc6ce1e47b0eb02e02c7af05e40f1833e4af2d4c1\": not found" Mar 11 02:25:57.705745 systemd[1]: var-lib-kubelet-pods-36520cef\x2d30c2\x2d4403\x2db367\x2d6e5ba591923f-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-7.mount: Deactivated successfully. Mar 11 02:25:57.706040 systemd[1]: var-lib-kubelet-pods-36520cef\x2d30c2\x2d4403\x2db367\x2d6e5ba591923f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7z9bt.mount: Deactivated successfully. Mar 11 02:25:57.706934 systemd[1]: var-lib-kubelet-pods-36520cef\x2d30c2\x2d4403\x2db367\x2d6e5ba591923f-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. Mar 11 02:25:57.734128 containerd[1560]: time="2026-03-11T02:25:57.730868462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9p9sm,Uid:f7005cb6-5a5a-40d8-a19b-8a1876d7048a,Namespace:calico-system,Attempt:0,}" Mar 11 02:25:57.774659 kubelet[2645]: I0311 02:25:57.774267 2645 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36520cef-30c2-4403-b367-6e5ba591923f" path="/var/lib/kubelet/pods/36520cef-30c2-4403-b367-6e5ba591923f/volumes" Mar 11 02:25:57.793804 containerd[1560]: time="2026-03-11T02:25:57.791262122Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 02:25:57.793804 containerd[1560]: time="2026-03-11T02:25:57.793484475Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 02:25:57.793804 containerd[1560]: time="2026-03-11T02:25:57.793516474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:25:57.793804 containerd[1560]: time="2026-03-11T02:25:57.793664249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:25:57.899638 containerd[1560]: time="2026-03-11T02:25:57.899499992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9p9sm,Uid:f7005cb6-5a5a-40d8-a19b-8a1876d7048a,Namespace:calico-system,Attempt:0,} returns sandbox id \"e42074f3a92f780bc7713f7be83907bc7f8901c27c98b30ae1293e6d558f9c21\"" Mar 11 02:25:57.909686 containerd[1560]: time="2026-03-11T02:25:57.909583868Z" level=info msg="CreateContainer within sandbox \"e42074f3a92f780bc7713f7be83907bc7f8901c27c98b30ae1293e6d558f9c21\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 11 02:25:57.957019 containerd[1560]: time="2026-03-11T02:25:57.956727654Z" level=info msg="CreateContainer within sandbox \"e42074f3a92f780bc7713f7be83907bc7f8901c27c98b30ae1293e6d558f9c21\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"aae154102f667cf453cb4ec46176e203ddddda9fa33459ae270c8f9117b6ea85\"" Mar 11 02:25:57.958390 containerd[1560]: time="2026-03-11T02:25:57.957775653Z" level=info msg="StartContainer for \"aae154102f667cf453cb4ec46176e203ddddda9fa33459ae270c8f9117b6ea85\"" Mar 11 02:25:58.088445 containerd[1560]: time="2026-03-11T02:25:58.087758257Z" level=info msg="StartContainer for \"aae154102f667cf453cb4ec46176e203ddddda9fa33459ae270c8f9117b6ea85\" returns successfully" Mar 11 02:25:58.259428 containerd[1560]: time="2026-03-11T02:25:58.258931015Z" level=info msg="shim disconnected" id=aae154102f667cf453cb4ec46176e203ddddda9fa33459ae270c8f9117b6ea85 namespace=k8s.io Mar 11 02:25:58.259428 containerd[1560]: time="2026-03-11T02:25:58.259004722Z" level=warning msg="cleaning up after shim disconnected" id=aae154102f667cf453cb4ec46176e203ddddda9fa33459ae270c8f9117b6ea85 namespace=k8s.io Mar 11 02:25:58.259428 containerd[1560]: time="2026-03-11T02:25:58.259021854Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 11 02:25:58.536995 containerd[1560]: time="2026-03-11T02:25:58.536814148Z" level=info msg="CreateContainer within sandbox \"e42074f3a92f780bc7713f7be83907bc7f8901c27c98b30ae1293e6d558f9c21\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 11 02:25:58.575051 containerd[1560]: time="2026-03-11T02:25:58.574894045Z" level=info msg="CreateContainer within sandbox \"e42074f3a92f780bc7713f7be83907bc7f8901c27c98b30ae1293e6d558f9c21\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"9d00557a47d9f31fbe620c814e52a5f74d44de02871fb9c0b34de2fba4ad844d\"" Mar 11 02:25:58.577030 containerd[1560]: time="2026-03-11T02:25:58.575823039Z" level=info msg="StartContainer for \"9d00557a47d9f31fbe620c814e52a5f74d44de02871fb9c0b34de2fba4ad844d\"" Mar 11 02:25:58.695739 containerd[1560]: time="2026-03-11T02:25:58.695695646Z" level=info msg="StartContainer for \"9d00557a47d9f31fbe620c814e52a5f74d44de02871fb9c0b34de2fba4ad844d\" returns successfully" Mar 11 02:25:58.787153 systemd[1]: Started sshd@22-10.0.0.81:22-10.0.0.1:37062.service - OpenSSH per-connection server daemon (10.0.0.1:37062). Mar 11 02:25:58.810844 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9d00557a47d9f31fbe620c814e52a5f74d44de02871fb9c0b34de2fba4ad844d-rootfs.mount: Deactivated successfully. 
Mar 11 02:25:58.834783 containerd[1560]: time="2026-03-11T02:25:58.834583260Z" level=info msg="shim disconnected" id=9d00557a47d9f31fbe620c814e52a5f74d44de02871fb9c0b34de2fba4ad844d namespace=k8s.io Mar 11 02:25:58.834783 containerd[1560]: time="2026-03-11T02:25:58.834689117Z" level=warning msg="cleaning up after shim disconnected" id=9d00557a47d9f31fbe620c814e52a5f74d44de02871fb9c0b34de2fba4ad844d namespace=k8s.io Mar 11 02:25:58.834783 containerd[1560]: time="2026-03-11T02:25:58.834705858Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 11 02:25:58.906782 sshd[6559]: Accepted publickey for core from 10.0.0.1 port 37062 ssh2: RSA SHA256:CCKsrvYJZx5/gL+R4PqiGPMUodsOwVHZ8ifP8/vDZKQ Mar 11 02:25:58.911556 sshd[6559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:25:58.921739 systemd-logind[1545]: New session 23 of user core. Mar 11 02:25:58.938053 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 11 02:25:59.162430 sshd[6559]: pam_unix(sshd:session): session closed for user core Mar 11 02:25:59.167731 systemd[1]: sshd@22-10.0.0.81:22-10.0.0.1:37062.service: Deactivated successfully. Mar 11 02:25:59.172599 systemd-logind[1545]: Session 23 logged out. Waiting for processes to exit. Mar 11 02:25:59.173496 systemd[1]: session-23.scope: Deactivated successfully. Mar 11 02:25:59.178831 systemd-logind[1545]: Removed session 23. Mar 11 02:25:59.536377 containerd[1560]: time="2026-03-11T02:25:59.536223885Z" level=info msg="CreateContainer within sandbox \"e42074f3a92f780bc7713f7be83907bc7f8901c27c98b30ae1293e6d558f9c21\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 11 02:25:59.559848 containerd[1560]: time="2026-03-11T02:25:59.559694029Z" level=info msg="CreateContainer within sandbox \"e42074f3a92f780bc7713f7be83907bc7f8901c27c98b30ae1293e6d558f9c21\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6f1fc103e00a51eee369b23ac8dbe5e84dd6a5bd615a296d8539f3298af0a3ab\"" Mar 11 02:25:59.560892 containerd[1560]: time="2026-03-11T02:25:59.560562725Z" level=info msg="StartContainer for \"6f1fc103e00a51eee369b23ac8dbe5e84dd6a5bd615a296d8539f3298af0a3ab\"" Mar 11 02:25:59.641838 containerd[1560]: time="2026-03-11T02:25:59.641743816Z" level=info msg="StartContainer for \"6f1fc103e00a51eee369b23ac8dbe5e84dd6a5bd615a296d8539f3298af0a3ab\" returns successfully" Mar 11 02:26:00.766130 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6f1fc103e00a51eee369b23ac8dbe5e84dd6a5bd615a296d8539f3298af0a3ab-rootfs.mount: Deactivated successfully. 
Mar 11 02:26:00.773285 containerd[1560]: time="2026-03-11T02:26:00.773153098Z" level=info msg="shim disconnected" id=6f1fc103e00a51eee369b23ac8dbe5e84dd6a5bd615a296d8539f3298af0a3ab namespace=k8s.io Mar 11 02:26:00.773764 containerd[1560]: time="2026-03-11T02:26:00.773279973Z" level=warning msg="cleaning up after shim disconnected" id=6f1fc103e00a51eee369b23ac8dbe5e84dd6a5bd615a296d8539f3298af0a3ab namespace=k8s.io Mar 11 02:26:00.773764 containerd[1560]: time="2026-03-11T02:26:00.773295311Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 11 02:26:01.565475 containerd[1560]: time="2026-03-11T02:26:01.565246400Z" level=info msg="CreateContainer within sandbox \"e42074f3a92f780bc7713f7be83907bc7f8901c27c98b30ae1293e6d558f9c21\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 11 02:26:01.591286 containerd[1560]: time="2026-03-11T02:26:01.591143573Z" level=info msg="CreateContainer within sandbox \"e42074f3a92f780bc7713f7be83907bc7f8901c27c98b30ae1293e6d558f9c21\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a0fe515f444dffb604effce5ad5e4a90a59d1cb594cc43c975f3594b18031d58\"" Mar 11 02:26:01.594035 containerd[1560]: time="2026-03-11T02:26:01.591998340Z" level=info msg="StartContainer for \"a0fe515f444dffb604effce5ad5e4a90a59d1cb594cc43c975f3594b18031d58\"" Mar 11 02:26:01.700910 containerd[1560]: time="2026-03-11T02:26:01.700822956Z" level=info msg="StartContainer for \"a0fe515f444dffb604effce5ad5e4a90a59d1cb594cc43c975f3594b18031d58\" returns successfully" Mar 11 02:26:01.770087 kubelet[2645]: E0311 02:26:01.769967 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:26:02.606254 kubelet[2645]: I0311 02:26:02.603472 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-9p9sm" podStartSLOduration=5.60345611 podStartE2EDuration="5.60345611s" podCreationTimestamp="2026-03-11 02:25:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 02:26:02.603096191 +0000 UTC m=+105.019681643" watchObservedRunningTime="2026-03-11 02:26:02.60345611 +0000 UTC m=+105.020041564" Mar 11 02:26:02.769565 kubelet[2645]: E0311 02:26:02.769454 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:26:04.179251 systemd[1]: Started sshd@23-10.0.0.81:22-10.0.0.1:37074.service - OpenSSH per-connection server daemon (10.0.0.1:37074). Mar 11 02:26:04.325845 sshd[6906]: Accepted publickey for core from 10.0.0.1 port 37074 ssh2: RSA SHA256:CCKsrvYJZx5/gL+R4PqiGPMUodsOwVHZ8ifP8/vDZKQ Mar 11 02:26:04.328075 sshd[6906]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:26:04.340672 systemd-logind[1545]: New session 24 of user core. Mar 11 02:26:04.347383 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 11 02:26:04.829703 sshd[6906]: pam_unix(sshd:session): session closed for user core Mar 11 02:26:04.834059 systemd[1]: sshd@23-10.0.0.81:22-10.0.0.1:37074.service: Deactivated successfully. Mar 11 02:26:04.839627 systemd-logind[1545]: Session 24 logged out. Waiting for processes to exit. Mar 11 02:26:04.839713 systemd[1]: session-24.scope: Deactivated successfully. 
Mar 11 02:26:04.841601 systemd-logind[1545]: Removed session 24. Mar 11 02:26:05.746444 systemd-resolved[1458]: Under memory pressure, flushing caches. Mar 11 02:26:05.749758 systemd-journald[1169]: Under memory pressure, flushing caches. Mar 11 02:26:05.746490 systemd-resolved[1458]: Flushed all caches. Mar 11 02:26:07.782762 systemd-resolved[1458]: Under memory pressure, flushing caches. Mar 11 02:26:07.782771 systemd-resolved[1458]: Flushed all caches. Mar 11 02:26:07.787434 systemd-journald[1169]: Under memory pressure, flushing caches. Mar 11 02:26:09.848765 systemd[1]: Started sshd@24-10.0.0.81:22-10.0.0.1:33046.service - OpenSSH per-connection server daemon (10.0.0.1:33046). Mar 11 02:26:09.889798 sshd[7030]: Accepted publickey for core from 10.0.0.1 port 33046 ssh2: RSA SHA256:CCKsrvYJZx5/gL+R4PqiGPMUodsOwVHZ8ifP8/vDZKQ Mar 11 02:26:09.892237 sshd[7030]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:26:09.899434 systemd-logind[1545]: New session 25 of user core. Mar 11 02:26:09.907768 systemd[1]: Started session-25.scope - Session 25 of User core. Mar 11 02:26:10.041035 sshd[7030]: pam_unix(sshd:session): session closed for user core Mar 11 02:26:10.046820 systemd[1]: sshd@24-10.0.0.81:22-10.0.0.1:33046.service: Deactivated successfully. Mar 11 02:26:10.049797 systemd[1]: session-25.scope: Deactivated successfully. Mar 11 02:26:10.049934 systemd-logind[1545]: Session 25 logged out. Waiting for processes to exit. Mar 11 02:26:10.052043 systemd-logind[1545]: Removed session 25.