Mar 12 01:39:54.076481 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Mar 11 23:23:33 -00 2026
Mar 12 01:39:54.076511 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=0e4243d51ac00bffbb09a606c7378a821ca08f30dbebc6b82c4452fcc120d7bc
Mar 12 01:39:54.076528 kernel: BIOS-provided physical RAM map:
Mar 12 01:39:54.076538 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 12 01:39:54.076547 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 12 01:39:54.076594 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 12 01:39:54.076607 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Mar 12 01:39:54.076617 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Mar 12 01:39:54.076627 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 12 01:39:54.076640 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 12 01:39:54.076650 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 12 01:39:54.076660 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 12 01:39:54.076669 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 12 01:39:54.076679 kernel: NX (Execute Disable) protection: active
Mar 12 01:39:54.076691 kernel: APIC: Static calls initialized
Mar 12 01:39:54.076705 kernel: SMBIOS 2.8 present.
Mar 12 01:39:54.076716 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Mar 12 01:39:54.076774 kernel: Hypervisor detected: KVM
Mar 12 01:39:54.076784 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 12 01:39:54.076794 kernel: kvm-clock: using sched offset of 4380028637 cycles
Mar 12 01:39:54.076804 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 12 01:39:54.076816 kernel: tsc: Detected 2445.426 MHz processor
Mar 12 01:39:54.076826 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 12 01:39:54.076837 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 12 01:39:54.076848 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Mar 12 01:39:54.076863 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 12 01:39:54.076874 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 12 01:39:54.076884 kernel: Using GB pages for direct mapping
Mar 12 01:39:54.076894 kernel: ACPI: Early table checksum verification disabled
Mar 12 01:39:54.076905 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Mar 12 01:39:54.076915 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 12 01:39:54.076926 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 12 01:39:54.076936 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 12 01:39:54.076950 kernel: ACPI: FACS 0x000000009CFE0000 000040
Mar 12 01:39:54.076961 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 12 01:39:54.076971 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 12 01:39:54.076982 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 12 01:39:54.076993 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 12 01:39:54.077004 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Mar 12 01:39:54.077016 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Mar 12 01:39:54.077032 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Mar 12 01:39:54.077048 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Mar 12 01:39:54.077058 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Mar 12 01:39:54.077068 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Mar 12 01:39:54.077078 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Mar 12 01:39:54.077088 kernel: No NUMA configuration found
Mar 12 01:39:54.077098 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Mar 12 01:39:54.077108 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Mar 12 01:39:54.077124 kernel: Zone ranges:
Mar 12 01:39:54.077134 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 12 01:39:54.077145 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Mar 12 01:39:54.077156 kernel: Normal empty
Mar 12 01:39:54.077167 kernel: Movable zone start for each node
Mar 12 01:39:54.077179 kernel: Early memory node ranges
Mar 12 01:39:54.077189 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 12 01:39:54.077200 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Mar 12 01:39:54.077211 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Mar 12 01:39:54.077225 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 12 01:39:54.077237 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 12 01:39:54.077248 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Mar 12 01:39:54.077259 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 12 01:39:54.077269 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 12 01:39:54.077280 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 12 01:39:54.077291 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 12 01:39:54.077301 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 12 01:39:54.077312 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 12 01:39:54.077326 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 12 01:39:54.077336 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 12 01:39:54.077346 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 12 01:39:54.077357 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 12 01:39:54.077368 kernel: TSC deadline timer available
Mar 12 01:39:54.077379 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Mar 12 01:39:54.077390 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 12 01:39:54.077400 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 12 01:39:54.077411 kernel: kvm-guest: setup PV sched yield
Mar 12 01:39:54.077423 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 12 01:39:54.077436 kernel: Booting paravirtualized kernel on KVM
Mar 12 01:39:54.077447 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 12 01:39:54.077459 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 12 01:39:54.077469 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Mar 12 01:39:54.077480 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Mar 12 01:39:54.077491 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 12 01:39:54.077500 kernel: kvm-guest: PV spinlocks enabled
Mar 12 01:39:54.077510 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 12 01:39:54.077522 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=0e4243d51ac00bffbb09a606c7378a821ca08f30dbebc6b82c4452fcc120d7bc
Mar 12 01:39:54.077536 kernel: random: crng init done
Mar 12 01:39:54.077546 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 12 01:39:54.077586 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 12 01:39:54.077594 kernel: Fallback order for Node 0: 0
Mar 12 01:39:54.077601 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Mar 12 01:39:54.077607 kernel: Policy zone: DMA32
Mar 12 01:39:54.077613 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 12 01:39:54.077619 kernel: Memory: 2434608K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 136884K reserved, 0K cma-reserved)
Mar 12 01:39:54.077629 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 12 01:39:54.077635 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 12 01:39:54.077641 kernel: ftrace: allocated 149 pages with 4 groups
Mar 12 01:39:54.077647 kernel: Dynamic Preempt: voluntary
Mar 12 01:39:54.077653 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 12 01:39:54.077660 kernel: rcu: RCU event tracing is enabled.
Mar 12 01:39:54.077666 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 12 01:39:54.077673 kernel: Trampoline variant of Tasks RCU enabled.
Mar 12 01:39:54.077679 kernel: Rude variant of Tasks RCU enabled.
Mar 12 01:39:54.077687 kernel: Tracing variant of Tasks RCU enabled.
Mar 12 01:39:54.077693 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 12 01:39:54.077700 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 12 01:39:54.077846 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 12 01:39:54.077858 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 12 01:39:54.077870 kernel: Console: colour VGA+ 80x25
Mar 12 01:39:54.077877 kernel: printk: console [ttyS0] enabled
Mar 12 01:39:54.077883 kernel: ACPI: Core revision 20230628
Mar 12 01:39:54.077889 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 12 01:39:54.077895 kernel: APIC: Switch to symmetric I/O mode setup
Mar 12 01:39:54.077906 kernel: x2apic enabled
Mar 12 01:39:54.077912 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 12 01:39:54.077918 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 12 01:39:54.077925 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 12 01:39:54.077931 kernel: kvm-guest: setup PV IPIs
Mar 12 01:39:54.077971 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 12 01:39:54.077996 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 12 01:39:54.078013 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Mar 12 01:39:54.078026 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 12 01:39:54.078039 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 12 01:39:54.078051 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 12 01:39:54.078068 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 12 01:39:54.078081 kernel: Spectre V2 : Mitigation: Retpolines
Mar 12 01:39:54.078092 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 12 01:39:54.078103 kernel: Speculative Store Bypass: Vulnerable
Mar 12 01:39:54.078113 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 12 01:39:54.078129 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 12 01:39:54.078139 kernel: active return thunk: srso_alias_return_thunk
Mar 12 01:39:54.078150 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 12 01:39:54.078160 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 12 01:39:54.078171 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 12 01:39:54.078182 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 12 01:39:54.078189 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 12 01:39:54.078196 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 12 01:39:54.078205 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 12 01:39:54.078211 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 12 01:39:54.078218 kernel: Freeing SMP alternatives memory: 32K
Mar 12 01:39:54.078224 kernel: pid_max: default: 32768 minimum: 301
Mar 12 01:39:54.078230 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 12 01:39:54.078237 kernel: landlock: Up and running.
Mar 12 01:39:54.078243 kernel: SELinux: Initializing.
Mar 12 01:39:54.078253 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 12 01:39:54.078265 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 12 01:39:54.078282 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 12 01:39:54.078294 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 12 01:39:54.078305 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 12 01:39:54.078317 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 12 01:39:54.078330 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Mar 12 01:39:54.078341 kernel: signal: max sigframe size: 1776
Mar 12 01:39:54.078351 kernel: rcu: Hierarchical SRCU implementation.
Mar 12 01:39:54.078362 kernel: rcu: Max phase no-delay instances is 400.
Mar 12 01:39:54.078373 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 12 01:39:54.078387 kernel: smp: Bringing up secondary CPUs ...
Mar 12 01:39:54.078398 kernel: smpboot: x86: Booting SMP configuration:
Mar 12 01:39:54.078408 kernel: .... node #0, CPUs: #1 #2 #3
Mar 12 01:39:54.078419 kernel: smp: Brought up 1 node, 4 CPUs
Mar 12 01:39:54.078430 kernel: smpboot: Max logical packages: 1
Mar 12 01:39:54.078440 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Mar 12 01:39:54.078451 kernel: devtmpfs: initialized
Mar 12 01:39:54.078461 kernel: x86/mm: Memory block size: 128MB
Mar 12 01:39:54.078472 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 12 01:39:54.078485 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 12 01:39:54.078497 kernel: pinctrl core: initialized pinctrl subsystem
Mar 12 01:39:54.078508 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 12 01:39:54.078514 kernel: audit: initializing netlink subsys (disabled)
Mar 12 01:39:54.078521 kernel: audit: type=2000 audit(1773279593.318:1): state=initialized audit_enabled=0 res=1
Mar 12 01:39:54.078527 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 12 01:39:54.078533 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 12 01:39:54.078540 kernel: cpuidle: using governor menu
Mar 12 01:39:54.078546 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 12 01:39:54.078592 kernel: dca service started, version 1.12.1
Mar 12 01:39:54.078601 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 12 01:39:54.078607 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 12 01:39:54.078614 kernel: PCI: Using configuration type 1 for base access
Mar 12 01:39:54.078620 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 12 01:39:54.078627 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 12 01:39:54.078634 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 12 01:39:54.078640 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 12 01:39:54.078646 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 12 01:39:54.078656 kernel: ACPI: Added _OSI(Module Device)
Mar 12 01:39:54.078662 kernel: ACPI: Added _OSI(Processor Device)
Mar 12 01:39:54.078668 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 12 01:39:54.078675 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 12 01:39:54.078681 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 12 01:39:54.078688 kernel: ACPI: Interpreter enabled
Mar 12 01:39:54.078694 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 12 01:39:54.078700 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 12 01:39:54.078707 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 12 01:39:54.078715 kernel: PCI: Using E820 reservations for host bridge windows
Mar 12 01:39:54.078762 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 12 01:39:54.078769 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 12 01:39:54.078950 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 12 01:39:54.079136 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 12 01:39:54.079280 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 12 01:39:54.079290 kernel: PCI host bridge to bus 0000:00
Mar 12 01:39:54.079413 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 12 01:39:54.079531 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 12 01:39:54.079687 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 12 01:39:54.079852 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Mar 12 01:39:54.080028 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 12 01:39:54.080171 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Mar 12 01:39:54.080283 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 12 01:39:54.080509 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 12 01:39:54.080813 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Mar 12 01:39:54.080996 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Mar 12 01:39:54.081184 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Mar 12 01:39:54.081361 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Mar 12 01:39:54.081539 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 12 01:39:54.081881 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Mar 12 01:39:54.082127 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Mar 12 01:39:54.082339 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Mar 12 01:39:54.082534 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Mar 12 01:39:54.082855 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Mar 12 01:39:54.083057 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Mar 12 01:39:54.083250 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Mar 12 01:39:54.083438 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Mar 12 01:39:54.083708 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 12 01:39:54.083972 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Mar 12 01:39:54.084174 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Mar 12 01:39:54.084364 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Mar 12 01:39:54.084552 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Mar 12 01:39:54.084872 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 12 01:39:54.085071 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 12 01:39:54.085261 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 12 01:39:54.085442 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Mar 12 01:39:54.085673 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Mar 12 01:39:54.085955 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 12 01:39:54.086149 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Mar 12 01:39:54.086167 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 12 01:39:54.086186 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 12 01:39:54.086197 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 12 01:39:54.086208 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 12 01:39:54.086218 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 12 01:39:54.086229 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 12 01:39:54.086240 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 12 01:39:54.086251 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 12 01:39:54.086261 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 12 01:39:54.086272 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 12 01:39:54.086320 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 12 01:39:54.086332 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 12 01:39:54.086343 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 12 01:39:54.086353 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 12 01:39:54.086364 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 12 01:39:54.086374 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 12 01:39:54.086385 kernel: iommu: Default domain type: Translated
Mar 12 01:39:54.086396 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 12 01:39:54.086407 kernel: PCI: Using ACPI for IRQ routing
Mar 12 01:39:54.086422 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 12 01:39:54.086432 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 12 01:39:54.086443 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Mar 12 01:39:54.086660 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 12 01:39:54.086895 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 12 01:39:54.087075 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 12 01:39:54.087093 kernel: vgaarb: loaded
Mar 12 01:39:54.087105 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 12 01:39:54.087122 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 12 01:39:54.087133 kernel: clocksource: Switched to clocksource kvm-clock
Mar 12 01:39:54.087144 kernel: VFS: Disk quotas dquot_6.6.0
Mar 12 01:39:54.087155 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 12 01:39:54.087166 kernel: pnp: PnP ACPI init
Mar 12 01:39:54.087351 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 12 01:39:54.087368 kernel: pnp: PnP ACPI: found 6 devices
Mar 12 01:39:54.087381 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 12 01:39:54.087397 kernel: NET: Registered PF_INET protocol family
Mar 12 01:39:54.087408 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 12 01:39:54.087419 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 12 01:39:54.087430 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 12 01:39:54.087441 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 12 01:39:54.087452 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 12 01:39:54.087464 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 12 01:39:54.087475 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 12 01:39:54.087486 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 12 01:39:54.087500 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 12 01:39:54.087511 kernel: NET: Registered PF_XDP protocol family
Mar 12 01:39:54.087841 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 12 01:39:54.088036 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 12 01:39:54.088200 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 12 01:39:54.088365 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Mar 12 01:39:54.088531 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 12 01:39:54.088816 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Mar 12 01:39:54.088837 kernel: PCI: CLS 0 bytes, default 64
Mar 12 01:39:54.088855 kernel: Initialise system trusted keyrings
Mar 12 01:39:54.088867 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 12 01:39:54.088878 kernel: Key type asymmetric registered
Mar 12 01:39:54.088890 kernel: Asymmetric key parser 'x509' registered
Mar 12 01:39:54.088901 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 12 01:39:54.088913 kernel: io scheduler mq-deadline registered
Mar 12 01:39:54.088925 kernel: io scheduler kyber registered
Mar 12 01:39:54.088936 kernel: io scheduler bfq registered
Mar 12 01:39:54.088948 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 12 01:39:54.088965 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 12 01:39:54.088978 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 12 01:39:54.088991 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 12 01:39:54.089004 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 12 01:39:54.089015 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 12 01:39:54.089029 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 12 01:39:54.089040 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 12 01:39:54.089051 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 12 01:39:54.089246 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 12 01:39:54.089271 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 12 01:39:54.089446 kernel: rtc_cmos 00:04: registered as rtc0
Mar 12 01:39:54.089673 kernel: rtc_cmos 00:04: setting system clock to 2026-03-12T01:39:53 UTC (1773279593)
Mar 12 01:39:54.089921 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 12 01:39:54.089941 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 12 01:39:54.089953 kernel: NET: Registered PF_INET6 protocol family
Mar 12 01:39:54.089964 kernel: Segment Routing with IPv6
Mar 12 01:39:54.089976 kernel: In-situ OAM (IOAM) with IPv6
Mar 12 01:39:54.089994 kernel: NET: Registered PF_PACKET protocol family
Mar 12 01:39:54.090007 kernel: Key type dns_resolver registered
Mar 12 01:39:54.090019 kernel: IPI shorthand broadcast: enabled
Mar 12 01:39:54.090032 kernel: sched_clock: Marking stable (1029021753, 307418229)->(1636195383, -299755401)
Mar 12 01:39:54.090043 kernel: registered taskstats version 1
Mar 12 01:39:54.090055 kernel: Loading compiled-in X.509 certificates
Mar 12 01:39:54.090067 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 67287262975845098ef9f337a0e8baa9afd38510'
Mar 12 01:39:54.090078 kernel: Key type .fscrypt registered
Mar 12 01:39:54.090089 kernel: Key type fscrypt-provisioning registered
Mar 12 01:39:54.090106 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 12 01:39:54.090118 kernel: ima: Allocated hash algorithm: sha1
Mar 12 01:39:54.090129 kernel: ima: No architecture policies found
Mar 12 01:39:54.090141 kernel: clk: Disabling unused clocks
Mar 12 01:39:54.090153 kernel: Freeing unused kernel image (initmem) memory: 42892K
Mar 12 01:39:54.090164 kernel: Write protecting the kernel read-only data: 36864k
Mar 12 01:39:54.090175 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 12 01:39:54.090187 kernel: Run /init as init process
Mar 12 01:39:54.090202 kernel: with arguments:
Mar 12 01:39:54.090214 kernel: /init
Mar 12 01:39:54.090226 kernel: with environment:
Mar 12 01:39:54.090237 kernel: HOME=/
Mar 12 01:39:54.090248 kernel: TERM=linux
Mar 12 01:39:54.090262 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 12 01:39:54.090276 systemd[1]: Detected virtualization kvm.
Mar 12 01:39:54.090288 systemd[1]: Detected architecture x86-64.
Mar 12 01:39:54.090305 systemd[1]: Running in initrd.
Mar 12 01:39:54.090316 systemd[1]: No hostname configured, using default hostname.
Mar 12 01:39:54.090328 systemd[1]: Hostname set to .
Mar 12 01:39:54.090341 systemd[1]: Initializing machine ID from VM UUID.
Mar 12 01:39:54.090353 systemd[1]: Queued start job for default target initrd.target.
Mar 12 01:39:54.090366 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 12 01:39:54.090378 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 12 01:39:54.090391 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 12 01:39:54.090407 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 12 01:39:54.090419 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 12 01:39:54.090432 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 12 01:39:54.090447 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 12 01:39:54.090460 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 12 01:39:54.090472 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 12 01:39:54.090485 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 12 01:39:54.090501 systemd[1]: Reached target paths.target - Path Units.
Mar 12 01:39:54.090513 systemd[1]: Reached target slices.target - Slice Units.
Mar 12 01:39:54.090526 systemd[1]: Reached target swap.target - Swaps.
Mar 12 01:39:54.090596 systemd[1]: Reached target timers.target - Timer Units.
Mar 12 01:39:54.090619 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 12 01:39:54.090631 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 12 01:39:54.090647 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 12 01:39:54.090661 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 12 01:39:54.090674 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 12 01:39:54.090686 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 12 01:39:54.090699 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 12 01:39:54.090711 systemd[1]: Reached target sockets.target - Socket Units.
Mar 12 01:39:54.090784 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 12 01:39:54.090799 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 12 01:39:54.090812 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 12 01:39:54.090830 systemd[1]: Starting systemd-fsck-usr.service...
Mar 12 01:39:54.090842 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 12 01:39:54.090882 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 12 01:39:54.090897 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 12 01:39:54.090964 systemd-journald[193]: Collecting audit messages is disabled.
Mar 12 01:39:54.091030 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 12 01:39:54.091044 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 12 01:39:54.091056 systemd-journald[193]: Journal started
Mar 12 01:39:54.091081 systemd-journald[193]: Runtime Journal (/run/log/journal/796946323f904ad2984757eabfbb61d7) is 6.0M, max 48.4M, 42.3M free.
Mar 12 01:39:54.072203 systemd-modules-load[194]: Inserted module 'overlay'
Mar 12 01:39:54.094817 systemd[1]: Finished systemd-fsck-usr.service.
Mar 12 01:39:54.103825 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 12 01:39:54.119843 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 12 01:39:54.122245 systemd-modules-load[194]: Inserted module 'br_netfilter'
Mar 12 01:39:54.268930 kernel: Bridge firewalling registered
Mar 12 01:39:54.124055 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 12 01:39:54.290136 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 12 01:39:54.290800 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 12 01:39:54.292023 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 12 01:39:54.295367 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 12 01:39:54.299957 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 12 01:39:54.300652 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 12 01:39:54.306134 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 12 01:39:54.330441 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 12 01:39:54.340329 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 12 01:39:54.352465 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 12 01:39:54.376087 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 12 01:39:54.381833 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 12 01:39:54.390595 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 12 01:39:54.406416 dracut-cmdline[225]: dracut-dracut-053
Mar 12 01:39:54.411694 dracut-cmdline[225]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=0e4243d51ac00bffbb09a606c7378a821ca08f30dbebc6b82c4452fcc120d7bc
Mar 12 01:39:54.431019 systemd-resolved[229]: Positive Trust Anchors:
Mar 12 01:39:54.431032 systemd-resolved[229]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 12 01:39:54.431059 systemd-resolved[229]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 12 01:39:54.433522 systemd-resolved[229]: Defaulting to hostname 'linux'.
Mar 12 01:39:54.434816 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 12 01:39:54.439966 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 12 01:39:54.505856 kernel: SCSI subsystem initialized
Mar 12 01:39:54.516826 kernel: Loading iSCSI transport class v2.0-870.
Mar 12 01:39:54.529835 kernel: iscsi: registered transport (tcp)
Mar 12 01:39:54.553363 kernel: iscsi: registered transport (qla4xxx)
Mar 12 01:39:54.553468 kernel: QLogic iSCSI HBA Driver
Mar 12 01:39:54.604474 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 12 01:39:54.618999 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 12 01:39:54.648777 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 12 01:39:54.648857 kernel: device-mapper: uevent: version 1.0.3
Mar 12 01:39:54.651836 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 12 01:39:54.703807 kernel: raid6: avx2x4 gen() 30048 MB/s
Mar 12 01:39:54.721820 kernel: raid6: avx2x2 gen() 25110 MB/s
Mar 12 01:39:54.741101 kernel: raid6: avx2x1 gen() 27199 MB/s
Mar 12 01:39:54.741137 kernel: raid6: using algorithm avx2x4 gen() 30048 MB/s
Mar 12 01:39:54.761121 kernel: raid6: .... xor() 4995 MB/s, rmw enabled
Mar 12 01:39:54.761188 kernel: raid6: using avx2x2 recovery algorithm
Mar 12 01:39:54.787809 kernel: xor: automatically using best checksumming function   avx
Mar 12 01:39:54.958793 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 12 01:39:54.972272 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 12 01:39:54.988912 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 12 01:39:55.004636 systemd-udevd[414]: Using default interface naming scheme 'v255'.
Mar 12 01:39:55.009365 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 12 01:39:55.027927 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 12 01:39:55.041218 dracut-pre-trigger[425]: rd.md=0: removing MD RAID activation
Mar 12 01:39:55.078969 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 12 01:39:55.101925 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 12 01:39:55.170051 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 12 01:39:55.185454 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 12 01:39:55.198037 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 12 01:39:55.206276 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 12 01:39:55.206438 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 12 01:39:55.214550 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 12 01:39:55.231957 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 12 01:39:55.256859 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 12 01:39:55.268048 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Mar 12 01:39:55.280781 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 12 01:39:55.298766 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 12 01:39:55.298830 kernel: GPT:9289727 != 19775487
Mar 12 01:39:55.298863 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 12 01:39:55.298874 kernel: GPT:9289727 != 19775487
Mar 12 01:39:55.298900 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 12 01:39:55.298913 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 12 01:39:55.298212 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 12 01:39:55.298343 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 12 01:39:55.312713 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 12 01:39:55.321209 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 12 01:39:55.321431 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 12 01:39:55.329051 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 12 01:39:55.345713 kernel: libata version 3.00 loaded.
Mar 12 01:39:55.345793 kernel: cryptd: max_cpu_qlen set to 1000
Mar 12 01:39:55.349229 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 12 01:39:55.359163 kernel: ahci 0000:00:1f.2: version 3.0
Mar 12 01:39:55.359698 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 12 01:39:55.361845 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 12 01:39:55.366015 kernel: AES CTR mode by8 optimization enabled
Mar 12 01:39:55.366043 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 12 01:39:55.369782 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 12 01:39:55.379175 kernel: scsi host0: ahci
Mar 12 01:39:55.382784 kernel: scsi host1: ahci
Mar 12 01:39:55.386808 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (472)
Mar 12 01:39:55.387763 kernel: scsi host2: ahci
Mar 12 01:39:55.389924 kernel: scsi host3: ahci
Mar 12 01:39:55.391026 kernel: scsi host4: ahci
Mar 12 01:39:55.393690 kernel: BTRFS: device fsid 94537345-7f6b-4b2a-965f-248bd6f0b7eb devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (484)
Mar 12 01:39:55.399494 kernel: scsi host5: ahci
Mar 12 01:39:55.399791 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Mar 12 01:39:55.399804 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Mar 12 01:39:55.399814 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Mar 12 01:39:55.399823 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Mar 12 01:39:55.399837 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Mar 12 01:39:55.399845 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Mar 12 01:39:55.401891 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 12 01:39:55.553324 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 12 01:39:55.567448 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 12 01:39:55.581901 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 12 01:39:55.592431 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 12 01:39:55.596176 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 12 01:39:55.618915 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 12 01:39:55.626838 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 12 01:39:55.637365 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 12 01:39:55.637390 disk-uuid[555]: Primary Header is updated.
Mar 12 01:39:55.637390 disk-uuid[555]: Secondary Entries is updated.
Mar 12 01:39:55.637390 disk-uuid[555]: Secondary Header is updated.
Mar 12 01:39:55.648789 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 12 01:39:55.661303 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 12 01:39:55.711954 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 12 01:39:55.711995 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 12 01:39:55.719858 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 12 01:39:55.719886 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 12 01:39:55.723775 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 12 01:39:55.726874 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 12 01:39:55.726897 kernel: ata3.00: applying bridge limits
Mar 12 01:39:55.731782 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 12 01:39:55.731806 kernel: ata3.00: configured for UDMA/100
Mar 12 01:39:55.736826 kernel: scsi 2:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Mar 12 01:39:55.803416 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 12 01:39:55.803824 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 12 01:39:55.817820 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Mar 12 01:39:56.644785 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 12 01:39:56.644880 disk-uuid[557]: The operation has completed successfully.
Mar 12 01:39:56.676026 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 12 01:39:56.676220 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 12 01:39:56.723004 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 12 01:39:56.730522 sh[593]: Success
Mar 12 01:39:56.747810 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Mar 12 01:39:56.789207 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 12 01:39:56.806197 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 12 01:39:56.810539 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 12 01:39:56.832286 kernel: BTRFS info (device dm-0): first mount of filesystem 94537345-7f6b-4b2a-965f-248bd6f0b7eb
Mar 12 01:39:56.832322 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 12 01:39:56.832339 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 12 01:39:56.835547 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 12 01:39:56.837922 kernel: BTRFS info (device dm-0): using free space tree
Mar 12 01:39:56.849018 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 12 01:39:56.849954 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 12 01:39:56.866989 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 12 01:39:56.870858 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 12 01:39:56.890872 kernel: BTRFS info (device vda6): first mount of filesystem 0ebf6eb2-dc55-4706-86d1-78d37843d203
Mar 12 01:39:56.890904 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 12 01:39:56.890921 kernel: BTRFS info (device vda6): using free space tree
Mar 12 01:39:56.899813 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 12 01:39:56.912511 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 12 01:39:56.920151 kernel: BTRFS info (device vda6): last unmount of filesystem 0ebf6eb2-dc55-4706-86d1-78d37843d203
Mar 12 01:39:56.926981 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 12 01:39:56.939993 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 12 01:39:57.007171 ignition[689]: Ignition 2.19.0
Mar 12 01:39:57.007217 ignition[689]: Stage: fetch-offline
Mar 12 01:39:57.007272 ignition[689]: no configs at "/usr/lib/ignition/base.d"
Mar 12 01:39:57.007290 ignition[689]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 12 01:39:57.007408 ignition[689]: parsed url from cmdline: ""
Mar 12 01:39:57.007415 ignition[689]: no config URL provided
Mar 12 01:39:57.007424 ignition[689]: reading system config file "/usr/lib/ignition/user.ign"
Mar 12 01:39:57.007441 ignition[689]: no config at "/usr/lib/ignition/user.ign"
Mar 12 01:39:57.007483 ignition[689]: op(1): [started]  loading QEMU firmware config module
Mar 12 01:39:57.007492 ignition[689]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 12 01:39:57.020024 ignition[689]: op(1): [finished] loading QEMU firmware config module
Mar 12 01:39:57.048539 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 12 01:39:57.068915 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 12 01:39:57.095612 systemd-networkd[781]: lo: Link UP
Mar 12 01:39:57.095637 systemd-networkd[781]: lo: Gained carrier
Mar 12 01:39:57.097298 systemd-networkd[781]: Enumeration completed
Mar 12 01:39:57.097823 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 12 01:39:57.098217 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 12 01:39:57.098221 systemd-networkd[781]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 12 01:39:57.101297 systemd-networkd[781]: eth0: Link UP
Mar 12 01:39:57.101303 systemd-networkd[781]: eth0: Gained carrier
Mar 12 01:39:57.101312 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 12 01:39:57.103887 systemd[1]: Reached target network.target - Network.
Mar 12 01:39:57.149803 systemd-networkd[781]: eth0: DHCPv4 address 10.0.0.151/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 12 01:39:57.279497 ignition[689]: parsing config with SHA512: 14224dcec87e1d319b4531e3e7d0b09ab66594333be833bb34f9ed1190024451b90d026dd6b28cdf7a5cc8a358c360b48f9199a5c90c6e6e3eadb60633b1e879
Mar 12 01:39:57.283045 unknown[689]: fetched base config from "system"
Mar 12 01:39:57.283260 unknown[689]: fetched user config from "qemu"
Mar 12 01:39:57.283629 ignition[689]: fetch-offline: fetch-offline passed
Mar 12 01:39:57.285629 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 12 01:39:57.283695 ignition[689]: Ignition finished successfully
Mar 12 01:39:57.291781 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 12 01:39:57.310439 systemd-resolved[229]: Detected conflict on linux IN A 10.0.0.151
Mar 12 01:39:57.310485 systemd-resolved[229]: Hostname conflict, changing published hostname from 'linux' to 'linux3'.
Mar 12 01:39:57.311027 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 12 01:39:57.325258 ignition[786]: Ignition 2.19.0
Mar 12 01:39:57.325270 ignition[786]: Stage: kargs
Mar 12 01:39:57.325479 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Mar 12 01:39:57.330665 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 12 01:39:57.325498 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 12 01:39:57.326713 ignition[786]: kargs: kargs passed
Mar 12 01:39:57.326849 ignition[786]: Ignition finished successfully
Mar 12 01:39:57.345683 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 12 01:39:57.381447 ignition[795]: Ignition 2.19.0
Mar 12 01:39:57.381483 ignition[795]: Stage: disks
Mar 12 01:39:57.381844 ignition[795]: no configs at "/usr/lib/ignition/base.d"
Mar 12 01:39:57.381864 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 12 01:39:57.394538 ignition[795]: disks: disks passed
Mar 12 01:39:57.394670 ignition[795]: Ignition finished successfully
Mar 12 01:39:57.401702 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 12 01:39:57.402567 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 12 01:39:57.409637 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 12 01:39:57.418332 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 12 01:39:57.426987 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 12 01:39:57.434552 systemd[1]: Reached target basic.target - Basic System.
Mar 12 01:39:57.454988 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 12 01:39:57.474364 systemd-fsck[804]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 12 01:39:57.480002 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 12 01:39:57.505035 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 12 01:39:57.636837 kernel: EXT4-fs (vda9): mounted filesystem f90926b1-4cc2-4a2d-8c45-4ec584c98779 r/w with ordered data mode. Quota mode: none.
Mar 12 01:39:57.637983 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 12 01:39:57.638703 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 12 01:39:57.660913 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 12 01:39:57.666853 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 12 01:39:57.675612 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (812)
Mar 12 01:39:57.667279 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 12 01:39:57.697147 kernel: BTRFS info (device vda6): first mount of filesystem 0ebf6eb2-dc55-4706-86d1-78d37843d203
Mar 12 01:39:57.697180 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 12 01:39:57.697198 kernel: BTRFS info (device vda6): using free space tree
Mar 12 01:39:57.697213 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 12 01:39:57.667322 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 12 01:39:57.667347 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 12 01:39:57.699247 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 12 01:39:57.735023 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 12 01:39:57.740126 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 12 01:39:57.794999 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory
Mar 12 01:39:57.803857 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory
Mar 12 01:39:57.812499 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory
Mar 12 01:39:57.819858 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 12 01:39:57.950333 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 12 01:39:57.966162 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 12 01:39:57.971095 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 12 01:39:57.981904 kernel: BTRFS info (device vda6): last unmount of filesystem 0ebf6eb2-dc55-4706-86d1-78d37843d203
Mar 12 01:39:57.976052 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 12 01:39:58.001683 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 12 01:39:58.022678 ignition[925]: INFO     : Ignition 2.19.0
Mar 12 01:39:58.022678 ignition[925]: INFO     : Stage: mount
Mar 12 01:39:58.028263 ignition[925]: INFO     : no configs at "/usr/lib/ignition/base.d"
Mar 12 01:39:58.028263 ignition[925]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 12 01:39:58.037713 ignition[925]: INFO     : mount: mount passed
Mar 12 01:39:58.040653 ignition[925]: INFO     : Ignition finished successfully
Mar 12 01:39:58.046487 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 12 01:39:58.064992 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 12 01:39:58.074032 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 12 01:39:58.094859 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (938)
Mar 12 01:39:58.100871 kernel: BTRFS info (device vda6): first mount of filesystem 0ebf6eb2-dc55-4706-86d1-78d37843d203
Mar 12 01:39:58.100923 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 12 01:39:58.100941 kernel: BTRFS info (device vda6): using free space tree
Mar 12 01:39:58.108815 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 12 01:39:58.112283 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 12 01:39:58.139920 ignition[955]: INFO     : Ignition 2.19.0
Mar 12 01:39:58.139920 ignition[955]: INFO     : Stage: files
Mar 12 01:39:58.146872 ignition[955]: INFO     : no configs at "/usr/lib/ignition/base.d"
Mar 12 01:39:58.146872 ignition[955]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 12 01:39:58.146872 ignition[955]: DEBUG    : files: compiled without relabeling support, skipping
Mar 12 01:39:58.146872 ignition[955]: INFO     : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Mar 12 01:39:58.146872 ignition[955]: DEBUG    : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 12 01:39:58.146872 ignition[955]: INFO     : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 12 01:39:58.146872 ignition[955]: INFO     : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Mar 12 01:39:58.178208 ignition[955]: INFO     : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 12 01:39:58.178208 ignition[955]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 12 01:39:58.178208 ignition[955]: INFO     : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 12 01:39:58.147352 unknown[955]: wrote ssh authorized keys file for user: core
Mar 12 01:39:58.203858 ignition[955]: INFO     : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 12 01:39:58.340292 ignition[955]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 12 01:39:58.340292 ignition[955]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/home/core/install.sh"
Mar 12 01:39:58.352182 ignition[955]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 12 01:39:58.352182 ignition[955]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [started]  writing file "/sysroot/home/core/nginx.yaml"
Mar 12 01:39:58.352182 ignition[955]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 12 01:39:58.352182 ignition[955]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 12 01:39:58.352182 ignition[955]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 12 01:39:58.352182 ignition[955]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [started]  writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 12 01:39:58.352182 ignition[955]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 12 01:39:58.352182 ignition[955]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Mar 12 01:39:58.352182 ignition[955]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 12 01:39:58.352182 ignition[955]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 12 01:39:58.352182 ignition[955]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 12 01:39:58.352182 ignition[955]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 12 01:39:58.352182 ignition[955]: INFO     : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Mar 12 01:39:58.454184 systemd-networkd[781]: eth0: Gained IPv6LL
Mar 12 01:39:58.659693 ignition[955]: INFO     : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 12 01:39:59.251364 ignition[955]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 12 01:39:59.251364 ignition[955]: INFO     : files: op(b): [started]  processing unit "prepare-helm.service"
Mar 12 01:39:59.262018 ignition[955]: INFO     : files: op(b): op(c): [started]  writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 12 01:39:59.262018 ignition[955]: INFO     : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 12 01:39:59.262018 ignition[955]: INFO     : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 12 01:39:59.262018 ignition[955]: INFO     : files: op(d): [started]  processing unit "coreos-metadata.service"
Mar 12 01:39:59.262018 ignition[955]: INFO     : files: op(d): op(e): [started]  writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 12 01:39:59.262018 ignition[955]: INFO     : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 12 01:39:59.262018 ignition[955]: INFO     : files: op(d): [finished] processing unit "coreos-metadata.service"
Mar 12 01:39:59.262018 ignition[955]: INFO     : files: op(f): [started]  setting preset to disabled for "coreos-metadata.service"
Mar 12 01:39:59.302641 ignition[955]: INFO     : files: op(f): op(10): [started]  removing enablement symlink(s) for "coreos-metadata.service"
Mar 12 01:39:59.302641 ignition[955]: INFO     : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 12 01:39:59.302641 ignition[955]: INFO     : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 12 01:39:59.302641 ignition[955]: INFO     : files: op(11): [started]  setting preset to enabled for "prepare-helm.service"
Mar 12 01:39:59.302641 ignition[955]: INFO     : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Mar 12 01:39:59.302641 ignition[955]: INFO     : files: createResultFile: createFiles: op(12): [started]  writing file "/sysroot/etc/.ignition-result.json"
Mar 12 01:39:59.302641 ignition[955]: INFO     : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 12 01:39:59.302641 ignition[955]: INFO     : files: files passed
Mar 12 01:39:59.302641 ignition[955]: INFO     : Ignition finished successfully
Mar 12 01:39:59.288234 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 12 01:39:59.319966 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 12 01:39:59.327323 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 12 01:39:59.334939 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 12 01:39:59.369046 initrd-setup-root-after-ignition[983]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 12 01:39:59.335068 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 12 01:39:59.376374 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 12 01:39:59.376374 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 12 01:39:59.347321 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 12 01:39:59.392813 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 12 01:39:59.353209 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 12 01:39:59.373907 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 12 01:39:59.402057 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 12 01:39:59.402193 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 12 01:39:59.408106 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 12 01:39:59.408242 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 12 01:39:59.409308 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 12 01:39:59.410258 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 12 01:39:59.428322 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 12 01:39:59.445957 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 12 01:39:59.458881 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 12 01:39:59.465392 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 12 01:39:59.471978 systemd[1]: Stopped target timers.target - Timer Units.
Mar 12 01:39:59.477168 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 12 01:39:59.479907 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 12 01:39:59.486921 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 12 01:39:59.492800 systemd[1]: Stopped target basic.target - Basic System.
Mar 12 01:39:59.497970 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 12 01:39:59.504152 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 12 01:39:59.510684 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 12 01:39:59.517036 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 12 01:39:59.522872 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 12 01:39:59.529852 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 12 01:39:59.535800 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 12 01:39:59.541668 systemd[1]: Stopped target swap.target - Swaps. Mar 12 01:39:59.546434 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 12 01:39:59.549130 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 12 01:39:59.555299 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 12 01:39:59.561366 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 12 01:39:59.568146 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 12 01:39:59.570844 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 12 01:39:59.578271 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 12 01:39:59.581040 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 12 01:39:59.587214 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 12 01:39:59.590647 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 12 01:39:59.597498 systemd[1]: Stopped target paths.target - Path Units. Mar 12 01:39:59.602609 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 12 01:39:59.607842 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 12 01:39:59.615702 systemd[1]: Stopped target slices.target - Slice Units. 
Mar 12 01:39:59.621458 systemd[1]: Stopped target sockets.target - Socket Units. Mar 12 01:39:59.627084 systemd[1]: iscsid.socket: Deactivated successfully. Mar 12 01:39:59.629662 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 12 01:39:59.635275 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 12 01:39:59.637801 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 12 01:39:59.643645 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 12 01:39:59.646874 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 12 01:39:59.654031 systemd[1]: ignition-files.service: Deactivated successfully. Mar 12 01:39:59.656714 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 12 01:39:59.672181 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 12 01:39:59.677760 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 12 01:39:59.680503 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 12 01:39:59.688297 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 12 01:39:59.693559 ignition[1011]: INFO : Ignition 2.19.0 Mar 12 01:39:59.693559 ignition[1011]: INFO : Stage: umount Mar 12 01:39:59.693559 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 12 01:39:59.693559 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 12 01:39:59.693559 ignition[1011]: INFO : umount: umount passed Mar 12 01:39:59.693559 ignition[1011]: INFO : Ignition finished successfully Mar 12 01:39:59.693613 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 12 01:39:59.693844 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 12 01:39:59.718827 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 12 01:39:59.721913 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Mar 12 01:39:59.730919 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 12 01:39:59.734187 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 12 01:39:59.736961 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 12 01:39:59.744460 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 12 01:39:59.747141 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 12 01:39:59.754617 systemd[1]: Stopped target network.target - Network. Mar 12 01:39:59.759638 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 12 01:39:59.762254 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 12 01:39:59.768030 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 12 01:39:59.768108 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 12 01:39:59.776351 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 12 01:39:59.776407 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 12 01:39:59.779461 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 12 01:39:59.784631 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 12 01:39:59.793451 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 12 01:39:59.799415 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 12 01:39:59.806446 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 12 01:39:59.809288 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 12 01:39:59.809870 systemd-networkd[781]: eth0: DHCPv6 lease lost Mar 12 01:39:59.822513 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 12 01:39:59.825327 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 12 01:39:59.832505 systemd[1]: systemd-networkd.service: Deactivated successfully. 
Mar 12 01:39:59.835327 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 12 01:39:59.843248 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 12 01:39:59.843327 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 12 01:39:59.852108 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 12 01:39:59.852185 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 12 01:39:59.872924 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 12 01:39:59.875908 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 12 01:39:59.875991 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 12 01:39:59.883008 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 12 01:39:59.883060 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 12 01:39:59.889606 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 12 01:39:59.889658 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 12 01:39:59.893317 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 12 01:39:59.893365 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 12 01:39:59.899451 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 12 01:39:59.929868 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 12 01:39:59.930023 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 12 01:39:59.933042 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 12 01:39:59.933202 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 12 01:39:59.939205 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Mar 12 01:39:59.939274 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 12 01:39:59.944194 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 12 01:39:59.944238 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 12 01:39:59.950658 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 12 01:39:59.950713 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 12 01:39:59.965040 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 12 01:39:59.965107 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 12 01:39:59.973522 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 12 01:39:59.973634 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 12 01:40:00.000955 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 12 01:40:00.005815 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 12 01:40:00.005888 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 12 01:40:00.011234 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 12 01:40:00.011285 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 12 01:40:00.012526 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 12 01:40:00.012688 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 12 01:40:00.012872 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 12 01:40:00.014114 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 12 01:40:00.027424 systemd[1]: Switching root. Mar 12 01:40:00.074321 systemd-journald[193]: Journal stopped Mar 12 01:40:01.322695 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). 
Mar 12 01:40:01.322803 kernel: SELinux: policy capability network_peer_controls=1 Mar 12 01:40:01.322822 kernel: SELinux: policy capability open_perms=1 Mar 12 01:40:01.322840 kernel: SELinux: policy capability extended_socket_class=1 Mar 12 01:40:01.322851 kernel: SELinux: policy capability always_check_network=0 Mar 12 01:40:01.322861 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 12 01:40:01.322874 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 12 01:40:01.322884 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 12 01:40:01.322894 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 12 01:40:01.322904 kernel: audit: type=1403 audit(1773279600.252:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 12 01:40:01.322915 systemd[1]: Successfully loaded SELinux policy in 48.361ms. Mar 12 01:40:01.322933 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.386ms. Mar 12 01:40:01.322944 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 12 01:40:01.322955 systemd[1]: Detected virtualization kvm. Mar 12 01:40:01.322966 systemd[1]: Detected architecture x86-64. Mar 12 01:40:01.322979 systemd[1]: Detected first boot. Mar 12 01:40:01.322990 systemd[1]: Initializing machine ID from VM UUID. Mar 12 01:40:01.323002 zram_generator::config[1053]: No configuration found. Mar 12 01:40:01.323017 systemd[1]: Populated /etc with preset unit settings. Mar 12 01:40:01.323028 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 12 01:40:01.323038 systemd[1]: Stopped initrd-switch-root.service - Switch Root. 
Mar 12 01:40:01.323049 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 12 01:40:01.323062 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 12 01:40:01.323075 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 12 01:40:01.323086 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 12 01:40:01.323097 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 12 01:40:01.323107 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 12 01:40:01.323118 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 12 01:40:01.323129 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 12 01:40:01.323140 systemd[1]: Created slice user.slice - User and Session Slice. Mar 12 01:40:01.323150 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 12 01:40:01.323161 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 12 01:40:01.323175 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 12 01:40:01.323185 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 12 01:40:01.323196 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 12 01:40:01.323207 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 12 01:40:01.323218 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 12 01:40:01.323228 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 12 01:40:01.323240 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. 
Mar 12 01:40:01.323251 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 12 01:40:01.323264 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 12 01:40:01.323274 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 12 01:40:01.323285 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 12 01:40:01.323296 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 12 01:40:01.323306 systemd[1]: Reached target slices.target - Slice Units. Mar 12 01:40:01.323317 systemd[1]: Reached target swap.target - Swaps. Mar 12 01:40:01.323328 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 12 01:40:01.323338 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 12 01:40:01.323351 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 12 01:40:01.323362 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 12 01:40:01.323372 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 12 01:40:01.323383 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 12 01:40:01.323393 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 12 01:40:01.323404 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 12 01:40:01.323415 systemd[1]: Mounting media.mount - External Media Directory... Mar 12 01:40:01.323425 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 12 01:40:01.323436 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 12 01:40:01.323449 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 12 01:40:01.323459 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Mar 12 01:40:01.323471 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 12 01:40:01.323482 systemd[1]: Reached target machines.target - Containers. Mar 12 01:40:01.323492 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 12 01:40:01.323503 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 12 01:40:01.323514 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 12 01:40:01.323524 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 12 01:40:01.323535 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 12 01:40:01.323548 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 12 01:40:01.323558 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 12 01:40:01.323569 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 12 01:40:01.323612 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 12 01:40:01.323624 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 12 01:40:01.323635 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 12 01:40:01.323646 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 12 01:40:01.323657 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 12 01:40:01.323670 systemd[1]: Stopped systemd-fsck-usr.service. 
Mar 12 01:40:01.323681 kernel: ACPI: bus type drm_connector registered Mar 12 01:40:01.323691 kernel: loop: module loaded Mar 12 01:40:01.323701 kernel: fuse: init (API version 7.39) Mar 12 01:40:01.323714 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 12 01:40:01.323853 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 12 01:40:01.323868 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 12 01:40:01.323899 systemd-journald[1138]: Collecting audit messages is disabled. Mar 12 01:40:01.323923 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 12 01:40:01.323935 systemd-journald[1138]: Journal started Mar 12 01:40:01.323953 systemd-journald[1138]: Runtime Journal (/run/log/journal/796946323f904ad2984757eabfbb61d7) is 6.0M, max 48.4M, 42.3M free. Mar 12 01:40:00.840369 systemd[1]: Queued start job for default target multi-user.target. Mar 12 01:40:00.867848 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Mar 12 01:40:00.868494 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 12 01:40:00.868956 systemd[1]: systemd-journald.service: Consumed 1.340s CPU time. Mar 12 01:40:01.338361 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 12 01:40:01.342923 systemd[1]: verity-setup.service: Deactivated successfully. Mar 12 01:40:01.342952 systemd[1]: Stopped verity-setup.service. Mar 12 01:40:01.350796 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 12 01:40:01.357637 systemd[1]: Started systemd-journald.service - Journal Service. Mar 12 01:40:01.358898 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 12 01:40:01.362183 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. 
Mar 12 01:40:01.365695 systemd[1]: Mounted media.mount - External Media Directory. Mar 12 01:40:01.368904 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 12 01:40:01.372325 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 12 01:40:01.375892 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 12 01:40:01.379119 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 12 01:40:01.383043 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 12 01:40:01.387249 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 12 01:40:01.387463 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 12 01:40:01.391387 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 12 01:40:01.391627 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 12 01:40:01.395330 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 12 01:40:01.395542 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 12 01:40:01.399108 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 12 01:40:01.399314 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 12 01:40:01.403376 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 12 01:40:01.403614 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 12 01:40:01.407342 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 12 01:40:01.407542 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 12 01:40:01.411189 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 12 01:40:01.415109 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 12 01:40:01.419280 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
Mar 12 01:40:01.435573 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 12 01:40:01.448003 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 12 01:40:01.454245 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 12 01:40:01.457530 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 12 01:40:01.457627 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 12 01:40:01.461665 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Mar 12 01:40:01.466573 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 12 01:40:01.471279 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 12 01:40:01.474316 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 12 01:40:01.476180 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 12 01:40:01.481463 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 12 01:40:01.486244 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 12 01:40:01.490044 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 12 01:40:01.493199 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 12 01:40:01.497190 systemd-journald[1138]: Time spent on flushing to /var/log/journal/796946323f904ad2984757eabfbb61d7 is 18.794ms for 940 entries. Mar 12 01:40:01.497190 systemd-journald[1138]: System Journal (/var/log/journal/796946323f904ad2984757eabfbb61d7) is 8.0M, max 195.6M, 187.6M free. 
Mar 12 01:40:01.529231 systemd-journald[1138]: Received client request to flush runtime journal. Mar 12 01:40:01.495832 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 12 01:40:01.503246 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 12 01:40:01.510929 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 12 01:40:01.516108 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 12 01:40:01.520550 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 12 01:40:01.524873 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 12 01:40:01.529269 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 12 01:40:01.533889 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 12 01:40:01.538350 kernel: loop0: detected capacity change from 0 to 142488 Mar 12 01:40:01.546340 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 12 01:40:01.552363 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 12 01:40:01.568036 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 12 01:40:01.577788 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 12 01:40:01.587043 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Mar 12 01:40:01.594478 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 12 01:40:01.600482 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 12 01:40:01.611003 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Mar 12 01:40:01.614803 kernel: loop1: detected capacity change from 0 to 228704 Mar 12 01:40:01.618053 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 12 01:40:01.618953 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Mar 12 01:40:01.627513 udevadm[1185]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Mar 12 01:40:01.651978 kernel: loop2: detected capacity change from 0 to 140768 Mar 12 01:40:01.661864 systemd-tmpfiles[1187]: ACLs are not supported, ignoring. Mar 12 01:40:01.662246 systemd-tmpfiles[1187]: ACLs are not supported, ignoring. Mar 12 01:40:01.669562 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 12 01:40:01.692791 kernel: loop3: detected capacity change from 0 to 142488 Mar 12 01:40:01.709797 kernel: loop4: detected capacity change from 0 to 228704 Mar 12 01:40:01.724814 kernel: loop5: detected capacity change from 0 to 140768 Mar 12 01:40:01.742701 (sd-merge)[1193]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Mar 12 01:40:01.743377 (sd-merge)[1193]: Merged extensions into '/usr'. Mar 12 01:40:01.749209 systemd[1]: Reloading requested from client PID 1168 ('systemd-sysext') (unit systemd-sysext.service)... Mar 12 01:40:01.749226 systemd[1]: Reloading... Mar 12 01:40:01.807907 zram_generator::config[1216]: No configuration found. Mar 12 01:40:01.887208 ldconfig[1163]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 12 01:40:01.937690 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 12 01:40:01.980295 systemd[1]: Reloading finished in 230 ms. 
Mar 12 01:40:02.019669 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 12 01:40:02.024542 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 12 01:40:02.028487 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 12 01:40:02.055037 systemd[1]: Starting ensure-sysext.service... Mar 12 01:40:02.059452 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 12 01:40:02.064778 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 12 01:40:02.074646 systemd[1]: Reloading requested from client PID 1257 ('systemctl') (unit ensure-sysext.service)... Mar 12 01:40:02.074864 systemd[1]: Reloading... Mar 12 01:40:02.090427 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 12 01:40:02.090956 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 12 01:40:02.091977 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 12 01:40:02.092235 systemd-tmpfiles[1258]: ACLs are not supported, ignoring. Mar 12 01:40:02.092311 systemd-tmpfiles[1258]: ACLs are not supported, ignoring. Mar 12 01:40:02.098184 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot. Mar 12 01:40:02.098220 systemd-tmpfiles[1258]: Skipping /boot Mar 12 01:40:02.101214 systemd-udevd[1259]: Using default interface naming scheme 'v255'. Mar 12 01:40:02.116221 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot. Mar 12 01:40:02.116268 systemd-tmpfiles[1258]: Skipping /boot Mar 12 01:40:02.129999 zram_generator::config[1285]: No configuration found. 
Mar 12 01:40:02.209087 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1299) Mar 12 01:40:02.270775 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Mar 12 01:40:02.270851 kernel: ACPI: button: Power Button [PWRF] Mar 12 01:40:02.290813 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 12 01:40:02.297815 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Mar 12 01:40:02.308935 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 12 01:40:02.315967 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Mar 12 01:40:02.304352 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 12 01:40:02.364763 kernel: mousedev: PS/2 mouse device common for all mice Mar 12 01:40:02.382632 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 12 01:40:02.383079 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 12 01:40:02.388982 systemd[1]: Reloading finished in 313 ms. Mar 12 01:40:02.443068 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 12 01:40:02.453894 kernel: kvm_amd: TSC scaling supported Mar 12 01:40:02.453959 kernel: kvm_amd: Nested Virtualization enabled Mar 12 01:40:02.453981 kernel: kvm_amd: Nested Paging enabled Mar 12 01:40:02.455272 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Mar 12 01:40:02.457837 kernel: kvm_amd: PMU virtualization is disabled Mar 12 01:40:02.494419 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 12 01:40:02.505818 kernel: EDAC MC: Ver: 3.0.0 Mar 12 01:40:02.520137 systemd[1]: Finished ensure-sysext.service. 
Mar 12 01:40:02.536925 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 12 01:40:02.549967 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 12 01:40:02.561959 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 12 01:40:02.567572 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 12 01:40:02.571335 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 12 01:40:02.572961 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 12 01:40:02.585982 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 12 01:40:02.591251 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 12 01:40:02.597134 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 12 01:40:02.599874 lvm[1360]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 12 01:40:02.604990 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 12 01:40:02.608169 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 12 01:40:02.610095 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 12 01:40:02.615369 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 12 01:40:02.620925 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 12 01:40:02.626069 augenrules[1381]: No rules Mar 12 01:40:02.628399 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 12 01:40:02.635083 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
Mar 12 01:40:02.640911 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 12 01:40:02.646365 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 12 01:40:02.651149 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 12 01:40:02.653531 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 12 01:40:02.658445 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 12 01:40:02.663479 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 12 01:40:02.663841 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 12 01:40:02.667989 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 12 01:40:02.668176 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 12 01:40:02.672340 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 12 01:40:02.672543 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 12 01:40:02.677456 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 12 01:40:02.677769 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 12 01:40:02.683000 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 12 01:40:02.688626 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 12 01:40:02.693364 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 12 01:40:02.708986 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 12 01:40:02.720997 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 12 01:40:02.726829 lvm[1403]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 12 01:40:02.879273 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 12 01:40:02.879389 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 12 01:40:02.888990 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 12 01:40:02.896648 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 12 01:40:02.901225 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 12 01:40:02.902488 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 12 01:40:02.907546 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 12 01:40:02.911997 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 12 01:40:02.917133 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 12 01:40:02.957537 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 12 01:40:03.023079 systemd-networkd[1379]: lo: Link UP
Mar 12 01:40:03.023090 systemd-networkd[1379]: lo: Gained carrier
Mar 12 01:40:03.024994 systemd-networkd[1379]: Enumeration completed
Mar 12 01:40:03.025155 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 12 01:40:03.026101 systemd-networkd[1379]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 12 01:40:03.026132 systemd-networkd[1379]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 12 01:40:03.027350 systemd-networkd[1379]: eth0: Link UP
Mar 12 01:40:03.027375 systemd-networkd[1379]: eth0: Gained carrier
Mar 12 01:40:03.027387 systemd-networkd[1379]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 12 01:40:03.035167 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 12 01:40:03.040860 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 12 01:40:03.044770 systemd[1]: Reached target time-set.target - System Time Set.
Mar 12 01:40:03.045851 systemd-networkd[1379]: eth0: DHCPv4 address 10.0.0.151/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 12 01:40:03.046535 systemd-timesyncd[1387]: Network configuration changed, trying to establish connection.
Mar 12 01:40:04.352411 systemd-timesyncd[1387]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Mar 12 01:40:04.352520 systemd-timesyncd[1387]: Initial clock synchronization to Thu 2026-03-12 01:40:04.352272 UTC.
Mar 12 01:40:04.357280 systemd-resolved[1386]: Positive Trust Anchors:
Mar 12 01:40:04.357325 systemd-resolved[1386]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 12 01:40:04.357352 systemd-resolved[1386]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 12 01:40:04.361501 systemd-resolved[1386]: Defaulting to hostname 'linux'.
Mar 12 01:40:04.363801 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 12 01:40:04.367848 systemd[1]: Reached target network.target - Network.
Mar 12 01:40:04.371522 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 12 01:40:04.376066 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 12 01:40:04.379598 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 12 01:40:04.383888 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 12 01:40:04.388543 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 12 01:40:04.392265 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 12 01:40:04.396311 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 12 01:40:04.400184 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 12 01:40:04.400247 systemd[1]: Reached target paths.target - Path Units.
Mar 12 01:40:04.403332 systemd[1]: Reached target timers.target - Timer Units.
Mar 12 01:40:04.407276 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 12 01:40:04.413391 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 12 01:40:04.429548 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 12 01:40:04.433348 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 12 01:40:04.436767 systemd[1]: Reached target sockets.target - Socket Units.
Mar 12 01:40:04.439783 systemd[1]: Reached target basic.target - Basic System.
Mar 12 01:40:04.443436 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 12 01:40:04.443516 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 12 01:40:04.445466 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 12 01:40:04.450820 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 12 01:40:04.455056 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 12 01:40:04.460195 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 12 01:40:04.463821 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 12 01:40:04.467019 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 12 01:40:04.472252 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 12 01:40:04.475602 jq[1425]: false
Mar 12 01:40:04.478373 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 12 01:40:04.484900 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 12 01:40:04.495792 dbus-daemon[1424]: [system] SELinux support is enabled
Mar 12 01:40:04.496066 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 12 01:40:04.500987 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 12 01:40:04.501469 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 12 01:40:04.508325 systemd[1]: Starting update-engine.service - Update Engine...
Mar 12 01:40:04.509166 extend-filesystems[1426]: Found loop3
Mar 12 01:40:04.516167 extend-filesystems[1426]: Found loop4
Mar 12 01:40:04.516167 extend-filesystems[1426]: Found loop5
Mar 12 01:40:04.516167 extend-filesystems[1426]: Found sr0
Mar 12 01:40:04.516167 extend-filesystems[1426]: Found vda
Mar 12 01:40:04.516167 extend-filesystems[1426]: Found vda1
Mar 12 01:40:04.516167 extend-filesystems[1426]: Found vda2
Mar 12 01:40:04.516167 extend-filesystems[1426]: Found vda3
Mar 12 01:40:04.516167 extend-filesystems[1426]: Found usr
Mar 12 01:40:04.516167 extend-filesystems[1426]: Found vda4
Mar 12 01:40:04.516167 extend-filesystems[1426]: Found vda6
Mar 12 01:40:04.516167 extend-filesystems[1426]: Found vda7
Mar 12 01:40:04.516167 extend-filesystems[1426]: Found vda9
Mar 12 01:40:04.516167 extend-filesystems[1426]: Checking size of /dev/vda9
Mar 12 01:40:04.537938 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 12 01:40:04.549657 update_engine[1440]: I20260312 01:40:04.543763 1440 main.cc:92] Flatcar Update Engine starting
Mar 12 01:40:04.549657 update_engine[1440]: I20260312 01:40:04.548327 1440 update_check_scheduler.cc:74] Next update check in 8m53s
Mar 12 01:40:04.551633 extend-filesystems[1426]: Resized partition /dev/vda9
Mar 12 01:40:04.560835 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1305)
Mar 12 01:40:04.562462 extend-filesystems[1447]: resize2fs 1.47.1 (20-May-2024)
Mar 12 01:40:04.576183 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Mar 12 01:40:04.578927 systemd-logind[1437]: Watching system buttons on /dev/input/event1 (Power Button)
Mar 12 01:40:04.578979 systemd-logind[1437]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 12 01:40:04.581025 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 12 01:40:04.584006 systemd-logind[1437]: New seat seat0.
Mar 12 01:40:04.586288 jq[1444]: true
Mar 12 01:40:04.594319 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 12 01:40:04.605313 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 12 01:40:04.605696 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 12 01:40:04.606367 systemd[1]: motdgen.service: Deactivated successfully.
Mar 12 01:40:04.606821 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 12 01:40:04.613465 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 12 01:40:04.614008 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 12 01:40:04.619778 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Mar 12 01:40:04.632908 (ntainerd)[1452]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 12 01:40:04.647331 systemd[1]: Started update-engine.service - Update Engine.
Mar 12 01:40:04.640478 dbus-daemon[1424]: [system] Successfully activated service 'org.freedesktop.systemd1'
Mar 12 01:40:04.648008 jq[1451]: true
Mar 12 01:40:04.651486 sshd_keygen[1441]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 12 01:40:04.652146 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 12 01:40:04.652280 extend-filesystems[1447]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 12 01:40:04.652280 extend-filesystems[1447]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 12 01:40:04.652280 extend-filesystems[1447]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Mar 12 01:40:04.657249 extend-filesystems[1426]: Resized filesystem in /dev/vda9
Mar 12 01:40:04.652312 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 12 01:40:04.669998 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 12 01:40:04.670116 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 12 01:40:04.685178 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 12 01:40:04.694225 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 12 01:40:04.694516 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 12 01:40:04.700991 tar[1450]: linux-amd64/LICENSE
Mar 12 01:40:04.700991 tar[1450]: linux-amd64/helm
Mar 12 01:40:04.701287 bash[1482]: Updated "/home/core/.ssh/authorized_keys"
Mar 12 01:40:04.702840 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 12 01:40:04.708994 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 12 01:40:04.721012 locksmithd[1483]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 12 01:40:04.722283 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 12 01:40:04.725637 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 12 01:40:04.734131 systemd[1]: issuegen.service: Deactivated successfully.
Mar 12 01:40:04.734426 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 12 01:40:04.752166 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 12 01:40:04.770388 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 12 01:40:04.785219 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 12 01:40:04.792377 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 12 01:40:04.796920 systemd[1]: Reached target getty.target - Login Prompts.
Mar 12 01:40:04.845472 containerd[1452]: time="2026-03-12T01:40:04.845371051Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Mar 12 01:40:04.871433 containerd[1452]: time="2026-03-12T01:40:04.871305258Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 12 01:40:04.875188 containerd[1452]: time="2026-03-12T01:40:04.875089984Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 12 01:40:04.875188 containerd[1452]: time="2026-03-12T01:40:04.875158392Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 12 01:40:04.875188 containerd[1452]: time="2026-03-12T01:40:04.875183469Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 12 01:40:04.875480 containerd[1452]: time="2026-03-12T01:40:04.875413398Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 12 01:40:04.875480 containerd[1452]: time="2026-03-12T01:40:04.875469623Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 12 01:40:04.875693 containerd[1452]: time="2026-03-12T01:40:04.875617850Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 12 01:40:04.875693 containerd[1452]: time="2026-03-12T01:40:04.875670097Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 12 01:40:04.876098 containerd[1452]: time="2026-03-12T01:40:04.876031832Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 12 01:40:04.876098 containerd[1452]: time="2026-03-12T01:40:04.876082026Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 12 01:40:04.876158 containerd[1452]: time="2026-03-12T01:40:04.876102304Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 12 01:40:04.876158 containerd[1452]: time="2026-03-12T01:40:04.876128202Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 12 01:40:04.876318 containerd[1452]: time="2026-03-12T01:40:04.876256682Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 12 01:40:04.876817 containerd[1452]: time="2026-03-12T01:40:04.876768978Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 12 01:40:04.877086 containerd[1452]: time="2026-03-12T01:40:04.877012854Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 12 01:40:04.877086 containerd[1452]: time="2026-03-12T01:40:04.877061444Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 12 01:40:04.877261 containerd[1452]: time="2026-03-12T01:40:04.877194483Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 12 01:40:04.877342 containerd[1452]: time="2026-03-12T01:40:04.877299419Z" level=info msg="metadata content store policy set" policy=shared
Mar 12 01:40:04.884795 containerd[1452]: time="2026-03-12T01:40:04.884613161Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 12 01:40:04.884962 containerd[1452]: time="2026-03-12T01:40:04.884918130Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 12 01:40:04.885152 containerd[1452]: time="2026-03-12T01:40:04.885043825Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 12 01:40:04.885152 containerd[1452]: time="2026-03-12T01:40:04.885114868Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 12 01:40:04.885152 containerd[1452]: time="2026-03-12T01:40:04.885144493Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 12 01:40:04.885457 containerd[1452]: time="2026-03-12T01:40:04.885340228Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 12 01:40:04.888042 containerd[1452]: time="2026-03-12T01:40:04.886468815Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 12 01:40:04.888042 containerd[1452]: time="2026-03-12T01:40:04.886695178Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 12 01:40:04.888042 containerd[1452]: time="2026-03-12T01:40:04.886768645Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 12 01:40:04.888042 containerd[1452]: time="2026-03-12T01:40:04.886785146Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 12 01:40:04.888042 containerd[1452]: time="2026-03-12T01:40:04.886803390Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 12 01:40:04.888042 containerd[1452]: time="2026-03-12T01:40:04.886885483Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 12 01:40:04.888042 containerd[1452]: time="2026-03-12T01:40:04.886914107Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 12 01:40:04.888042 containerd[1452]: time="2026-03-12T01:40:04.886928985Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 12 01:40:04.888042 containerd[1452]: time="2026-03-12T01:40:04.886944994Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 12 01:40:04.888042 containerd[1452]: time="2026-03-12T01:40:04.887089394Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 12 01:40:04.888042 containerd[1452]: time="2026-03-12T01:40:04.887107548Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 12 01:40:04.888042 containerd[1452]: time="2026-03-12T01:40:04.887123407Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 12 01:40:04.888042 containerd[1452]: time="2026-03-12T01:40:04.887188179Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 12 01:40:04.888042 containerd[1452]: time="2026-03-12T01:40:04.887205331Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 12 01:40:04.889083 containerd[1452]: time="2026-03-12T01:40:04.887222002Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 12 01:40:04.889083 containerd[1452]: time="2026-03-12T01:40:04.887277655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 12 01:40:04.889083 containerd[1452]: time="2026-03-12T01:40:04.887445900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 12 01:40:04.889083 containerd[1452]: time="2026-03-12T01:40:04.887488940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 12 01:40:04.889083 containerd[1452]: time="2026-03-12T01:40:04.887516352Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 12 01:40:04.889083 containerd[1452]: time="2026-03-12T01:40:04.887544985Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 12 01:40:04.889083 containerd[1452]: time="2026-03-12T01:40:04.887622489Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 12 01:40:04.889083 containerd[1452]: time="2026-03-12T01:40:04.887875983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 12 01:40:04.889083 containerd[1452]: time="2026-03-12T01:40:04.887904816Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 12 01:40:04.889083 containerd[1452]: time="2026-03-12T01:40:04.887931577Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 12 01:40:04.889083 containerd[1452]: time="2026-03-12T01:40:04.887957215Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 12 01:40:04.889083 containerd[1452]: time="2026-03-12T01:40:04.887990246Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 12 01:40:04.889083 containerd[1452]: time="2026-03-12T01:40:04.888034829Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 12 01:40:04.889083 containerd[1452]: time="2026-03-12T01:40:04.888061318Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 12 01:40:04.889083 containerd[1452]: time="2026-03-12T01:40:04.888080394Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 12 01:40:04.889464 containerd[1452]: time="2026-03-12T01:40:04.888297550Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 12 01:40:04.889464 containerd[1452]: time="2026-03-12T01:40:04.888327035Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 12 01:40:04.889464 containerd[1452]: time="2026-03-12T01:40:04.888345069Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 12 01:40:04.889464 containerd[1452]: time="2026-03-12T01:40:04.888361539Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 12 01:40:04.889464 containerd[1452]: time="2026-03-12T01:40:04.888375525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 12 01:40:04.889464 containerd[1452]: time="2026-03-12T01:40:04.888393569Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 12 01:40:04.889464 containerd[1452]: time="2026-03-12T01:40:04.888409259Z" level=info msg="NRI interface is disabled by configuration."
Mar 12 01:40:04.889464 containerd[1452]: time="2026-03-12T01:40:04.888424968Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 12 01:40:04.889814 containerd[1452]: time="2026-03-12T01:40:04.888939388Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 12 01:40:04.889814 containerd[1452]: time="2026-03-12T01:40:04.889033884Z" level=info msg="Connect containerd service"
Mar 12 01:40:04.889814 containerd[1452]: time="2026-03-12T01:40:04.889090270Z" level=info msg="using legacy CRI server"
Mar 12 01:40:04.889814 containerd[1452]: time="2026-03-12T01:40:04.889105388Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 12 01:40:04.889814 containerd[1452]: time="2026-03-12T01:40:04.889237124Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 12 01:40:04.890400 containerd[1452]: time="2026-03-12T01:40:04.890272316Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 12 01:40:04.890651 containerd[1452]: time="2026-03-12T01:40:04.890478431Z" level=info msg="Start subscribing containerd event"
Mar 12 01:40:04.890651 containerd[1452]: time="2026-03-12T01:40:04.890544104Z" level=info msg="Start recovering state"
Mar 12 01:40:04.890775 containerd[1452]: time="2026-03-12T01:40:04.890656744Z" level=info msg="Start event monitor"
Mar 12 01:40:04.890775 containerd[1452]: time="2026-03-12T01:40:04.890675880Z" level=info msg="Start snapshots syncer"
Mar 12 01:40:04.890775 containerd[1452]: time="2026-03-12T01:40:04.890685388Z" level=info msg="Start cni network conf syncer for default"
Mar 12 01:40:04.890775 containerd[1452]: time="2026-03-12T01:40:04.890692591Z" level=info msg="Start streaming server"
Mar 12 01:40:04.890937 containerd[1452]: time="2026-03-12T01:40:04.890914856Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 12 01:40:04.891027 containerd[1452]: time="2026-03-12T01:40:04.890994425Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 12 01:40:04.891234 containerd[1452]: time="2026-03-12T01:40:04.891179871Z" level=info msg="containerd successfully booted in 0.047634s"
Mar 12 01:40:04.891268 systemd[1]: Started containerd.service - containerd container runtime.
Mar 12 01:40:05.125438 tar[1450]: linux-amd64/README.md
Mar 12 01:40:05.144471 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 12 01:40:06.350143 systemd-networkd[1379]: eth0: Gained IPv6LL Mar 12 01:40:06.354011 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 12 01:40:06.358646 systemd[1]: Reached target network-online.target - Network is Online. Mar 12 01:40:06.373067 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 12 01:40:06.377895 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 01:40:06.383033 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 12 01:40:06.410862 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 12 01:40:06.411165 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 12 01:40:06.415067 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 12 01:40:06.420888 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 12 01:40:07.251424 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 01:40:07.256201 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 12 01:40:07.258444 (kubelet)[1535]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 12 01:40:07.264049 systemd[1]: Startup finished in 1.177s (kernel) + 6.517s (initrd) + 5.754s (userspace) = 13.449s. 
Mar 12 01:40:07.769910 kubelet[1535]: E0312 01:40:07.769795 1535 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 12 01:40:07.773539 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 12 01:40:07.773837 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 12 01:40:07.774170 systemd[1]: kubelet.service: Consumed 1.046s CPU time. Mar 12 01:40:09.070550 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 12 01:40:09.084105 systemd[1]: Started sshd@0-10.0.0.151:22-10.0.0.1:59536.service - OpenSSH per-connection server daemon (10.0.0.1:59536). Mar 12 01:40:09.133142 sshd[1548]: Accepted publickey for core from 10.0.0.1 port 59536 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME Mar 12 01:40:09.135148 sshd[1548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:40:09.144772 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 12 01:40:09.159806 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 12 01:40:09.161765 systemd-logind[1437]: New session 1 of user core. Mar 12 01:40:09.174201 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 12 01:40:09.187039 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 12 01:40:09.190411 (systemd)[1552]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 12 01:40:09.295964 systemd[1552]: Queued start job for default target default.target. Mar 12 01:40:09.305327 systemd[1552]: Created slice app.slice - User Application Slice. Mar 12 01:40:09.305385 systemd[1552]: Reached target paths.target - Paths. 
Mar 12 01:40:09.305399 systemd[1552]: Reached target timers.target - Timers. Mar 12 01:40:09.307347 systemd[1552]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 12 01:40:09.326105 systemd[1552]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 12 01:40:09.326301 systemd[1552]: Reached target sockets.target - Sockets. Mar 12 01:40:09.326356 systemd[1552]: Reached target basic.target - Basic System. Mar 12 01:40:09.326408 systemd[1552]: Reached target default.target - Main User Target. Mar 12 01:40:09.326455 systemd[1552]: Startup finished in 128ms. Mar 12 01:40:09.326829 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 12 01:40:09.344931 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 12 01:40:09.417104 systemd[1]: Started sshd@1-10.0.0.151:22-10.0.0.1:59548.service - OpenSSH per-connection server daemon (10.0.0.1:59548). Mar 12 01:40:09.478359 sshd[1563]: Accepted publickey for core from 10.0.0.1 port 59548 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME Mar 12 01:40:09.480397 sshd[1563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:40:09.486024 systemd-logind[1437]: New session 2 of user core. Mar 12 01:40:09.495901 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 12 01:40:09.553357 sshd[1563]: pam_unix(sshd:session): session closed for user core Mar 12 01:40:09.571310 systemd[1]: sshd@1-10.0.0.151:22-10.0.0.1:59548.service: Deactivated successfully. Mar 12 01:40:09.573290 systemd[1]: session-2.scope: Deactivated successfully. Mar 12 01:40:09.575146 systemd-logind[1437]: Session 2 logged out. Waiting for processes to exit. Mar 12 01:40:09.583051 systemd[1]: Started sshd@2-10.0.0.151:22-10.0.0.1:59552.service - OpenSSH per-connection server daemon (10.0.0.1:59552). Mar 12 01:40:09.584780 systemd-logind[1437]: Removed session 2. 
Mar 12 01:40:09.612214 sshd[1570]: Accepted publickey for core from 10.0.0.1 port 59552 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME Mar 12 01:40:09.613821 sshd[1570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:40:09.619117 systemd-logind[1437]: New session 3 of user core. Mar 12 01:40:09.630860 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 12 01:40:09.682843 sshd[1570]: pam_unix(sshd:session): session closed for user core Mar 12 01:40:09.695214 systemd[1]: sshd@2-10.0.0.151:22-10.0.0.1:59552.service: Deactivated successfully. Mar 12 01:40:09.697116 systemd[1]: session-3.scope: Deactivated successfully. Mar 12 01:40:09.698633 systemd-logind[1437]: Session 3 logged out. Waiting for processes to exit. Mar 12 01:40:09.705072 systemd[1]: Started sshd@3-10.0.0.151:22-10.0.0.1:59556.service - OpenSSH per-connection server daemon (10.0.0.1:59556). Mar 12 01:40:09.706199 systemd-logind[1437]: Removed session 3. Mar 12 01:40:09.732413 sshd[1577]: Accepted publickey for core from 10.0.0.1 port 59556 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME Mar 12 01:40:09.733926 sshd[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:40:09.738794 systemd-logind[1437]: New session 4 of user core. Mar 12 01:40:09.748880 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 12 01:40:09.809438 sshd[1577]: pam_unix(sshd:session): session closed for user core Mar 12 01:40:09.818644 systemd[1]: sshd@3-10.0.0.151:22-10.0.0.1:59556.service: Deactivated successfully. Mar 12 01:40:09.821320 systemd[1]: session-4.scope: Deactivated successfully. Mar 12 01:40:09.823409 systemd-logind[1437]: Session 4 logged out. Waiting for processes to exit. Mar 12 01:40:09.843148 systemd[1]: Started sshd@4-10.0.0.151:22-10.0.0.1:59564.service - OpenSSH per-connection server daemon (10.0.0.1:59564). Mar 12 01:40:09.844349 systemd-logind[1437]: Removed session 4. 
Mar 12 01:40:09.871205 sshd[1584]: Accepted publickey for core from 10.0.0.1 port 59564 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME Mar 12 01:40:09.872882 sshd[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:40:09.879191 systemd-logind[1437]: New session 5 of user core. Mar 12 01:40:09.892881 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 12 01:40:09.955888 sudo[1587]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 12 01:40:09.956246 sudo[1587]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 12 01:40:09.971832 sudo[1587]: pam_unix(sudo:session): session closed for user root Mar 12 01:40:09.973952 sshd[1584]: pam_unix(sshd:session): session closed for user core Mar 12 01:40:09.983469 systemd[1]: sshd@4-10.0.0.151:22-10.0.0.1:59564.service: Deactivated successfully. Mar 12 01:40:09.985312 systemd[1]: session-5.scope: Deactivated successfully. Mar 12 01:40:09.986955 systemd-logind[1437]: Session 5 logged out. Waiting for processes to exit. Mar 12 01:40:09.998008 systemd[1]: Started sshd@5-10.0.0.151:22-10.0.0.1:59576.service - OpenSSH per-connection server daemon (10.0.0.1:59576). Mar 12 01:40:09.999067 systemd-logind[1437]: Removed session 5. Mar 12 01:40:10.033461 sshd[1592]: Accepted publickey for core from 10.0.0.1 port 59576 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME Mar 12 01:40:10.035312 sshd[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:40:10.041569 systemd-logind[1437]: New session 6 of user core. Mar 12 01:40:10.055003 systemd[1]: Started session-6.scope - Session 6 of User core. 
Mar 12 01:40:10.114238 sudo[1596]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 12 01:40:10.114872 sudo[1596]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 12 01:40:10.120952 sudo[1596]: pam_unix(sudo:session): session closed for user root Mar 12 01:40:10.127939 sudo[1595]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 12 01:40:10.128315 sudo[1595]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 12 01:40:10.151083 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 12 01:40:10.153427 auditctl[1599]: No rules Mar 12 01:40:10.154112 systemd[1]: audit-rules.service: Deactivated successfully. Mar 12 01:40:10.154455 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 12 01:40:10.158382 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 12 01:40:10.193181 augenrules[1617]: No rules Mar 12 01:40:10.194891 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 12 01:40:10.196154 sudo[1595]: pam_unix(sudo:session): session closed for user root Mar 12 01:40:10.198540 sshd[1592]: pam_unix(sshd:session): session closed for user core Mar 12 01:40:10.213558 systemd[1]: sshd@5-10.0.0.151:22-10.0.0.1:59576.service: Deactivated successfully. Mar 12 01:40:10.215842 systemd[1]: session-6.scope: Deactivated successfully. Mar 12 01:40:10.217486 systemd-logind[1437]: Session 6 logged out. Waiting for processes to exit. Mar 12 01:40:10.219196 systemd[1]: Started sshd@6-10.0.0.151:22-10.0.0.1:38128.service - OpenSSH per-connection server daemon (10.0.0.1:38128). Mar 12 01:40:10.220136 systemd-logind[1437]: Removed session 6. 
Mar 12 01:40:10.264085 sshd[1625]: Accepted publickey for core from 10.0.0.1 port 38128 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME Mar 12 01:40:10.265799 sshd[1625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:40:10.270998 systemd-logind[1437]: New session 7 of user core. Mar 12 01:40:10.286976 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 12 01:40:10.343026 sudo[1628]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 12 01:40:10.343519 sudo[1628]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 12 01:40:10.640122 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 12 01:40:10.640275 (dockerd)[1646]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 12 01:40:10.922023 dockerd[1646]: time="2026-03-12T01:40:10.921883037Z" level=info msg="Starting up" Mar 12 01:40:11.184523 dockerd[1646]: time="2026-03-12T01:40:11.184351542Z" level=info msg="Loading containers: start." Mar 12 01:40:11.349799 kernel: Initializing XFRM netlink socket Mar 12 01:40:11.456563 systemd-networkd[1379]: docker0: Link UP Mar 12 01:40:11.493897 dockerd[1646]: time="2026-03-12T01:40:11.493819712Z" level=info msg="Loading containers: done." Mar 12 01:40:11.513475 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4149961515-merged.mount: Deactivated successfully. 
Mar 12 01:40:11.515390 dockerd[1646]: time="2026-03-12T01:40:11.515313219Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 12 01:40:11.515493 dockerd[1646]: time="2026-03-12T01:40:11.515455514Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 12 01:40:11.515697 dockerd[1646]: time="2026-03-12T01:40:11.515643245Z" level=info msg="Daemon has completed initialization" Mar 12 01:40:11.563269 dockerd[1646]: time="2026-03-12T01:40:11.563162063Z" level=info msg="API listen on /run/docker.sock" Mar 12 01:40:11.563441 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 12 01:40:12.110796 containerd[1452]: time="2026-03-12T01:40:12.110686017Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\"" Mar 12 01:40:12.701356 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2018274349.mount: Deactivated successfully. 
Mar 12 01:40:13.936535 containerd[1452]: time="2026-03-12T01:40:13.936410823Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:40:13.937554 containerd[1452]: time="2026-03-12T01:40:13.937446075Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=30116186" Mar 12 01:40:13.938839 containerd[1452]: time="2026-03-12T01:40:13.938772693Z" level=info msg="ImageCreate event name:\"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:40:13.942783 containerd[1452]: time="2026-03-12T01:40:13.942657626Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:40:13.944204 containerd[1452]: time="2026-03-12T01:40:13.944146144Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"30112785\" in 1.833332459s" Mar 12 01:40:13.944204 containerd[1452]: time="2026-03-12T01:40:13.944199053Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\"" Mar 12 01:40:13.945000 containerd[1452]: time="2026-03-12T01:40:13.944960855Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\"" Mar 12 01:40:15.193307 containerd[1452]: time="2026-03-12T01:40:15.193207387Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:40:15.194841 containerd[1452]: time="2026-03-12T01:40:15.194772900Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=26021810" Mar 12 01:40:15.196427 containerd[1452]: time="2026-03-12T01:40:15.196402202Z" level=info msg="ImageCreate event name:\"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:40:15.201416 containerd[1452]: time="2026-03-12T01:40:15.201289606Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:40:15.203064 containerd[1452]: time="2026-03-12T01:40:15.202985973Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"27678758\" in 1.25797261s" Mar 12 01:40:15.203064 containerd[1452]: time="2026-03-12T01:40:15.203046687Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\"" Mar 12 01:40:15.203977 containerd[1452]: time="2026-03-12T01:40:15.203862840Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\"" Mar 12 01:40:16.488511 containerd[1452]: time="2026-03-12T01:40:16.488340086Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:40:16.489564 containerd[1452]: time="2026-03-12T01:40:16.489509754Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=20162746" Mar 12 01:40:16.491460 containerd[1452]: time="2026-03-12T01:40:16.491365845Z" level=info msg="ImageCreate event name:\"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:40:16.496683 containerd[1452]: time="2026-03-12T01:40:16.496566123Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:40:16.498474 containerd[1452]: time="2026-03-12T01:40:16.498380741Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"21819712\" in 1.294460393s" Mar 12 01:40:16.498474 containerd[1452]: time="2026-03-12T01:40:16.498446203Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\"" Mar 12 01:40:16.499317 containerd[1452]: time="2026-03-12T01:40:16.499255407Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\"" Mar 12 01:40:17.476564 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3803049242.mount: Deactivated successfully. 
Mar 12 01:40:17.866566 containerd[1452]: time="2026-03-12T01:40:17.866393009Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:40:17.867476 containerd[1452]: time="2026-03-12T01:40:17.867397575Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=31828647" Mar 12 01:40:17.868937 containerd[1452]: time="2026-03-12T01:40:17.868882271Z" level=info msg="ImageCreate event name:\"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:40:17.871696 containerd[1452]: time="2026-03-12T01:40:17.871647063Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:40:17.872659 containerd[1452]: time="2026-03-12T01:40:17.872547138Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"31827666\" in 1.373227352s" Mar 12 01:40:17.872659 containerd[1452]: time="2026-03-12T01:40:17.872648938Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\"" Mar 12 01:40:17.873408 containerd[1452]: time="2026-03-12T01:40:17.873367790Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Mar 12 01:40:17.901662 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 12 01:40:17.911947 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 12 01:40:18.081145 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 01:40:18.088905 (kubelet)[1880]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 12 01:40:18.161476 kubelet[1880]: E0312 01:40:18.161345 1880 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 12 01:40:18.167196 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 12 01:40:18.167429 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 12 01:40:18.498200 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3523561331.mount: Deactivated successfully. Mar 12 01:40:19.479169 containerd[1452]: time="2026-03-12T01:40:19.479038436Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:40:19.479892 containerd[1452]: time="2026-03-12T01:40:19.479835167Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Mar 12 01:40:19.481334 containerd[1452]: time="2026-03-12T01:40:19.481200052Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:40:19.485777 containerd[1452]: time="2026-03-12T01:40:19.485675547Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:40:19.487081 containerd[1452]: 
time="2026-03-12T01:40:19.487006031Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.613594219s" Mar 12 01:40:19.487081 containerd[1452]: time="2026-03-12T01:40:19.487059141Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Mar 12 01:40:19.487504 containerd[1452]: time="2026-03-12T01:40:19.487470268Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 12 01:40:19.866545 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3826618019.mount: Deactivated successfully. Mar 12 01:40:19.876118 containerd[1452]: time="2026-03-12T01:40:19.876026659Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:40:19.877180 containerd[1452]: time="2026-03-12T01:40:19.877135592Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Mar 12 01:40:19.878692 containerd[1452]: time="2026-03-12T01:40:19.878579577Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:40:19.882024 containerd[1452]: time="2026-03-12T01:40:19.881941680Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:40:19.883590 containerd[1452]: time="2026-03-12T01:40:19.883530428Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id 
\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 395.829349ms" Mar 12 01:40:19.883689 containerd[1452]: time="2026-03-12T01:40:19.883593986Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Mar 12 01:40:19.884272 containerd[1452]: time="2026-03-12T01:40:19.884225624Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Mar 12 01:40:20.314175 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount744749572.mount: Deactivated successfully. Mar 12 01:40:21.238721 containerd[1452]: time="2026-03-12T01:40:21.238665520Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:40:21.239684 containerd[1452]: time="2026-03-12T01:40:21.239645003Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718840" Mar 12 01:40:21.241761 containerd[1452]: time="2026-03-12T01:40:21.241572297Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:40:21.245153 containerd[1452]: time="2026-03-12T01:40:21.245046263Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:40:21.246383 containerd[1452]: time="2026-03-12T01:40:21.246337814Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest 
\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 1.362086241s" Mar 12 01:40:21.246436 containerd[1452]: time="2026-03-12T01:40:21.246382958Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Mar 12 01:40:24.526333 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 01:40:24.540168 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 01:40:24.568102 systemd[1]: Reloading requested from client PID 2038 ('systemctl') (unit session-7.scope)... Mar 12 01:40:24.568146 systemd[1]: Reloading... Mar 12 01:40:24.646764 zram_generator::config[2074]: No configuration found. Mar 12 01:40:24.804777 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 12 01:40:24.874644 systemd[1]: Reloading finished in 306 ms. Mar 12 01:40:24.923496 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 12 01:40:24.923650 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 12 01:40:24.923987 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 01:40:24.926839 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 01:40:25.085082 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 01:40:25.104121 (kubelet)[2126]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 12 01:40:25.151564 kubelet[2126]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 12 01:40:25.151564 kubelet[2126]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 12 01:40:25.151564 kubelet[2126]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 12 01:40:25.152307 kubelet[2126]: I0312 01:40:25.152221 2126 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 12 01:40:25.657815 kubelet[2126]: I0312 01:40:25.657655 2126 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 12 01:40:25.657815 kubelet[2126]: I0312 01:40:25.657742 2126 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 12 01:40:25.657967 kubelet[2126]: I0312 01:40:25.657927 2126 server.go:956] "Client rotation is on, will bootstrap in background" Mar 12 01:40:25.688474 kubelet[2126]: E0312 01:40:25.688380 2126 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.151:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.151:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 12 01:40:25.691211 kubelet[2126]: I0312 01:40:25.691167 2126 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 12 01:40:25.695878 kubelet[2126]: E0312 01:40:25.695783 2126 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 12 
01:40:25.695878 kubelet[2126]: I0312 01:40:25.695828 2126 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 12 01:40:25.701823 kubelet[2126]: I0312 01:40:25.701763 2126 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 12 01:40:25.702544 kubelet[2126]: I0312 01:40:25.702463 2126 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 12 01:40:25.702787 kubelet[2126]: I0312 01:40:25.702509 2126 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 12 01:40:25.702787 kubelet[2126]: I0312 01:40:25.702747 2126 topology_manager.go:138] "Creating topology manager with none policy"
Mar 12 01:40:25.702787 kubelet[2126]: I0312 01:40:25.702757 2126 container_manager_linux.go:303] "Creating device plugin manager"
Mar 12 01:40:25.702970 kubelet[2126]: I0312 01:40:25.702878 2126 state_mem.go:36] "Initialized new in-memory state store"
Mar 12 01:40:25.706276 kubelet[2126]: I0312 01:40:25.706204 2126 kubelet.go:480] "Attempting to sync node with API server"
Mar 12 01:40:25.706276 kubelet[2126]: I0312 01:40:25.706239 2126 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 12 01:40:25.706276 kubelet[2126]: I0312 01:40:25.706263 2126 kubelet.go:386] "Adding apiserver pod source"
Mar 12 01:40:25.707769 kubelet[2126]: I0312 01:40:25.707696 2126 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 12 01:40:25.712887 kubelet[2126]: E0312 01:40:25.711571 2126 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.151:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.151:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 12 01:40:25.712887 kubelet[2126]: I0312 01:40:25.711659 2126 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 12 01:40:25.712887 kubelet[2126]: E0312 01:40:25.711819 2126 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.151:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.151:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 12 01:40:25.712887 kubelet[2126]: I0312 01:40:25.712595 2126 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 12 01:40:25.713549 kubelet[2126]: W0312 01:40:25.713480 2126 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 12 01:40:25.720353 kubelet[2126]: I0312 01:40:25.720293 2126 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 12 01:40:25.720571 kubelet[2126]: I0312 01:40:25.720528 2126 server.go:1289] "Started kubelet"
Mar 12 01:40:25.721270 kubelet[2126]: I0312 01:40:25.721168 2126 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 12 01:40:25.722388 kubelet[2126]: I0312 01:40:25.722163 2126 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 12 01:40:25.725243 kubelet[2126]: I0312 01:40:25.724102 2126 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 12 01:40:25.725243 kubelet[2126]: I0312 01:40:25.724160 2126 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 12 01:40:25.726167 kubelet[2126]: I0312 01:40:25.726114 2126 server.go:317] "Adding debug handlers to kubelet server"
Mar 12 01:40:25.727389 kubelet[2126]: I0312 01:40:25.727326 2126 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 12 01:40:25.728491 kubelet[2126]: E0312 01:40:25.728352 2126 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 12 01:40:25.728491 kubelet[2126]: I0312 01:40:25.728436 2126 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 12 01:40:25.729451 kubelet[2126]: E0312 01:40:25.727975 2126 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.151:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.151:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189bf46ca27f7876 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-12 01:40:25.720330358 +0000 UTC m=+0.610253076,LastTimestamp:2026-03-12 01:40:25.720330358 +0000 UTC m=+0.610253076,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 12 01:40:25.730130 kubelet[2126]: I0312 01:40:25.730030 2126 factory.go:223] Registration of the systemd container factory successfully
Mar 12 01:40:25.730172 kubelet[2126]: I0312 01:40:25.730147 2126 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 12 01:40:25.730781 kubelet[2126]: I0312 01:40:25.730345 2126 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Mar 12 01:40:25.730781 kubelet[2126]: I0312 01:40:25.730458 2126 reconciler.go:26] "Reconciler: start to sync state"
Mar 12 01:40:25.731366 kubelet[2126]: E0312 01:40:25.731297 2126 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.151:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.151:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 12 01:40:25.731493 kubelet[2126]: E0312 01:40:25.731419 2126 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 12 01:40:25.732140 kubelet[2126]: I0312 01:40:25.732073 2126 factory.go:223] Registration of the containerd container factory successfully
Mar 12 01:40:25.732465 kubelet[2126]: E0312 01:40:25.732334 2126 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.151:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.151:6443: connect: connection refused" interval="200ms"
Mar 12 01:40:25.747133 kubelet[2126]: I0312 01:40:25.747045 2126 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 12 01:40:25.747133 kubelet[2126]: I0312 01:40:25.747131 2126 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 12 01:40:25.747201 kubelet[2126]: I0312 01:40:25.747147 2126 state_mem.go:36] "Initialized new in-memory state store"
Mar 12 01:40:25.819366 kubelet[2126]: I0312 01:40:25.819267 2126 policy_none.go:49] "None policy: Start"
Mar 12 01:40:25.819503 kubelet[2126]: I0312 01:40:25.819407 2126 memory_manager.go:186] "Starting memorymanager" policy="None"
Mar 12 01:40:25.819503 kubelet[2126]: I0312 01:40:25.819427 2126 state_mem.go:35] "Initializing new in-memory state store"
Mar 12 01:40:25.827148 kubelet[2126]: I0312 01:40:25.826747 2126 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Mar 12 01:40:25.828508 kubelet[2126]: E0312 01:40:25.828462 2126 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 12 01:40:25.828859 kubelet[2126]: I0312 01:40:25.828827 2126 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Mar 12 01:40:25.828859 kubelet[2126]: I0312 01:40:25.828849 2126 status_manager.go:230] "Starting to sync pod status with apiserver"
Mar 12 01:40:25.828940 kubelet[2126]: I0312 01:40:25.828875 2126 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 12 01:40:25.828940 kubelet[2126]: I0312 01:40:25.828881 2126 kubelet.go:2436] "Starting kubelet main sync loop"
Mar 12 01:40:25.828940 kubelet[2126]: E0312 01:40:25.828918 2126 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 12 01:40:25.830446 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 12 01:40:25.831673 kubelet[2126]: E0312 01:40:25.831193 2126 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.151:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.151:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 12 01:40:25.843827 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 12 01:40:25.848379 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 12 01:40:25.858886 kubelet[2126]: E0312 01:40:25.858818 2126 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 12 01:40:25.859556 kubelet[2126]: I0312 01:40:25.859279 2126 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 12 01:40:25.859556 kubelet[2126]: I0312 01:40:25.859330 2126 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 12 01:40:25.860101 kubelet[2126]: I0312 01:40:25.859845 2126 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 12 01:40:25.860101 kubelet[2126]: E0312 01:40:25.862143 2126 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 12 01:40:25.860101 kubelet[2126]: E0312 01:40:25.862182 2126 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Mar 12 01:40:25.934426 kubelet[2126]: E0312 01:40:25.933822 2126 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.151:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.151:6443: connect: connection refused" interval="400ms"
Mar 12 01:40:25.943179 systemd[1]: Created slice kubepods-burstable-podef6b06d17cbfba7ae821fed1c4b3a3fd.slice - libcontainer container kubepods-burstable-podef6b06d17cbfba7ae821fed1c4b3a3fd.slice.
Mar 12 01:40:25.961543 kubelet[2126]: I0312 01:40:25.961433 2126 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 12 01:40:25.962447 kubelet[2126]: E0312 01:40:25.962039 2126 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 12 01:40:25.962447 kubelet[2126]: E0312 01:40:25.962173 2126 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.151:6443/api/v1/nodes\": dial tcp 10.0.0.151:6443: connect: connection refused" node="localhost"
Mar 12 01:40:25.966426 systemd[1]: Created slice kubepods-burstable-pod8747e1f8a49a618fbc1324a8fe2d3754.slice - libcontainer container kubepods-burstable-pod8747e1f8a49a618fbc1324a8fe2d3754.slice.
Mar 12 01:40:25.969396 kubelet[2126]: E0312 01:40:25.969349 2126 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 12 01:40:25.972653 systemd[1]: Created slice kubepods-burstable-pode944e4cb17af904786c3a2e01e298498.slice - libcontainer container kubepods-burstable-pode944e4cb17af904786c3a2e01e298498.slice.
Mar 12 01:40:25.974900 kubelet[2126]: E0312 01:40:25.974833 2126 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 12 01:40:26.031305 kubelet[2126]: I0312 01:40:26.031146 2126 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ef6b06d17cbfba7ae821fed1c4b3a3fd-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ef6b06d17cbfba7ae821fed1c4b3a3fd\") " pod="kube-system/kube-apiserver-localhost"
Mar 12 01:40:26.031305 kubelet[2126]: I0312 01:40:26.031216 2126 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ef6b06d17cbfba7ae821fed1c4b3a3fd-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ef6b06d17cbfba7ae821fed1c4b3a3fd\") " pod="kube-system/kube-apiserver-localhost"
Mar 12 01:40:26.031305 kubelet[2126]: I0312 01:40:26.031235 2126 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ef6b06d17cbfba7ae821fed1c4b3a3fd-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ef6b06d17cbfba7ae821fed1c4b3a3fd\") " pod="kube-system/kube-apiserver-localhost"
Mar 12 01:40:26.131785 kubelet[2126]: I0312 01:40:26.131650 2126 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 12 01:40:26.131785 kubelet[2126]: I0312 01:40:26.131689 2126 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 12 01:40:26.131980 kubelet[2126]: I0312 01:40:26.131815 2126 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 12 01:40:26.131980 kubelet[2126]: I0312 01:40:26.131877 2126 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 12 01:40:26.131980 kubelet[2126]: I0312 01:40:26.131900 2126 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 12 01:40:26.131980 kubelet[2126]: I0312 01:40:26.131917 2126 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e944e4cb17af904786c3a2e01e298498-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e944e4cb17af904786c3a2e01e298498\") " pod="kube-system/kube-scheduler-localhost"
Mar 12 01:40:26.164750 kubelet[2126]: I0312 01:40:26.164599 2126 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 12 01:40:26.165161 kubelet[2126]: E0312 01:40:26.165018 2126 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.151:6443/api/v1/nodes\": dial tcp 10.0.0.151:6443: connect: connection refused" node="localhost"
Mar 12 01:40:26.263240 kubelet[2126]: E0312 01:40:26.262994 2126 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:40:26.264142 containerd[1452]: time="2026-03-12T01:40:26.264106731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ef6b06d17cbfba7ae821fed1c4b3a3fd,Namespace:kube-system,Attempt:0,}"
Mar 12 01:40:26.270529 kubelet[2126]: E0312 01:40:26.270431 2126 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:40:26.271128 containerd[1452]: time="2026-03-12T01:40:26.271048817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8747e1f8a49a618fbc1324a8fe2d3754,Namespace:kube-system,Attempt:0,}"
Mar 12 01:40:26.276159 kubelet[2126]: E0312 01:40:26.276048 2126 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:40:26.276694 containerd[1452]: time="2026-03-12T01:40:26.276651774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e944e4cb17af904786c3a2e01e298498,Namespace:kube-system,Attempt:0,}"
Mar 12 01:40:26.335365 kubelet[2126]: E0312 01:40:26.335255 2126 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.151:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.151:6443: connect: connection refused" interval="800ms"
Mar 12 01:40:26.567478 kubelet[2126]: I0312 01:40:26.567265 2126 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 12 01:40:26.567692 kubelet[2126]: E0312 01:40:26.567596 2126 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.151:6443/api/v1/nodes\": dial tcp 10.0.0.151:6443: connect: connection refused" node="localhost"
Mar 12 01:40:26.684496 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1672642203.mount: Deactivated successfully.
Mar 12 01:40:26.693082 containerd[1452]: time="2026-03-12T01:40:26.692978004Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 12 01:40:26.698064 containerd[1452]: time="2026-03-12T01:40:26.698014270Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Mar 12 01:40:26.699133 containerd[1452]: time="2026-03-12T01:40:26.699040766Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 12 01:40:26.700265 containerd[1452]: time="2026-03-12T01:40:26.700209104Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 12 01:40:26.704410 containerd[1452]: time="2026-03-12T01:40:26.704339034Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 12 01:40:26.707882 containerd[1452]: time="2026-03-12T01:40:26.707759444Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 12 01:40:26.710285 containerd[1452]: time="2026-03-12T01:40:26.710193535Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 446.009719ms"
Mar 12 01:40:26.711130 containerd[1452]: time="2026-03-12T01:40:26.710520528Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 12 01:40:26.711800 containerd[1452]: time="2026-03-12T01:40:26.711686265Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 12 01:40:26.716405 containerd[1452]: time="2026-03-12T01:40:26.716040544Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 444.888926ms"
Mar 12 01:40:26.721593 containerd[1452]: time="2026-03-12T01:40:26.721492429Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 444.704801ms"
Mar 12 01:40:26.822369 containerd[1452]: time="2026-03-12T01:40:26.821813981Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 12 01:40:26.822369 containerd[1452]: time="2026-03-12T01:40:26.821849327Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 12 01:40:26.822369 containerd[1452]: time="2026-03-12T01:40:26.821858835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 12 01:40:26.822369 containerd[1452]: time="2026-03-12T01:40:26.821928345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 12 01:40:26.832203 containerd[1452]: time="2026-03-12T01:40:26.832056930Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 12 01:40:26.832442 containerd[1452]: time="2026-03-12T01:40:26.832133192Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 12 01:40:26.833046 containerd[1452]: time="2026-03-12T01:40:26.832919064Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 12 01:40:26.833386 containerd[1452]: time="2026-03-12T01:40:26.833257084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 12 01:40:26.851262 containerd[1452]: time="2026-03-12T01:40:26.850984636Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 12 01:40:26.851262 containerd[1452]: time="2026-03-12T01:40:26.851036423Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 12 01:40:26.851262 containerd[1452]: time="2026-03-12T01:40:26.851049747Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 12 01:40:26.851571 containerd[1452]: time="2026-03-12T01:40:26.851500289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 12 01:40:26.853891 systemd[1]: Started cri-containerd-21fad58e1e8f310702d0f300317085a9ab00ff546d9a14d6988af73f222f386c.scope - libcontainer container 21fad58e1e8f310702d0f300317085a9ab00ff546d9a14d6988af73f222f386c.
Mar 12 01:40:26.859776 systemd[1]: Started cri-containerd-42ad3aca775af45f690fc1d9f59a5a772490ba83fe53749003829115d387a014.scope - libcontainer container 42ad3aca775af45f690fc1d9f59a5a772490ba83fe53749003829115d387a014.
Mar 12 01:40:26.886525 systemd[1]: Started cri-containerd-e52bcfb442efe13f428338410b4f21669fba12bd95e3e0503c294d98d8a82c38.scope - libcontainer container e52bcfb442efe13f428338410b4f21669fba12bd95e3e0503c294d98d8a82c38.
Mar 12 01:40:26.908199 kubelet[2126]: E0312 01:40:26.908152 2126 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.151:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.151:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 12 01:40:26.928596 containerd[1452]: time="2026-03-12T01:40:26.928567049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8747e1f8a49a618fbc1324a8fe2d3754,Namespace:kube-system,Attempt:0,} returns sandbox id \"42ad3aca775af45f690fc1d9f59a5a772490ba83fe53749003829115d387a014\""
Mar 12 01:40:26.929925 containerd[1452]: time="2026-03-12T01:40:26.929812804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e944e4cb17af904786c3a2e01e298498,Namespace:kube-system,Attempt:0,} returns sandbox id \"21fad58e1e8f310702d0f300317085a9ab00ff546d9a14d6988af73f222f386c\""
Mar 12 01:40:26.932546 kubelet[2126]: E0312 01:40:26.932269 2126 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:40:26.932546 kubelet[2126]: E0312 01:40:26.932465 2126 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:40:26.939962 containerd[1452]: time="2026-03-12T01:40:26.939915780Z" level=info msg="CreateContainer within sandbox \"42ad3aca775af45f690fc1d9f59a5a772490ba83fe53749003829115d387a014\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 12 01:40:26.941843 containerd[1452]: time="2026-03-12T01:40:26.941535839Z" level=info msg="CreateContainer within sandbox \"21fad58e1e8f310702d0f300317085a9ab00ff546d9a14d6988af73f222f386c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 12 01:40:26.943169 kubelet[2126]: E0312 01:40:26.943147 2126 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.151:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.151:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 12 01:40:26.948031 containerd[1452]: time="2026-03-12T01:40:26.947986880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ef6b06d17cbfba7ae821fed1c4b3a3fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"e52bcfb442efe13f428338410b4f21669fba12bd95e3e0503c294d98d8a82c38\""
Mar 12 01:40:26.948936 kubelet[2126]: E0312 01:40:26.948768 2126 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:40:26.953585 containerd[1452]: time="2026-03-12T01:40:26.953516452Z" level=info msg="CreateContainer within sandbox \"e52bcfb442efe13f428338410b4f21669fba12bd95e3e0503c294d98d8a82c38\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 12 01:40:26.963979 containerd[1452]: time="2026-03-12T01:40:26.963920843Z" level=info msg="CreateContainer within sandbox \"21fad58e1e8f310702d0f300317085a9ab00ff546d9a14d6988af73f222f386c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"24883c00b00a5125dd186d714457c4eae3d3799e411ca70aa4f84a7704903b6e\""
Mar 12 01:40:26.966493 containerd[1452]: time="2026-03-12T01:40:26.965046119Z" level=info msg="StartContainer for \"24883c00b00a5125dd186d714457c4eae3d3799e411ca70aa4f84a7704903b6e\""
Mar 12 01:40:26.969873 containerd[1452]: time="2026-03-12T01:40:26.969803959Z" level=info msg="CreateContainer within sandbox \"42ad3aca775af45f690fc1d9f59a5a772490ba83fe53749003829115d387a014\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"908c4366c7ad719979e957cdab35f21ec5fc838ac61583b9376bc0668612bc46\""
Mar 12 01:40:26.972937 containerd[1452]: time="2026-03-12T01:40:26.971847109Z" level=info msg="StartContainer for \"908c4366c7ad719979e957cdab35f21ec5fc838ac61583b9376bc0668612bc46\""
Mar 12 01:40:26.985134 containerd[1452]: time="2026-03-12T01:40:26.985082806Z" level=info msg="CreateContainer within sandbox \"e52bcfb442efe13f428338410b4f21669fba12bd95e3e0503c294d98d8a82c38\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"464dde6de01c36098827a40237afb893181cf46f791ca8a3d1c31d29942ba6d8\""
Mar 12 01:40:26.985769 containerd[1452]: time="2026-03-12T01:40:26.985527760Z" level=info msg="StartContainer for \"464dde6de01c36098827a40237afb893181cf46f791ca8a3d1c31d29942ba6d8\""
Mar 12 01:40:27.004956 systemd[1]: Started cri-containerd-908c4366c7ad719979e957cdab35f21ec5fc838ac61583b9376bc0668612bc46.scope - libcontainer container 908c4366c7ad719979e957cdab35f21ec5fc838ac61583b9376bc0668612bc46.
Mar 12 01:40:27.008168 systemd[1]: Started cri-containerd-24883c00b00a5125dd186d714457c4eae3d3799e411ca70aa4f84a7704903b6e.scope - libcontainer container 24883c00b00a5125dd186d714457c4eae3d3799e411ca70aa4f84a7704903b6e.
Mar 12 01:40:27.032904 systemd[1]: Started cri-containerd-464dde6de01c36098827a40237afb893181cf46f791ca8a3d1c31d29942ba6d8.scope - libcontainer container 464dde6de01c36098827a40237afb893181cf46f791ca8a3d1c31d29942ba6d8.
Mar 12 01:40:27.056082 containerd[1452]: time="2026-03-12T01:40:27.056001978Z" level=info msg="StartContainer for \"908c4366c7ad719979e957cdab35f21ec5fc838ac61583b9376bc0668612bc46\" returns successfully"
Mar 12 01:40:27.078920 containerd[1452]: time="2026-03-12T01:40:27.078807021Z" level=info msg="StartContainer for \"24883c00b00a5125dd186d714457c4eae3d3799e411ca70aa4f84a7704903b6e\" returns successfully"
Mar 12 01:40:27.086607 containerd[1452]: time="2026-03-12T01:40:27.086400172Z" level=info msg="StartContainer for \"464dde6de01c36098827a40237afb893181cf46f791ca8a3d1c31d29942ba6d8\" returns successfully"
Mar 12 01:40:27.137772 kubelet[2126]: E0312 01:40:27.136381 2126 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.151:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.151:6443: connect: connection refused" interval="1.6s"
Mar 12 01:40:27.369930 kubelet[2126]: I0312 01:40:27.369456 2126 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 12 01:40:27.854427 kubelet[2126]: E0312 01:40:27.854369 2126 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 12 01:40:27.854546 kubelet[2126]: E0312 01:40:27.854525 2126 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:40:27.863835 kubelet[2126]: E0312 01:40:27.863016 2126 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 12 01:40:27.863835 kubelet[2126]: E0312 01:40:27.863116 2126 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:40:27.866472 kubelet[2126]: E0312 01:40:27.866412 2126 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 12 01:40:27.866605 kubelet[2126]: E0312 01:40:27.866553 2126 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:40:28.871052 kubelet[2126]: E0312 01:40:28.871026 2126 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 12 01:40:28.872039 kubelet[2126]: E0312 01:40:28.871589 2126 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:40:28.872571 kubelet[2126]: E0312 01:40:28.872418 2126 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 12 01:40:28.872571 kubelet[2126]: E0312 01:40:28.872495 2126 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:40:28.940686 kubelet[2126]: E0312 01:40:28.940600 2126 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Mar 12 01:40:29.021193 kubelet[2126]: I0312 01:40:29.021106 2126 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Mar 12 01:40:29.021193 kubelet[2126]: E0312 01:40:29.021185 2126 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Mar 12 01:40:29.031823 kubelet[2126]: I0312 01:40:29.031755 2126 kubelet.go:3309] "Creating a mirror pod
for static pod" pod="kube-system/kube-apiserver-localhost" Mar 12 01:40:29.044018 kubelet[2126]: E0312 01:40:29.043939 2126 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Mar 12 01:40:29.044018 kubelet[2126]: I0312 01:40:29.043986 2126 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 12 01:40:29.045607 kubelet[2126]: E0312 01:40:29.045503 2126 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Mar 12 01:40:29.045607 kubelet[2126]: I0312 01:40:29.045552 2126 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 12 01:40:29.047093 kubelet[2126]: E0312 01:40:29.047049 2126 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Mar 12 01:40:29.714081 kubelet[2126]: I0312 01:40:29.714019 2126 apiserver.go:52] "Watching apiserver" Mar 12 01:40:29.731137 kubelet[2126]: I0312 01:40:29.731064 2126 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 12 01:40:30.431246 kubelet[2126]: I0312 01:40:30.431110 2126 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 12 01:40:30.440134 kubelet[2126]: E0312 01:40:30.440011 2126 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:40:30.872853 kubelet[2126]: E0312 01:40:30.872550 2126 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:40:31.040023 systemd[1]: Reloading requested from client PID 2422 ('systemctl') (unit session-7.scope)... Mar 12 01:40:31.040072 systemd[1]: Reloading... Mar 12 01:40:31.122824 zram_generator::config[2464]: No configuration found. Mar 12 01:40:31.223959 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 12 01:40:31.305762 systemd[1]: Reloading finished in 265 ms. Mar 12 01:40:31.355147 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 01:40:31.355661 kubelet[2126]: I0312 01:40:31.355541 2126 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 12 01:40:31.376073 systemd[1]: kubelet.service: Deactivated successfully. Mar 12 01:40:31.376431 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 01:40:31.376523 systemd[1]: kubelet.service: Consumed 1.269s CPU time, 133.6M memory peak, 0B memory swap peak. Mar 12 01:40:31.387056 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 01:40:31.545876 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 01:40:31.562365 (kubelet)[2506]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 12 01:40:31.618014 kubelet[2506]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 12 01:40:31.618014 kubelet[2506]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Mar 12 01:40:31.618014 kubelet[2506]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 12 01:40:31.618364 kubelet[2506]: I0312 01:40:31.618028 2506 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 12 01:40:31.625546 kubelet[2506]: I0312 01:40:31.625465 2506 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 12 01:40:31.625546 kubelet[2506]: I0312 01:40:31.625509 2506 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 12 01:40:31.625985 kubelet[2506]: I0312 01:40:31.625884 2506 server.go:956] "Client rotation is on, will bootstrap in background" Mar 12 01:40:31.627415 kubelet[2506]: I0312 01:40:31.627240 2506 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 12 01:40:31.629874 kubelet[2506]: I0312 01:40:31.629794 2506 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 12 01:40:31.635555 kubelet[2506]: E0312 01:40:31.635505 2506 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 12 01:40:31.635601 kubelet[2506]: I0312 01:40:31.635560 2506 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 12 01:40:31.643375 kubelet[2506]: I0312 01:40:31.643304 2506 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 12 01:40:31.643704 kubelet[2506]: I0312 01:40:31.643616 2506 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 12 01:40:31.643945 kubelet[2506]: I0312 01:40:31.643689 2506 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 12 01:40:31.643945 kubelet[2506]: I0312 01:40:31.643938 2506 topology_manager.go:138] "Creating topology manager with none policy" Mar 12 01:40:31.644074 
kubelet[2506]: I0312 01:40:31.643948 2506 container_manager_linux.go:303] "Creating device plugin manager" Mar 12 01:40:31.644074 kubelet[2506]: I0312 01:40:31.643997 2506 state_mem.go:36] "Initialized new in-memory state store" Mar 12 01:40:31.644259 kubelet[2506]: I0312 01:40:31.644213 2506 kubelet.go:480] "Attempting to sync node with API server" Mar 12 01:40:31.644259 kubelet[2506]: I0312 01:40:31.644242 2506 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 12 01:40:31.644315 kubelet[2506]: I0312 01:40:31.644265 2506 kubelet.go:386] "Adding apiserver pod source" Mar 12 01:40:31.644315 kubelet[2506]: I0312 01:40:31.644313 2506 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 12 01:40:31.646155 kubelet[2506]: I0312 01:40:31.646062 2506 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 12 01:40:31.646850 kubelet[2506]: I0312 01:40:31.646814 2506 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 12 01:40:31.650317 kubelet[2506]: I0312 01:40:31.650239 2506 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 12 01:40:31.650317 kubelet[2506]: I0312 01:40:31.650267 2506 server.go:1289] "Started kubelet" Mar 12 01:40:31.653458 kubelet[2506]: I0312 01:40:31.652923 2506 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 12 01:40:31.653458 kubelet[2506]: I0312 01:40:31.652945 2506 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 12 01:40:31.653458 kubelet[2506]: I0312 01:40:31.653188 2506 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 12 01:40:31.653458 kubelet[2506]: I0312 01:40:31.653227 2506 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 12 01:40:31.654168 
kubelet[2506]: I0312 01:40:31.654044 2506 server.go:317] "Adding debug handlers to kubelet server" Mar 12 01:40:31.659664 kubelet[2506]: I0312 01:40:31.658372 2506 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 12 01:40:31.659664 kubelet[2506]: I0312 01:40:31.658404 2506 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 12 01:40:31.659664 kubelet[2506]: I0312 01:40:31.658696 2506 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 12 01:40:31.659664 kubelet[2506]: I0312 01:40:31.658893 2506 reconciler.go:26] "Reconciler: start to sync state" Mar 12 01:40:31.663870 kubelet[2506]: I0312 01:40:31.663831 2506 factory.go:223] Registration of the systemd container factory successfully Mar 12 01:40:31.663966 kubelet[2506]: I0312 01:40:31.663928 2506 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 12 01:40:31.665495 kubelet[2506]: E0312 01:40:31.665466 2506 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 12 01:40:31.667924 kubelet[2506]: I0312 01:40:31.667853 2506 factory.go:223] Registration of the containerd container factory successfully Mar 12 01:40:31.684049 kubelet[2506]: I0312 01:40:31.683959 2506 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 12 01:40:31.685903 kubelet[2506]: I0312 01:40:31.685693 2506 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Mar 12 01:40:31.685962 kubelet[2506]: I0312 01:40:31.685931 2506 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 12 01:40:31.685962 kubelet[2506]: I0312 01:40:31.685950 2506 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 12 01:40:31.685962 kubelet[2506]: I0312 01:40:31.685957 2506 kubelet.go:2436] "Starting kubelet main sync loop" Mar 12 01:40:31.686037 kubelet[2506]: E0312 01:40:31.685997 2506 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 12 01:40:31.704820 kubelet[2506]: I0312 01:40:31.704783 2506 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 12 01:40:31.704820 kubelet[2506]: I0312 01:40:31.704815 2506 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 12 01:40:31.704925 kubelet[2506]: I0312 01:40:31.704833 2506 state_mem.go:36] "Initialized new in-memory state store" Mar 12 01:40:31.704956 kubelet[2506]: I0312 01:40:31.704942 2506 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 12 01:40:31.704978 kubelet[2506]: I0312 01:40:31.704951 2506 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 12 01:40:31.704978 kubelet[2506]: I0312 01:40:31.704965 2506 policy_none.go:49] "None policy: Start" Mar 12 01:40:31.704978 kubelet[2506]: I0312 01:40:31.704974 2506 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 12 01:40:31.705031 kubelet[2506]: I0312 01:40:31.704983 2506 state_mem.go:35] "Initializing new in-memory state store" Mar 12 01:40:31.705093 kubelet[2506]: I0312 01:40:31.705060 2506 state_mem.go:75] "Updated machine memory state" Mar 12 01:40:31.710189 kubelet[2506]: E0312 01:40:31.710140 2506 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 12 01:40:31.710372 kubelet[2506]: I0312 
01:40:31.710320 2506 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 12 01:40:31.710372 kubelet[2506]: I0312 01:40:31.710360 2506 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 12 01:40:31.711416 kubelet[2506]: I0312 01:40:31.711390 2506 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 12 01:40:31.712055 kubelet[2506]: E0312 01:40:31.712021 2506 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 12 01:40:31.787121 kubelet[2506]: I0312 01:40:31.786910 2506 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 12 01:40:31.787121 kubelet[2506]: I0312 01:40:31.786944 2506 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 12 01:40:31.787121 kubelet[2506]: I0312 01:40:31.787015 2506 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 12 01:40:31.796882 kubelet[2506]: E0312 01:40:31.796758 2506 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Mar 12 01:40:31.819044 kubelet[2506]: I0312 01:40:31.818981 2506 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 12 01:40:31.827385 kubelet[2506]: I0312 01:40:31.827344 2506 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Mar 12 01:40:31.827454 kubelet[2506]: I0312 01:40:31.827397 2506 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 12 01:40:31.860197 kubelet[2506]: I0312 01:40:31.860118 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/ef6b06d17cbfba7ae821fed1c4b3a3fd-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ef6b06d17cbfba7ae821fed1c4b3a3fd\") " pod="kube-system/kube-apiserver-localhost" Mar 12 01:40:31.860197 kubelet[2506]: I0312 01:40:31.860184 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:40:31.860313 kubelet[2506]: I0312 01:40:31.860213 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:40:31.860313 kubelet[2506]: I0312 01:40:31.860239 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:40:31.860313 kubelet[2506]: I0312 01:40:31.860263 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e944e4cb17af904786c3a2e01e298498-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e944e4cb17af904786c3a2e01e298498\") " pod="kube-system/kube-scheduler-localhost" Mar 12 01:40:31.860313 kubelet[2506]: I0312 01:40:31.860284 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/host-path/ef6b06d17cbfba7ae821fed1c4b3a3fd-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ef6b06d17cbfba7ae821fed1c4b3a3fd\") " pod="kube-system/kube-apiserver-localhost" Mar 12 01:40:31.860313 kubelet[2506]: I0312 01:40:31.860307 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ef6b06d17cbfba7ae821fed1c4b3a3fd-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ef6b06d17cbfba7ae821fed1c4b3a3fd\") " pod="kube-system/kube-apiserver-localhost" Mar 12 01:40:31.860444 kubelet[2506]: I0312 01:40:31.860329 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:40:31.860444 kubelet[2506]: I0312 01:40:31.860351 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:40:32.097552 kubelet[2506]: E0312 01:40:32.096275 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:40:32.097552 kubelet[2506]: E0312 01:40:32.096285 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:40:32.097552 kubelet[2506]: E0312 01:40:32.097064 2506 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:40:32.645974 kubelet[2506]: I0312 01:40:32.645835 2506 apiserver.go:52] "Watching apiserver" Mar 12 01:40:32.659000 kubelet[2506]: I0312 01:40:32.658977 2506 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 12 01:40:32.697670 kubelet[2506]: I0312 01:40:32.697538 2506 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 12 01:40:32.698526 kubelet[2506]: I0312 01:40:32.697926 2506 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 12 01:40:32.700750 kubelet[2506]: E0312 01:40:32.699697 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:40:32.710675 kubelet[2506]: E0312 01:40:32.710600 2506 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Mar 12 01:40:32.711097 kubelet[2506]: E0312 01:40:32.710874 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:40:32.711097 kubelet[2506]: E0312 01:40:32.710964 2506 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 12 01:40:32.711097 kubelet[2506]: E0312 01:40:32.711065 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:40:32.738091 kubelet[2506]: I0312 01:40:32.738043 2506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.737978633 podStartE2EDuration="1.737978633s" podCreationTimestamp="2026-03-12 01:40:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 01:40:32.72892884 +0000 UTC m=+1.160384637" watchObservedRunningTime="2026-03-12 01:40:32.737978633 +0000 UTC m=+1.169434431" Mar 12 01:40:32.746057 kubelet[2506]: I0312 01:40:32.746016 2506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.746002373 podStartE2EDuration="2.746002373s" podCreationTimestamp="2026-03-12 01:40:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 01:40:32.738539523 +0000 UTC m=+1.169995350" watchObservedRunningTime="2026-03-12 01:40:32.746002373 +0000 UTC m=+1.177458171" Mar 12 01:40:32.756874 kubelet[2506]: I0312 01:40:32.756336 2506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.75631954 podStartE2EDuration="1.75631954s" podCreationTimestamp="2026-03-12 01:40:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 01:40:32.746092756 +0000 UTC m=+1.177548554" watchObservedRunningTime="2026-03-12 01:40:32.75631954 +0000 UTC m=+1.187775348" Mar 12 01:40:33.699934 kubelet[2506]: E0312 01:40:33.699836 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:40:33.699934 kubelet[2506]: E0312 01:40:33.699836 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:40:33.700995 kubelet[2506]: E0312 01:40:33.700438 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:40:37.280847 kubelet[2506]: I0312 01:40:37.280598 2506 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 12 01:40:37.281525 containerd[1452]: time="2026-03-12T01:40:37.281394565Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 12 01:40:37.282059 kubelet[2506]: I0312 01:40:37.281890 2506 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 12 01:40:38.335396 systemd[1]: Created slice kubepods-besteffort-podbeb3e76a_0ef8_44a8_b1eb_0eec5da53939.slice - libcontainer container kubepods-besteffort-podbeb3e76a_0ef8_44a8_b1eb_0eec5da53939.slice. Mar 12 01:40:38.407016 kubelet[2506]: I0312 01:40:38.406946 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/beb3e76a-0ef8-44a8-b1eb-0eec5da53939-kube-proxy\") pod \"kube-proxy-qvql2\" (UID: \"beb3e76a-0ef8-44a8-b1eb-0eec5da53939\") " pod="kube-system/kube-proxy-qvql2" Mar 12 01:40:38.407016 kubelet[2506]: I0312 01:40:38.407010 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/beb3e76a-0ef8-44a8-b1eb-0eec5da53939-xtables-lock\") pod \"kube-proxy-qvql2\" (UID: \"beb3e76a-0ef8-44a8-b1eb-0eec5da53939\") " pod="kube-system/kube-proxy-qvql2" Mar 12 01:40:38.407422 kubelet[2506]: I0312 01:40:38.407033 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/beb3e76a-0ef8-44a8-b1eb-0eec5da53939-lib-modules\") 
pod \"kube-proxy-qvql2\" (UID: \"beb3e76a-0ef8-44a8-b1eb-0eec5da53939\") " pod="kube-system/kube-proxy-qvql2" Mar 12 01:40:38.407422 kubelet[2506]: I0312 01:40:38.407050 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mf2r\" (UniqueName: \"kubernetes.io/projected/beb3e76a-0ef8-44a8-b1eb-0eec5da53939-kube-api-access-4mf2r\") pod \"kube-proxy-qvql2\" (UID: \"beb3e76a-0ef8-44a8-b1eb-0eec5da53939\") " pod="kube-system/kube-proxy-qvql2" Mar 12 01:40:38.494316 systemd[1]: Created slice kubepods-besteffort-pod62593675_b520_4d9b_8b9c_f373c3b3e0a1.slice - libcontainer container kubepods-besteffort-pod62593675_b520_4d9b_8b9c_f373c3b3e0a1.slice. Mar 12 01:40:38.508057 kubelet[2506]: I0312 01:40:38.507973 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/62593675-b520-4d9b-8b9c-f373c3b3e0a1-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-vpnll\" (UID: \"62593675-b520-4d9b-8b9c-f373c3b3e0a1\") " pod="tigera-operator/tigera-operator-6bf85f8dd-vpnll" Mar 12 01:40:38.508218 kubelet[2506]: I0312 01:40:38.508149 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v8kc\" (UniqueName: \"kubernetes.io/projected/62593675-b520-4d9b-8b9c-f373c3b3e0a1-kube-api-access-9v8kc\") pod \"tigera-operator-6bf85f8dd-vpnll\" (UID: \"62593675-b520-4d9b-8b9c-f373c3b3e0a1\") " pod="tigera-operator/tigera-operator-6bf85f8dd-vpnll" Mar 12 01:40:38.647053 kubelet[2506]: E0312 01:40:38.646889 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:40:38.648104 containerd[1452]: time="2026-03-12T01:40:38.647335161Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-qvql2,Uid:beb3e76a-0ef8-44a8-b1eb-0eec5da53939,Namespace:kube-system,Attempt:0,}" Mar 12 01:40:38.678182 containerd[1452]: time="2026-03-12T01:40:38.678008625Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:40:38.679052 containerd[1452]: time="2026-03-12T01:40:38.678875684Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:40:38.679052 containerd[1452]: time="2026-03-12T01:40:38.678909307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:40:38.679052 containerd[1452]: time="2026-03-12T01:40:38.679006206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:40:38.703882 systemd[1]: Started cri-containerd-e4675f8cf16488b4d8b1bbaf88bafec2c05d9091e1b3cda061eefe279922c88d.scope - libcontainer container e4675f8cf16488b4d8b1bbaf88bafec2c05d9091e1b3cda061eefe279922c88d. 
Mar 12 01:40:38.730281 containerd[1452]: time="2026-03-12T01:40:38.730247720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qvql2,Uid:beb3e76a-0ef8-44a8-b1eb-0eec5da53939,Namespace:kube-system,Attempt:0,} returns sandbox id \"e4675f8cf16488b4d8b1bbaf88bafec2c05d9091e1b3cda061eefe279922c88d\"" Mar 12 01:40:38.731308 kubelet[2506]: E0312 01:40:38.731117 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:40:38.738131 containerd[1452]: time="2026-03-12T01:40:38.738075305Z" level=info msg="CreateContainer within sandbox \"e4675f8cf16488b4d8b1bbaf88bafec2c05d9091e1b3cda061eefe279922c88d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 12 01:40:38.756741 containerd[1452]: time="2026-03-12T01:40:38.756671020Z" level=info msg="CreateContainer within sandbox \"e4675f8cf16488b4d8b1bbaf88bafec2c05d9091e1b3cda061eefe279922c88d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5551849a5ce581ed31e4cc71eec448c1ed31e1f7bcb2689b4ec1a832b8e20838\"" Mar 12 01:40:38.757284 containerd[1452]: time="2026-03-12T01:40:38.757222073Z" level=info msg="StartContainer for \"5551849a5ce581ed31e4cc71eec448c1ed31e1f7bcb2689b4ec1a832b8e20838\"" Mar 12 01:40:38.797009 systemd[1]: Started cri-containerd-5551849a5ce581ed31e4cc71eec448c1ed31e1f7bcb2689b4ec1a832b8e20838.scope - libcontainer container 5551849a5ce581ed31e4cc71eec448c1ed31e1f7bcb2689b4ec1a832b8e20838. 
Mar 12 01:40:38.800185 containerd[1452]: time="2026-03-12T01:40:38.800001598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-vpnll,Uid:62593675-b520-4d9b-8b9c-f373c3b3e0a1,Namespace:tigera-operator,Attempt:0,}" Mar 12 01:40:38.832000 containerd[1452]: time="2026-03-12T01:40:38.831902527Z" level=info msg="StartContainer for \"5551849a5ce581ed31e4cc71eec448c1ed31e1f7bcb2689b4ec1a832b8e20838\" returns successfully" Mar 12 01:40:38.833806 containerd[1452]: time="2026-03-12T01:40:38.832534247Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:40:38.833806 containerd[1452]: time="2026-03-12T01:40:38.832575483Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:40:38.833806 containerd[1452]: time="2026-03-12T01:40:38.832585000Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:40:38.833806 containerd[1452]: time="2026-03-12T01:40:38.832690957Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:40:38.854920 systemd[1]: Started cri-containerd-a178b0b43092baeaf0e5d6677dc8be78e06680d150b0140a4c7f28192dd955ef.scope - libcontainer container a178b0b43092baeaf0e5d6677dc8be78e06680d150b0140a4c7f28192dd955ef. 
Mar 12 01:40:38.898600 containerd[1452]: time="2026-03-12T01:40:38.898399545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-vpnll,Uid:62593675-b520-4d9b-8b9c-f373c3b3e0a1,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a178b0b43092baeaf0e5d6677dc8be78e06680d150b0140a4c7f28192dd955ef\"" Mar 12 01:40:38.903808 containerd[1452]: time="2026-03-12T01:40:38.903695863Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 12 01:40:39.714432 kubelet[2506]: E0312 01:40:39.714357 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:40:39.725188 kubelet[2506]: I0312 01:40:39.724887 2506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qvql2" podStartSLOduration=1.72487419 podStartE2EDuration="1.72487419s" podCreationTimestamp="2026-03-12 01:40:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 01:40:39.724364656 +0000 UTC m=+8.155820464" watchObservedRunningTime="2026-03-12 01:40:39.72487419 +0000 UTC m=+8.156329988" Mar 12 01:40:39.762208 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2026008746.mount: Deactivated successfully. 
Mar 12 01:40:39.811376 kubelet[2506]: E0312 01:40:39.811309 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:40:40.081072 kubelet[2506]: E0312 01:40:40.080918 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:40:40.717904 kubelet[2506]: E0312 01:40:40.717779 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:40:40.722947 kubelet[2506]: E0312 01:40:40.718534 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:40:41.237952 containerd[1452]: time="2026-03-12T01:40:41.237821444Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:40:41.238822 containerd[1452]: time="2026-03-12T01:40:41.238667902Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Mar 12 01:40:41.240391 containerd[1452]: time="2026-03-12T01:40:41.240347285Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:40:41.243420 containerd[1452]: time="2026-03-12T01:40:41.243353740Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:40:41.244445 containerd[1452]: time="2026-03-12T01:40:41.244387910Z" level=info msg="Pulled image 
\"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 2.34056941s" Mar 12 01:40:41.244445 containerd[1452]: time="2026-03-12T01:40:41.244436361Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Mar 12 01:40:41.249630 containerd[1452]: time="2026-03-12T01:40:41.249531712Z" level=info msg="CreateContainer within sandbox \"a178b0b43092baeaf0e5d6677dc8be78e06680d150b0140a4c7f28192dd955ef\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 12 01:40:41.265837 containerd[1452]: time="2026-03-12T01:40:41.265759733Z" level=info msg="CreateContainer within sandbox \"a178b0b43092baeaf0e5d6677dc8be78e06680d150b0140a4c7f28192dd955ef\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"04f67bb1481dcfc6e8a54668fcbe42bd4c4bdffb00fe8da27c09d6ef5a4d08ba\"" Mar 12 01:40:41.267034 containerd[1452]: time="2026-03-12T01:40:41.266908532Z" level=info msg="StartContainer for \"04f67bb1481dcfc6e8a54668fcbe42bd4c4bdffb00fe8da27c09d6ef5a4d08ba\"" Mar 12 01:40:41.315017 systemd[1]: Started cri-containerd-04f67bb1481dcfc6e8a54668fcbe42bd4c4bdffb00fe8da27c09d6ef5a4d08ba.scope - libcontainer container 04f67bb1481dcfc6e8a54668fcbe42bd4c4bdffb00fe8da27c09d6ef5a4d08ba. 
Mar 12 01:40:41.360213 containerd[1452]: time="2026-03-12T01:40:41.360161175Z" level=info msg="StartContainer for \"04f67bb1481dcfc6e8a54668fcbe42bd4c4bdffb00fe8da27c09d6ef5a4d08ba\" returns successfully" Mar 12 01:40:41.719988 kubelet[2506]: E0312 01:40:41.719942 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:40:41.731383 kubelet[2506]: I0312 01:40:41.731261 2506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-vpnll" podStartSLOduration=1.387043545 podStartE2EDuration="3.731248128s" podCreationTimestamp="2026-03-12 01:40:38 +0000 UTC" firstStartedPulling="2026-03-12 01:40:38.901246669 +0000 UTC m=+7.332702467" lastFinishedPulling="2026-03-12 01:40:41.245451253 +0000 UTC m=+9.676907050" observedRunningTime="2026-03-12 01:40:41.731044048 +0000 UTC m=+10.162499845" watchObservedRunningTime="2026-03-12 01:40:41.731248128 +0000 UTC m=+10.162703926" Mar 12 01:40:42.958100 kubelet[2506]: E0312 01:40:42.958023 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:40:46.933072 sudo[1628]: pam_unix(sudo:session): session closed for user root Mar 12 01:40:46.936905 sshd[1625]: pam_unix(sshd:session): session closed for user core Mar 12 01:40:46.940280 systemd[1]: sshd@6-10.0.0.151:22-10.0.0.1:38128.service: Deactivated successfully. Mar 12 01:40:46.944099 systemd[1]: session-7.scope: Deactivated successfully. Mar 12 01:40:46.944392 systemd[1]: session-7.scope: Consumed 5.967s CPU time, 158.1M memory peak, 0B memory swap peak. Mar 12 01:40:46.947092 systemd-logind[1437]: Session 7 logged out. Waiting for processes to exit. Mar 12 01:40:46.951002 systemd-logind[1437]: Removed session 7. 
Mar 12 01:40:49.044680 systemd[1]: Created slice kubepods-besteffort-poda78a8efa_c70e_4a42_ad7e_bc7faf118b36.slice - libcontainer container kubepods-besteffort-poda78a8efa_c70e_4a42_ad7e_bc7faf118b36.slice. Mar 12 01:40:49.053399 systemd[1]: Created slice kubepods-besteffort-pod47a99c3d_b896_4bd1_aef2_be60e1cfaff9.slice - libcontainer container kubepods-besteffort-pod47a99c3d_b896_4bd1_aef2_be60e1cfaff9.slice. Mar 12 01:40:49.082652 kubelet[2506]: I0312 01:40:49.082501 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/47a99c3d-b896-4bd1-aef2-be60e1cfaff9-sys-fs\") pod \"calico-node-6vfkj\" (UID: \"47a99c3d-b896-4bd1-aef2-be60e1cfaff9\") " pod="calico-system/calico-node-6vfkj" Mar 12 01:40:49.083124 kubelet[2506]: I0312 01:40:49.082674 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/47a99c3d-b896-4bd1-aef2-be60e1cfaff9-bpffs\") pod \"calico-node-6vfkj\" (UID: \"47a99c3d-b896-4bd1-aef2-be60e1cfaff9\") " pod="calico-system/calico-node-6vfkj" Mar 12 01:40:49.083124 kubelet[2506]: I0312 01:40:49.082786 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/47a99c3d-b896-4bd1-aef2-be60e1cfaff9-lib-modules\") pod \"calico-node-6vfkj\" (UID: \"47a99c3d-b896-4bd1-aef2-be60e1cfaff9\") " pod="calico-system/calico-node-6vfkj" Mar 12 01:40:49.083124 kubelet[2506]: I0312 01:40:49.082812 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/47a99c3d-b896-4bd1-aef2-be60e1cfaff9-policysync\") pod \"calico-node-6vfkj\" (UID: \"47a99c3d-b896-4bd1-aef2-be60e1cfaff9\") " pod="calico-system/calico-node-6vfkj" Mar 12 01:40:49.083124 kubelet[2506]: I0312 01:40:49.082835 2506 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/47a99c3d-b896-4bd1-aef2-be60e1cfaff9-cni-log-dir\") pod \"calico-node-6vfkj\" (UID: \"47a99c3d-b896-4bd1-aef2-be60e1cfaff9\") " pod="calico-system/calico-node-6vfkj" Mar 12 01:40:49.083124 kubelet[2506]: I0312 01:40:49.082858 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/47a99c3d-b896-4bd1-aef2-be60e1cfaff9-cni-net-dir\") pod \"calico-node-6vfkj\" (UID: \"47a99c3d-b896-4bd1-aef2-be60e1cfaff9\") " pod="calico-system/calico-node-6vfkj" Mar 12 01:40:49.083355 kubelet[2506]: I0312 01:40:49.082880 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/47a99c3d-b896-4bd1-aef2-be60e1cfaff9-var-run-calico\") pod \"calico-node-6vfkj\" (UID: \"47a99c3d-b896-4bd1-aef2-be60e1cfaff9\") " pod="calico-system/calico-node-6vfkj" Mar 12 01:40:49.083355 kubelet[2506]: I0312 01:40:49.082906 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/47a99c3d-b896-4bd1-aef2-be60e1cfaff9-nodeproc\") pod \"calico-node-6vfkj\" (UID: \"47a99c3d-b896-4bd1-aef2-be60e1cfaff9\") " pod="calico-system/calico-node-6vfkj" Mar 12 01:40:49.083355 kubelet[2506]: I0312 01:40:49.082926 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a78a8efa-c70e-4a42-ad7e-bc7faf118b36-typha-certs\") pod \"calico-typha-7c578c7cbf-429kd\" (UID: \"a78a8efa-c70e-4a42-ad7e-bc7faf118b36\") " pod="calico-system/calico-typha-7c578c7cbf-429kd" Mar 12 01:40:49.083355 kubelet[2506]: I0312 01:40:49.082951 2506 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/47a99c3d-b896-4bd1-aef2-be60e1cfaff9-cni-bin-dir\") pod \"calico-node-6vfkj\" (UID: \"47a99c3d-b896-4bd1-aef2-be60e1cfaff9\") " pod="calico-system/calico-node-6vfkj" Mar 12 01:40:49.083355 kubelet[2506]: I0312 01:40:49.082971 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6phnb\" (UniqueName: \"kubernetes.io/projected/47a99c3d-b896-4bd1-aef2-be60e1cfaff9-kube-api-access-6phnb\") pod \"calico-node-6vfkj\" (UID: \"47a99c3d-b896-4bd1-aef2-be60e1cfaff9\") " pod="calico-system/calico-node-6vfkj" Mar 12 01:40:49.083464 kubelet[2506]: I0312 01:40:49.082993 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/47a99c3d-b896-4bd1-aef2-be60e1cfaff9-flexvol-driver-host\") pod \"calico-node-6vfkj\" (UID: \"47a99c3d-b896-4bd1-aef2-be60e1cfaff9\") " pod="calico-system/calico-node-6vfkj" Mar 12 01:40:49.083464 kubelet[2506]: I0312 01:40:49.083012 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/47a99c3d-b896-4bd1-aef2-be60e1cfaff9-node-certs\") pod \"calico-node-6vfkj\" (UID: \"47a99c3d-b896-4bd1-aef2-be60e1cfaff9\") " pod="calico-system/calico-node-6vfkj" Mar 12 01:40:49.083464 kubelet[2506]: I0312 01:40:49.083034 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/47a99c3d-b896-4bd1-aef2-be60e1cfaff9-xtables-lock\") pod \"calico-node-6vfkj\" (UID: \"47a99c3d-b896-4bd1-aef2-be60e1cfaff9\") " pod="calico-system/calico-node-6vfkj" Mar 12 01:40:49.083464 kubelet[2506]: I0312 01:40:49.083109 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/47a99c3d-b896-4bd1-aef2-be60e1cfaff9-var-lib-calico\") pod \"calico-node-6vfkj\" (UID: \"47a99c3d-b896-4bd1-aef2-be60e1cfaff9\") " pod="calico-system/calico-node-6vfkj" Mar 12 01:40:49.083464 kubelet[2506]: I0312 01:40:49.083133 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a78a8efa-c70e-4a42-ad7e-bc7faf118b36-tigera-ca-bundle\") pod \"calico-typha-7c578c7cbf-429kd\" (UID: \"a78a8efa-c70e-4a42-ad7e-bc7faf118b36\") " pod="calico-system/calico-typha-7c578c7cbf-429kd" Mar 12 01:40:49.083604 kubelet[2506]: I0312 01:40:49.083159 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47a99c3d-b896-4bd1-aef2-be60e1cfaff9-tigera-ca-bundle\") pod \"calico-node-6vfkj\" (UID: \"47a99c3d-b896-4bd1-aef2-be60e1cfaff9\") " pod="calico-system/calico-node-6vfkj" Mar 12 01:40:49.083604 kubelet[2506]: I0312 01:40:49.083182 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgj79\" (UniqueName: \"kubernetes.io/projected/a78a8efa-c70e-4a42-ad7e-bc7faf118b36-kube-api-access-rgj79\") pod \"calico-typha-7c578c7cbf-429kd\" (UID: \"a78a8efa-c70e-4a42-ad7e-bc7faf118b36\") " pod="calico-system/calico-typha-7c578c7cbf-429kd" Mar 12 01:40:49.143401 kubelet[2506]: E0312 01:40:49.143357 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p5zx6" podUID="82884de9-9bde-4318-9c0a-85cc3f92ad06" Mar 12 01:40:49.184637 kubelet[2506]: I0312 01:40:49.184498 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/82884de9-9bde-4318-9c0a-85cc3f92ad06-varrun\") pod \"csi-node-driver-p5zx6\" (UID: \"82884de9-9bde-4318-9c0a-85cc3f92ad06\") " pod="calico-system/csi-node-driver-p5zx6" Mar 12 01:40:49.184637 kubelet[2506]: I0312 01:40:49.184614 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/82884de9-9bde-4318-9c0a-85cc3f92ad06-socket-dir\") pod \"csi-node-driver-p5zx6\" (UID: \"82884de9-9bde-4318-9c0a-85cc3f92ad06\") " pod="calico-system/csi-node-driver-p5zx6" Mar 12 01:40:49.184637 kubelet[2506]: I0312 01:40:49.184635 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zjm6\" (UniqueName: \"kubernetes.io/projected/82884de9-9bde-4318-9c0a-85cc3f92ad06-kube-api-access-4zjm6\") pod \"csi-node-driver-p5zx6\" (UID: \"82884de9-9bde-4318-9c0a-85cc3f92ad06\") " pod="calico-system/csi-node-driver-p5zx6" Mar 12 01:40:49.184856 kubelet[2506]: I0312 01:40:49.184678 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/82884de9-9bde-4318-9c0a-85cc3f92ad06-kubelet-dir\") pod \"csi-node-driver-p5zx6\" (UID: \"82884de9-9bde-4318-9c0a-85cc3f92ad06\") " pod="calico-system/csi-node-driver-p5zx6" Mar 12 01:40:49.184856 kubelet[2506]: I0312 01:40:49.184692 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/82884de9-9bde-4318-9c0a-85cc3f92ad06-registration-dir\") pod \"csi-node-driver-p5zx6\" (UID: \"82884de9-9bde-4318-9c0a-85cc3f92ad06\") " pod="calico-system/csi-node-driver-p5zx6" Mar 12 01:40:49.189809 kubelet[2506]: E0312 01:40:49.189650 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 
01:40:49.190077 kubelet[2506]: W0312 01:40:49.189986 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:40:49.190077 kubelet[2506]: E0312 01:40:49.190027 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:40:49.202804 kubelet[2506]: E0312 01:40:49.202759 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:40:49.202804 kubelet[2506]: W0312 01:40:49.202781 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:40:49.202804 kubelet[2506]: E0312 01:40:49.202799 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:40:49.205050 kubelet[2506]: E0312 01:40:49.204977 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:40:49.205050 kubelet[2506]: W0312 01:40:49.205015 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:40:49.205122 kubelet[2506]: E0312 01:40:49.205094 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 01:40:49.205846 kubelet[2506]: E0312 01:40:49.205667 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:40:49.205846 kubelet[2506]: W0312 01:40:49.205805 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:40:49.205846 kubelet[2506]: E0312 01:40:49.205817 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:40:49.208778 kubelet[2506]: E0312 01:40:49.208438 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:40:49.208778 kubelet[2506]: W0312 01:40:49.208452 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:40:49.208778 kubelet[2506]: E0312 01:40:49.208462 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 01:40:49.286641 kubelet[2506]: E0312 01:40:49.286545 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:40:49.286641 kubelet[2506]: W0312 01:40:49.286614 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:40:49.286641 kubelet[2506]: E0312 01:40:49.286633 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:40:49.287207 kubelet[2506]: E0312 01:40:49.287135 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:40:49.287207 kubelet[2506]: W0312 01:40:49.287171 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:40:49.287207 kubelet[2506]: E0312 01:40:49.287190 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 01:40:49.287839 kubelet[2506]: E0312 01:40:49.287804 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:40:49.287839 kubelet[2506]: W0312 01:40:49.287834 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:40:49.287951 kubelet[2506]: E0312 01:40:49.287846 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:40:49.288279 kubelet[2506]: E0312 01:40:49.288245 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:40:49.288279 kubelet[2506]: W0312 01:40:49.288276 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:40:49.288352 kubelet[2506]: E0312 01:40:49.288290 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 01:40:49.288775 kubelet[2506]: E0312 01:40:49.288681 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:40:49.288775 kubelet[2506]: W0312 01:40:49.288768 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:40:49.288848 kubelet[2506]: E0312 01:40:49.288778 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:40:49.289157 kubelet[2506]: E0312 01:40:49.289126 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:40:49.289200 kubelet[2506]: W0312 01:40:49.289159 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:40:49.289200 kubelet[2506]: E0312 01:40:49.289176 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 01:40:49.289599 kubelet[2506]: E0312 01:40:49.289526 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:40:49.289680 kubelet[2506]: W0312 01:40:49.289642 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:40:49.289753 kubelet[2506]: E0312 01:40:49.289683 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:40:49.290104 kubelet[2506]: E0312 01:40:49.290074 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:40:49.290104 kubelet[2506]: W0312 01:40:49.290100 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:40:49.290161 kubelet[2506]: E0312 01:40:49.290109 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 01:40:49.290438 kubelet[2506]: E0312 01:40:49.290417 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:40:49.290438 kubelet[2506]: W0312 01:40:49.290433 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:40:49.290553 kubelet[2506]: E0312 01:40:49.290441 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:40:49.290903 kubelet[2506]: E0312 01:40:49.290870 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:40:49.290903 kubelet[2506]: W0312 01:40:49.290902 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:40:49.290903 kubelet[2506]: E0312 01:40:49.290915 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 01:40:49.291245 kubelet[2506]: E0312 01:40:49.291216 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:40:49.291245 kubelet[2506]: W0312 01:40:49.291244 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:40:49.291309 kubelet[2506]: E0312 01:40:49.291253 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:40:49.291665 kubelet[2506]: E0312 01:40:49.291635 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:40:49.291811 kubelet[2506]: W0312 01:40:49.291667 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:40:49.291811 kubelet[2506]: E0312 01:40:49.291678 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 01:40:49.292149 kubelet[2506]: E0312 01:40:49.292123 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:40:49.292149 kubelet[2506]: W0312 01:40:49.292148 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:40:49.292211 kubelet[2506]: E0312 01:40:49.292158 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:40:49.292516 kubelet[2506]: E0312 01:40:49.292466 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:40:49.292516 kubelet[2506]: W0312 01:40:49.292494 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:40:49.292516 kubelet[2506]: E0312 01:40:49.292502 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 01:40:49.293005 kubelet[2506]: E0312 01:40:49.292961 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:40:49.293070 kubelet[2506]: W0312 01:40:49.292990 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:40:49.293070 kubelet[2506]: E0312 01:40:49.293067 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:40:49.293414 kubelet[2506]: E0312 01:40:49.293367 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:40:49.293414 kubelet[2506]: W0312 01:40:49.293394 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:40:49.293414 kubelet[2506]: E0312 01:40:49.293402 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 01:40:49.293797 kubelet[2506]: E0312 01:40:49.293770 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:40:49.293797 kubelet[2506]: W0312 01:40:49.293794 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:40:49.293864 kubelet[2506]: E0312 01:40:49.293801 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:40:49.294178 kubelet[2506]: E0312 01:40:49.294136 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:40:49.294178 kubelet[2506]: W0312 01:40:49.294163 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:40:49.294178 kubelet[2506]: E0312 01:40:49.294171 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 01:40:49.294508 kubelet[2506]: E0312 01:40:49.294465 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:40:49.294508 kubelet[2506]: W0312 01:40:49.294491 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:40:49.294508 kubelet[2506]: E0312 01:40:49.294499 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:40:49.295075 kubelet[2506]: E0312 01:40:49.294958 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:40:49.295075 kubelet[2506]: W0312 01:40:49.294986 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:40:49.295075 kubelet[2506]: E0312 01:40:49.294994 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 01:40:49.295381 kubelet[2506]: E0312 01:40:49.295342 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:40:49.295381 kubelet[2506]: W0312 01:40:49.295374 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:40:49.295381 kubelet[2506]: E0312 01:40:49.295386 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:40:49.296210 kubelet[2506]: E0312 01:40:49.296071 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:40:49.296210 kubelet[2506]: W0312 01:40:49.296088 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:40:49.296210 kubelet[2506]: E0312 01:40:49.296098 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 01:40:49.296662 kubelet[2506]: E0312 01:40:49.296604 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:40:49.296662 kubelet[2506]: W0312 01:40:49.296637 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:40:49.296662 kubelet[2506]: E0312 01:40:49.296647 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:40:49.297146 kubelet[2506]: E0312 01:40:49.296966 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:40:49.297146 kubelet[2506]: W0312 01:40:49.296980 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:40:49.297146 kubelet[2506]: E0312 01:40:49.296991 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 01:40:49.298226 kubelet[2506]: E0312 01:40:49.297390 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:40:49.298226 kubelet[2506]: W0312 01:40:49.297422 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:40:49.298226 kubelet[2506]: E0312 01:40:49.297431 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:40:49.308304 kubelet[2506]: E0312 01:40:49.308232 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:40:49.308304 kubelet[2506]: W0312 01:40:49.308266 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:40:49.308304 kubelet[2506]: E0312 01:40:49.308282 2506 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 01:40:49.352897 kubelet[2506]: E0312 01:40:49.352821 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:40:49.354041 containerd[1452]: time="2026-03-12T01:40:49.353834132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7c578c7cbf-429kd,Uid:a78a8efa-c70e-4a42-ad7e-bc7faf118b36,Namespace:calico-system,Attempt:0,}" Mar 12 01:40:49.358637 containerd[1452]: time="2026-03-12T01:40:49.358378876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6vfkj,Uid:47a99c3d-b896-4bd1-aef2-be60e1cfaff9,Namespace:calico-system,Attempt:0,}" Mar 12 01:40:49.393461 containerd[1452]: time="2026-03-12T01:40:49.393359261Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:40:49.394341 containerd[1452]: time="2026-03-12T01:40:49.394108446Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:40:49.394923 containerd[1452]: time="2026-03-12T01:40:49.394287448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:40:49.394923 containerd[1452]: time="2026-03-12T01:40:49.394750870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:40:49.398310 containerd[1452]: time="2026-03-12T01:40:49.398006733Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:40:49.398310 containerd[1452]: time="2026-03-12T01:40:49.398084056Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:40:49.398310 containerd[1452]: time="2026-03-12T01:40:49.398094867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:40:49.398310 containerd[1452]: time="2026-03-12T01:40:49.398226331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:40:49.420918 systemd[1]: Started cri-containerd-aa0205f3df4a4823b2b0491d73d89c0fbd32d7c186babbfe79df0aa9acd7b427.scope - libcontainer container aa0205f3df4a4823b2b0491d73d89c0fbd32d7c186babbfe79df0aa9acd7b427. Mar 12 01:40:49.424356 systemd[1]: Started cri-containerd-49fadbd463a4343178ee925be0c5c0a273f58d659dbb02bfa8b46d6f0dbb1831.scope - libcontainer container 49fadbd463a4343178ee925be0c5c0a273f58d659dbb02bfa8b46d6f0dbb1831. Mar 12 01:40:49.458485 containerd[1452]: time="2026-03-12T01:40:49.458131087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6vfkj,Uid:47a99c3d-b896-4bd1-aef2-be60e1cfaff9,Namespace:calico-system,Attempt:0,} returns sandbox id \"49fadbd463a4343178ee925be0c5c0a273f58d659dbb02bfa8b46d6f0dbb1831\"" Mar 12 01:40:49.460468 containerd[1452]: time="2026-03-12T01:40:49.460204155Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 12 01:40:49.476998 containerd[1452]: time="2026-03-12T01:40:49.476400428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7c578c7cbf-429kd,Uid:a78a8efa-c70e-4a42-ad7e-bc7faf118b36,Namespace:calico-system,Attempt:0,} returns sandbox id \"aa0205f3df4a4823b2b0491d73d89c0fbd32d7c186babbfe79df0aa9acd7b427\"" Mar 12 01:40:49.478079 kubelet[2506]: E0312 01:40:49.478043 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:40:49.815424 
update_engine[1440]: I20260312 01:40:49.815328 1440 update_attempter.cc:509] Updating boot flags... Mar 12 01:40:49.846799 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (3051) Mar 12 01:40:49.879815 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (3053) Mar 12 01:40:50.074410 containerd[1452]: time="2026-03-12T01:40:50.074227518Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:40:50.075867 containerd[1452]: time="2026-03-12T01:40:50.075805156Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=6186433" Mar 12 01:40:50.077139 containerd[1452]: time="2026-03-12T01:40:50.077061524Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:40:50.080260 containerd[1452]: time="2026-03-12T01:40:50.080167484Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:40:50.081791 containerd[1452]: time="2026-03-12T01:40:50.081762801Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 621.3942ms" Mar 12 01:40:50.081917 containerd[1452]: time="2026-03-12T01:40:50.081847990Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image 
reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Mar 12 01:40:50.084669 containerd[1452]: time="2026-03-12T01:40:50.084419799Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 12 01:40:50.096739 containerd[1452]: time="2026-03-12T01:40:50.096642575Z" level=info msg="CreateContainer within sandbox \"49fadbd463a4343178ee925be0c5c0a273f58d659dbb02bfa8b46d6f0dbb1831\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 12 01:40:50.116601 containerd[1452]: time="2026-03-12T01:40:50.116482339Z" level=info msg="CreateContainer within sandbox \"49fadbd463a4343178ee925be0c5c0a273f58d659dbb02bfa8b46d6f0dbb1831\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f2d589aa611dc2553098f541625149075f3b5f4def73dfc57fec14789b49b741\"" Mar 12 01:40:50.118257 containerd[1452]: time="2026-03-12T01:40:50.117252757Z" level=info msg="StartContainer for \"f2d589aa611dc2553098f541625149075f3b5f4def73dfc57fec14789b49b741\"" Mar 12 01:40:50.152972 systemd[1]: Started cri-containerd-f2d589aa611dc2553098f541625149075f3b5f4def73dfc57fec14789b49b741.scope - libcontainer container f2d589aa611dc2553098f541625149075f3b5f4def73dfc57fec14789b49b741. Mar 12 01:40:50.191382 containerd[1452]: time="2026-03-12T01:40:50.191305383Z" level=info msg="StartContainer for \"f2d589aa611dc2553098f541625149075f3b5f4def73dfc57fec14789b49b741\" returns successfully" Mar 12 01:40:50.211992 systemd[1]: cri-containerd-f2d589aa611dc2553098f541625149075f3b5f4def73dfc57fec14789b49b741.scope: Deactivated successfully. Mar 12 01:40:50.244470 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f2d589aa611dc2553098f541625149075f3b5f4def73dfc57fec14789b49b741-rootfs.mount: Deactivated successfully. 
Mar 12 01:40:50.268191 containerd[1452]: time="2026-03-12T01:40:50.265966938Z" level=info msg="shim disconnected" id=f2d589aa611dc2553098f541625149075f3b5f4def73dfc57fec14789b49b741 namespace=k8s.io Mar 12 01:40:50.268191 containerd[1452]: time="2026-03-12T01:40:50.268134332Z" level=warning msg="cleaning up after shim disconnected" id=f2d589aa611dc2553098f541625149075f3b5f4def73dfc57fec14789b49b741 namespace=k8s.io Mar 12 01:40:50.268191 containerd[1452]: time="2026-03-12T01:40:50.268145804Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 12 01:40:50.694827 kubelet[2506]: E0312 01:40:50.694677 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p5zx6" podUID="82884de9-9bde-4318-9c0a-85cc3f92ad06" Mar 12 01:40:51.331173 containerd[1452]: time="2026-03-12T01:40:51.331090058Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:40:51.333006 containerd[1452]: time="2026-03-12T01:40:51.332966539Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=34551413" Mar 12 01:40:51.334686 containerd[1452]: time="2026-03-12T01:40:51.334301067Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:40:51.337649 containerd[1452]: time="2026-03-12T01:40:51.337595002Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:40:51.338666 containerd[1452]: time="2026-03-12T01:40:51.338600000Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 1.254144745s" Mar 12 01:40:51.338795 containerd[1452]: time="2026-03-12T01:40:51.338760039Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Mar 12 01:40:51.340023 containerd[1452]: time="2026-03-12T01:40:51.339846995Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 12 01:40:51.353131 containerd[1452]: time="2026-03-12T01:40:51.353082701Z" level=info msg="CreateContainer within sandbox \"aa0205f3df4a4823b2b0491d73d89c0fbd32d7c186babbfe79df0aa9acd7b427\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 12 01:40:51.372882 containerd[1452]: time="2026-03-12T01:40:51.372836552Z" level=info msg="CreateContainer within sandbox \"aa0205f3df4a4823b2b0491d73d89c0fbd32d7c186babbfe79df0aa9acd7b427\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9f36cbad5896bfad38826f40d250827ac972ec284ba141e50f8788b68df2a181\"" Mar 12 01:40:51.373804 containerd[1452]: time="2026-03-12T01:40:51.373344805Z" level=info msg="StartContainer for \"9f36cbad5896bfad38826f40d250827ac972ec284ba141e50f8788b68df2a181\"" Mar 12 01:40:51.416944 systemd[1]: Started cri-containerd-9f36cbad5896bfad38826f40d250827ac972ec284ba141e50f8788b68df2a181.scope - libcontainer container 9f36cbad5896bfad38826f40d250827ac972ec284ba141e50f8788b68df2a181. 
Mar 12 01:40:51.464603 containerd[1452]: time="2026-03-12T01:40:51.464470224Z" level=info msg="StartContainer for \"9f36cbad5896bfad38826f40d250827ac972ec284ba141e50f8788b68df2a181\" returns successfully" Mar 12 01:40:51.766208 kubelet[2506]: E0312 01:40:51.766139 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:40:51.780030 kubelet[2506]: I0312 01:40:51.779876 2506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7c578c7cbf-429kd" podStartSLOduration=1.9192870530000001 podStartE2EDuration="3.779853671s" podCreationTimestamp="2026-03-12 01:40:48 +0000 UTC" firstStartedPulling="2026-03-12 01:40:49.479061795 +0000 UTC m=+17.910517593" lastFinishedPulling="2026-03-12 01:40:51.339628413 +0000 UTC m=+19.771084211" observedRunningTime="2026-03-12 01:40:51.779023845 +0000 UTC m=+20.210479643" watchObservedRunningTime="2026-03-12 01:40:51.779853671 +0000 UTC m=+20.211309468" Mar 12 01:40:52.686884 kubelet[2506]: E0312 01:40:52.686778 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p5zx6" podUID="82884de9-9bde-4318-9c0a-85cc3f92ad06" Mar 12 01:40:52.772040 kubelet[2506]: I0312 01:40:52.771858 2506 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 01:40:52.772868 kubelet[2506]: E0312 01:40:52.772237 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:40:54.687387 kubelet[2506]: E0312 01:40:54.687232 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p5zx6" podUID="82884de9-9bde-4318-9c0a-85cc3f92ad06" Mar 12 01:40:55.801783 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount360166323.mount: Deactivated successfully. Mar 12 01:40:56.056963 containerd[1452]: time="2026-03-12T01:40:56.056775027Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:40:56.058219 containerd[1452]: time="2026-03-12T01:40:56.057958733Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Mar 12 01:40:56.059769 containerd[1452]: time="2026-03-12T01:40:56.059539017Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:40:56.063198 containerd[1452]: time="2026-03-12T01:40:56.063130038Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:40:56.064353 containerd[1452]: time="2026-03-12T01:40:56.064269349Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 4.724373451s" Mar 12 01:40:56.064396 containerd[1452]: time="2026-03-12T01:40:56.064348837Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Mar 12 01:40:56.069685 
containerd[1452]: time="2026-03-12T01:40:56.069537886Z" level=info msg="CreateContainer within sandbox \"49fadbd463a4343178ee925be0c5c0a273f58d659dbb02bfa8b46d6f0dbb1831\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 12 01:40:56.089468 containerd[1452]: time="2026-03-12T01:40:56.089392665Z" level=info msg="CreateContainer within sandbox \"49fadbd463a4343178ee925be0c5c0a273f58d659dbb02bfa8b46d6f0dbb1831\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"de8ddf8d63da4bc06cd0f721dcc66b0503aca0b3392bc95b91a5929202362468\"" Mar 12 01:40:56.090084 containerd[1452]: time="2026-03-12T01:40:56.090017499Z" level=info msg="StartContainer for \"de8ddf8d63da4bc06cd0f721dcc66b0503aca0b3392bc95b91a5929202362468\"" Mar 12 01:40:56.176889 systemd[1]: Started cri-containerd-de8ddf8d63da4bc06cd0f721dcc66b0503aca0b3392bc95b91a5929202362468.scope - libcontainer container de8ddf8d63da4bc06cd0f721dcc66b0503aca0b3392bc95b91a5929202362468. Mar 12 01:40:56.257455 systemd[1]: cri-containerd-de8ddf8d63da4bc06cd0f721dcc66b0503aca0b3392bc95b91a5929202362468.scope: Deactivated successfully. 
Mar 12 01:40:56.264639 containerd[1452]: time="2026-03-12T01:40:56.264497382Z" level=info msg="StartContainer for \"de8ddf8d63da4bc06cd0f721dcc66b0503aca0b3392bc95b91a5929202362468\" returns successfully" Mar 12 01:40:56.312690 containerd[1452]: time="2026-03-12T01:40:56.312276662Z" level=info msg="shim disconnected" id=de8ddf8d63da4bc06cd0f721dcc66b0503aca0b3392bc95b91a5929202362468 namespace=k8s.io Mar 12 01:40:56.312690 containerd[1452]: time="2026-03-12T01:40:56.312337786Z" level=warning msg="cleaning up after shim disconnected" id=de8ddf8d63da4bc06cd0f721dcc66b0503aca0b3392bc95b91a5929202362468 namespace=k8s.io Mar 12 01:40:56.312690 containerd[1452]: time="2026-03-12T01:40:56.312353797Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 12 01:40:56.686773 kubelet[2506]: E0312 01:40:56.686524 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p5zx6" podUID="82884de9-9bde-4318-9c0a-85cc3f92ad06" Mar 12 01:40:56.786922 containerd[1452]: time="2026-03-12T01:40:56.786800541Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 12 01:40:56.801332 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-de8ddf8d63da4bc06cd0f721dcc66b0503aca0b3392bc95b91a5929202362468-rootfs.mount: Deactivated successfully. 
Mar 12 01:40:58.687035 kubelet[2506]: E0312 01:40:58.686671 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p5zx6" podUID="82884de9-9bde-4318-9c0a-85cc3f92ad06" Mar 12 01:40:59.498260 containerd[1452]: time="2026-03-12T01:40:59.498191193Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:40:59.499348 containerd[1452]: time="2026-03-12T01:40:59.499279541Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Mar 12 01:40:59.500345 containerd[1452]: time="2026-03-12T01:40:59.500290796Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:40:59.503003 containerd[1452]: time="2026-03-12T01:40:59.502967866Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:40:59.503800 containerd[1452]: time="2026-03-12T01:40:59.503693401Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 2.716854958s" Mar 12 01:40:59.503800 containerd[1452]: time="2026-03-12T01:40:59.503777287Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference 
\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Mar 12 01:40:59.508812 containerd[1452]: time="2026-03-12T01:40:59.508692408Z" level=info msg="CreateContainer within sandbox \"49fadbd463a4343178ee925be0c5c0a273f58d659dbb02bfa8b46d6f0dbb1831\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 12 01:40:59.526428 containerd[1452]: time="2026-03-12T01:40:59.526362774Z" level=info msg="CreateContainer within sandbox \"49fadbd463a4343178ee925be0c5c0a273f58d659dbb02bfa8b46d6f0dbb1831\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8d6fcd0b603a6c4bc80022c0e5459ffb852e080aa5d197cb1b7bb101528d887f\"" Mar 12 01:40:59.527096 containerd[1452]: time="2026-03-12T01:40:59.527017140Z" level=info msg="StartContainer for \"8d6fcd0b603a6c4bc80022c0e5459ffb852e080aa5d197cb1b7bb101528d887f\"" Mar 12 01:40:59.565933 systemd[1]: run-containerd-runc-k8s.io-8d6fcd0b603a6c4bc80022c0e5459ffb852e080aa5d197cb1b7bb101528d887f-runc.ShCetA.mount: Deactivated successfully. Mar 12 01:40:59.581970 systemd[1]: Started cri-containerd-8d6fcd0b603a6c4bc80022c0e5459ffb852e080aa5d197cb1b7bb101528d887f.scope - libcontainer container 8d6fcd0b603a6c4bc80022c0e5459ffb852e080aa5d197cb1b7bb101528d887f. Mar 12 01:40:59.667206 containerd[1452]: time="2026-03-12T01:40:59.667132321Z" level=info msg="StartContainer for \"8d6fcd0b603a6c4bc80022c0e5459ffb852e080aa5d197cb1b7bb101528d887f\" returns successfully" Mar 12 01:41:00.226877 systemd[1]: cri-containerd-8d6fcd0b603a6c4bc80022c0e5459ffb852e080aa5d197cb1b7bb101528d887f.scope: Deactivated successfully. 
Mar 12 01:41:00.258923 containerd[1452]: time="2026-03-12T01:41:00.258593204Z" level=info msg="shim disconnected" id=8d6fcd0b603a6c4bc80022c0e5459ffb852e080aa5d197cb1b7bb101528d887f namespace=k8s.io
Mar 12 01:41:00.258923 containerd[1452]: time="2026-03-12T01:41:00.258658216Z" level=warning msg="cleaning up after shim disconnected" id=8d6fcd0b603a6c4bc80022c0e5459ffb852e080aa5d197cb1b7bb101528d887f namespace=k8s.io
Mar 12 01:41:00.258923 containerd[1452]: time="2026-03-12T01:41:00.258667924Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 12 01:41:00.313629 kubelet[2506]: I0312 01:41:00.313498 2506 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Mar 12 01:41:00.369889 systemd[1]: Created slice kubepods-besteffort-pod12faea7f_d095_4822_97da_4521ffa445d4.slice - libcontainer container kubepods-besteffort-pod12faea7f_d095_4822_97da_4521ffa445d4.slice.
Mar 12 01:41:00.375454 kubelet[2506]: I0312 01:41:00.374753 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8n55\" (UniqueName: \"kubernetes.io/projected/12faea7f-d095-4822-97da-4521ffa445d4-kube-api-access-k8n55\") pod \"calico-apiserver-59985c4b9f-ccgf4\" (UID: \"12faea7f-d095-4822-97da-4521ffa445d4\") " pod="calico-system/calico-apiserver-59985c4b9f-ccgf4"
Mar 12 01:41:00.375454 kubelet[2506]: I0312 01:41:00.374790 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnhx9\" (UniqueName: \"kubernetes.io/projected/5f897c92-34a7-46c6-8c17-18c4147d9a5f-kube-api-access-gnhx9\") pod \"coredns-674b8bbfcf-srzp7\" (UID: \"5f897c92-34a7-46c6-8c17-18c4147d9a5f\") " pod="kube-system/coredns-674b8bbfcf-srzp7"
Mar 12 01:41:00.375454 kubelet[2506]: I0312 01:41:00.374807 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/12faea7f-d095-4822-97da-4521ffa445d4-calico-apiserver-certs\") pod \"calico-apiserver-59985c4b9f-ccgf4\" (UID: \"12faea7f-d095-4822-97da-4521ffa445d4\") " pod="calico-system/calico-apiserver-59985c4b9f-ccgf4"
Mar 12 01:41:00.375454 kubelet[2506]: I0312 01:41:00.374821 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc846323-168d-4e27-8031-2f3b65d5b16e-config\") pod \"goldmane-5b85766d88-5mp72\" (UID: \"fc846323-168d-4e27-8031-2f3b65d5b16e\") " pod="calico-system/goldmane-5b85766d88-5mp72"
Mar 12 01:41:00.375454 kubelet[2506]: I0312 01:41:00.374836 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mztx4\" (UniqueName: \"kubernetes.io/projected/fc846323-168d-4e27-8031-2f3b65d5b16e-kube-api-access-mztx4\") pod \"goldmane-5b85766d88-5mp72\" (UID: \"fc846323-168d-4e27-8031-2f3b65d5b16e\") " pod="calico-system/goldmane-5b85766d88-5mp72"
Mar 12 01:41:00.375905 kubelet[2506]: I0312 01:41:00.374849 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/67144ade-3bd8-4b48-8f66-b2e43284d340-nginx-config\") pod \"whisker-68964b7876-m247j\" (UID: \"67144ade-3bd8-4b48-8f66-b2e43284d340\") " pod="calico-system/whisker-68964b7876-m247j"
Mar 12 01:41:00.375905 kubelet[2506]: I0312 01:41:00.374869 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5f897c92-34a7-46c6-8c17-18c4147d9a5f-config-volume\") pod \"coredns-674b8bbfcf-srzp7\" (UID: \"5f897c92-34a7-46c6-8c17-18c4147d9a5f\") " pod="kube-system/coredns-674b8bbfcf-srzp7"
Mar 12 01:41:00.375905 kubelet[2506]: I0312 01:41:00.374881 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c3461470-30f4-4645-8713-5c638c154c77-config-volume\") pod \"coredns-674b8bbfcf-ldd4z\" (UID: \"c3461470-30f4-4645-8713-5c638c154c77\") " pod="kube-system/coredns-674b8bbfcf-ldd4z"
Mar 12 01:41:00.375905 kubelet[2506]: I0312 01:41:00.374894 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttk24\" (UniqueName: \"kubernetes.io/projected/c3461470-30f4-4645-8713-5c638c154c77-kube-api-access-ttk24\") pod \"coredns-674b8bbfcf-ldd4z\" (UID: \"c3461470-30f4-4645-8713-5c638c154c77\") " pod="kube-system/coredns-674b8bbfcf-ldd4z"
Mar 12 01:41:00.375905 kubelet[2506]: I0312 01:41:00.374906 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc846323-168d-4e27-8031-2f3b65d5b16e-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-5mp72\" (UID: \"fc846323-168d-4e27-8031-2f3b65d5b16e\") " pod="calico-system/goldmane-5b85766d88-5mp72"
Mar 12 01:41:00.376111 kubelet[2506]: I0312 01:41:00.374922 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/fc846323-168d-4e27-8031-2f3b65d5b16e-goldmane-key-pair\") pod \"goldmane-5b85766d88-5mp72\" (UID: \"fc846323-168d-4e27-8031-2f3b65d5b16e\") " pod="calico-system/goldmane-5b85766d88-5mp72"
Mar 12 01:41:00.376111 kubelet[2506]: I0312 01:41:00.374935 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlftd\" (UniqueName: \"kubernetes.io/projected/67144ade-3bd8-4b48-8f66-b2e43284d340-kube-api-access-tlftd\") pod \"whisker-68964b7876-m247j\" (UID: \"67144ade-3bd8-4b48-8f66-b2e43284d340\") " pod="calico-system/whisker-68964b7876-m247j"
Mar 12 01:41:00.376111 kubelet[2506]: I0312 01:41:00.374950 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/67144ade-3bd8-4b48-8f66-b2e43284d340-whisker-backend-key-pair\") pod \"whisker-68964b7876-m247j\" (UID: \"67144ade-3bd8-4b48-8f66-b2e43284d340\") " pod="calico-system/whisker-68964b7876-m247j"
Mar 12 01:41:00.376111 kubelet[2506]: I0312 01:41:00.374962 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67144ade-3bd8-4b48-8f66-b2e43284d340-whisker-ca-bundle\") pod \"whisker-68964b7876-m247j\" (UID: \"67144ade-3bd8-4b48-8f66-b2e43284d340\") " pod="calico-system/whisker-68964b7876-m247j"
Mar 12 01:41:00.382811 systemd[1]: Created slice kubepods-burstable-pod5f897c92_34a7_46c6_8c17_18c4147d9a5f.slice - libcontainer container kubepods-burstable-pod5f897c92_34a7_46c6_8c17_18c4147d9a5f.slice.
Mar 12 01:41:00.389061 systemd[1]: Created slice kubepods-besteffort-podfc846323_168d_4e27_8031_2f3b65d5b16e.slice - libcontainer container kubepods-besteffort-podfc846323_168d_4e27_8031_2f3b65d5b16e.slice.
Mar 12 01:41:00.395328 systemd[1]: Created slice kubepods-burstable-podc3461470_30f4_4645_8713_5c638c154c77.slice - libcontainer container kubepods-burstable-podc3461470_30f4_4645_8713_5c638c154c77.slice.
Mar 12 01:41:00.402345 systemd[1]: Created slice kubepods-besteffort-pod67144ade_3bd8_4b48_8f66_b2e43284d340.slice - libcontainer container kubepods-besteffort-pod67144ade_3bd8_4b48_8f66_b2e43284d340.slice.
Mar 12 01:41:00.409425 systemd[1]: Created slice kubepods-besteffort-pod0c627ae3_0e93_40e3_80e0_9ad9ac70f824.slice - libcontainer container kubepods-besteffort-pod0c627ae3_0e93_40e3_80e0_9ad9ac70f824.slice.
Mar 12 01:41:00.416624 systemd[1]: Created slice kubepods-besteffort-pode4a33aa0_00da_4a2a_8056_f0a9ced024aa.slice - libcontainer container kubepods-besteffort-pode4a33aa0_00da_4a2a_8056_f0a9ced024aa.slice.
Mar 12 01:41:00.476095 kubelet[2506]: I0312 01:41:00.476008 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btqx6\" (UniqueName: \"kubernetes.io/projected/e4a33aa0-00da-4a2a-8056-f0a9ced024aa-kube-api-access-btqx6\") pod \"calico-kube-controllers-549d945fd5-vwvh9\" (UID: \"e4a33aa0-00da-4a2a-8056-f0a9ced024aa\") " pod="calico-system/calico-kube-controllers-549d945fd5-vwvh9"
Mar 12 01:41:00.476095 kubelet[2506]: I0312 01:41:00.476058 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0c627ae3-0e93-40e3-80e0-9ad9ac70f824-calico-apiserver-certs\") pod \"calico-apiserver-59985c4b9f-jlw2l\" (UID: \"0c627ae3-0e93-40e3-80e0-9ad9ac70f824\") " pod="calico-system/calico-apiserver-59985c4b9f-jlw2l"
Mar 12 01:41:00.476247 kubelet[2506]: I0312 01:41:00.476119 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4a33aa0-00da-4a2a-8056-f0a9ced024aa-tigera-ca-bundle\") pod \"calico-kube-controllers-549d945fd5-vwvh9\" (UID: \"e4a33aa0-00da-4a2a-8056-f0a9ced024aa\") " pod="calico-system/calico-kube-controllers-549d945fd5-vwvh9"
Mar 12 01:41:00.476247 kubelet[2506]: I0312 01:41:00.476224 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bntkl\" (UniqueName: \"kubernetes.io/projected/0c627ae3-0e93-40e3-80e0-9ad9ac70f824-kube-api-access-bntkl\") pod \"calico-apiserver-59985c4b9f-jlw2l\" (UID: \"0c627ae3-0e93-40e3-80e0-9ad9ac70f824\") " pod="calico-system/calico-apiserver-59985c4b9f-jlw2l"
Mar 12 01:41:00.533128 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8d6fcd0b603a6c4bc80022c0e5459ffb852e080aa5d197cb1b7bb101528d887f-rootfs.mount: Deactivated successfully.
Mar 12 01:41:00.677506 containerd[1452]: time="2026-03-12T01:41:00.677324957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59985c4b9f-ccgf4,Uid:12faea7f-d095-4822-97da-4521ffa445d4,Namespace:calico-system,Attempt:0,}"
Mar 12 01:41:00.686207 kubelet[2506]: E0312 01:41:00.686073 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:41:00.686834 containerd[1452]: time="2026-03-12T01:41:00.686678068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-srzp7,Uid:5f897c92-34a7-46c6-8c17-18c4147d9a5f,Namespace:kube-system,Attempt:0,}"
Mar 12 01:41:00.692698 containerd[1452]: time="2026-03-12T01:41:00.692675763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-5mp72,Uid:fc846323-168d-4e27-8031-2f3b65d5b16e,Namespace:calico-system,Attempt:0,}"
Mar 12 01:41:00.694042 systemd[1]: Created slice kubepods-besteffort-pod82884de9_9bde_4318_9c0a_85cc3f92ad06.slice - libcontainer container kubepods-besteffort-pod82884de9_9bde_4318_9c0a_85cc3f92ad06.slice.
Mar 12 01:41:00.696485 containerd[1452]: time="2026-03-12T01:41:00.696444810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-p5zx6,Uid:82884de9-9bde-4318-9c0a-85cc3f92ad06,Namespace:calico-system,Attempt:0,}"
Mar 12 01:41:00.698075 kubelet[2506]: E0312 01:41:00.698022 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:41:00.698377 containerd[1452]: time="2026-03-12T01:41:00.698339407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ldd4z,Uid:c3461470-30f4-4645-8713-5c638c154c77,Namespace:kube-system,Attempt:0,}"
Mar 12 01:41:00.707378 containerd[1452]: time="2026-03-12T01:41:00.707241135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-68964b7876-m247j,Uid:67144ade-3bd8-4b48-8f66-b2e43284d340,Namespace:calico-system,Attempt:0,}"
Mar 12 01:41:00.714476 containerd[1452]: time="2026-03-12T01:41:00.714321540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59985c4b9f-jlw2l,Uid:0c627ae3-0e93-40e3-80e0-9ad9ac70f824,Namespace:calico-system,Attempt:0,}"
Mar 12 01:41:00.720132 containerd[1452]: time="2026-03-12T01:41:00.720090855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-549d945fd5-vwvh9,Uid:e4a33aa0-00da-4a2a-8056-f0a9ced024aa,Namespace:calico-system,Attempt:0,}"
Mar 12 01:41:00.830070 containerd[1452]: time="2026-03-12T01:41:00.829454247Z" level=info msg="CreateContainer within sandbox \"49fadbd463a4343178ee925be0c5c0a273f58d659dbb02bfa8b46d6f0dbb1831\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Mar 12 01:41:00.953401 containerd[1452]: time="2026-03-12T01:41:00.953256500Z" level=info msg="CreateContainer within sandbox \"49fadbd463a4343178ee925be0c5c0a273f58d659dbb02bfa8b46d6f0dbb1831\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a686f363de5165069e8408c43ea3a1309da96025362a74a0d990847e66495014\""
Mar 12 01:41:00.967802 containerd[1452]: time="2026-03-12T01:41:00.967665466Z" level=info msg="StartContainer for \"a686f363de5165069e8408c43ea3a1309da96025362a74a0d990847e66495014\""
Mar 12 01:41:01.052938 systemd[1]: Started cri-containerd-a686f363de5165069e8408c43ea3a1309da96025362a74a0d990847e66495014.scope - libcontainer container a686f363de5165069e8408c43ea3a1309da96025362a74a0d990847e66495014.
Mar 12 01:41:01.059952 containerd[1452]: time="2026-03-12T01:41:01.059794108Z" level=error msg="Failed to destroy network for sandbox \"3c7864fae11f47c112cd1c189075a93b1b7ebedb5b00b7a2928b092d3f89c7b2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:41:01.061677 containerd[1452]: time="2026-03-12T01:41:01.061528501Z" level=error msg="encountered an error cleaning up failed sandbox \"3c7864fae11f47c112cd1c189075a93b1b7ebedb5b00b7a2928b092d3f89c7b2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:41:01.061677 containerd[1452]: time="2026-03-12T01:41:01.061641131Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-p5zx6,Uid:82884de9-9bde-4318-9c0a-85cc3f92ad06,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3c7864fae11f47c112cd1c189075a93b1b7ebedb5b00b7a2928b092d3f89c7b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:41:01.075100 containerd[1452]: time="2026-03-12T01:41:01.074362092Z" level=error msg="Failed to destroy network for sandbox \"117f3bb79665e318a865c3386f7dc05e1b14f6a0802824e45598f20f591eb264\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:41:01.075100 containerd[1452]: time="2026-03-12T01:41:01.075024747Z" level=error msg="encountered an error cleaning up failed sandbox \"117f3bb79665e318a865c3386f7dc05e1b14f6a0802824e45598f20f591eb264\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:41:01.075266 containerd[1452]: time="2026-03-12T01:41:01.075068368Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59985c4b9f-ccgf4,Uid:12faea7f-d095-4822-97da-4521ffa445d4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"117f3bb79665e318a865c3386f7dc05e1b14f6a0802824e45598f20f591eb264\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:41:01.086938 kubelet[2506]: E0312 01:41:01.085408 2506 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"117f3bb79665e318a865c3386f7dc05e1b14f6a0802824e45598f20f591eb264\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:41:01.086938 kubelet[2506]: E0312 01:41:01.085528 2506 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"117f3bb79665e318a865c3386f7dc05e1b14f6a0802824e45598f20f591eb264\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-59985c4b9f-ccgf4"
Mar 12 01:41:01.086938 kubelet[2506]: E0312 01:41:01.085581 2506 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"117f3bb79665e318a865c3386f7dc05e1b14f6a0802824e45598f20f591eb264\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-59985c4b9f-ccgf4"
Mar 12 01:41:01.087117 kubelet[2506]: E0312 01:41:01.085625 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-59985c4b9f-ccgf4_calico-system(12faea7f-d095-4822-97da-4521ffa445d4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-59985c4b9f-ccgf4_calico-system(12faea7f-d095-4822-97da-4521ffa445d4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"117f3bb79665e318a865c3386f7dc05e1b14f6a0802824e45598f20f591eb264\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-59985c4b9f-ccgf4" podUID="12faea7f-d095-4822-97da-4521ffa445d4"
Mar 12 01:41:01.087117 kubelet[2506]: E0312 01:41:01.085947 2506 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c7864fae11f47c112cd1c189075a93b1b7ebedb5b00b7a2928b092d3f89c7b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:41:01.087117 kubelet[2506]: E0312 01:41:01.085971 2506 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c7864fae11f47c112cd1c189075a93b1b7ebedb5b00b7a2928b092d3f89c7b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-p5zx6"
Mar 12 01:41:01.089451 kubelet[2506]: E0312 01:41:01.085986 2506 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c7864fae11f47c112cd1c189075a93b1b7ebedb5b00b7a2928b092d3f89c7b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-p5zx6"
Mar 12 01:41:01.089451 kubelet[2506]: E0312 01:41:01.086015 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-p5zx6_calico-system(82884de9-9bde-4318-9c0a-85cc3f92ad06)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-p5zx6_calico-system(82884de9-9bde-4318-9c0a-85cc3f92ad06)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3c7864fae11f47c112cd1c189075a93b1b7ebedb5b00b7a2928b092d3f89c7b2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-p5zx6" podUID="82884de9-9bde-4318-9c0a-85cc3f92ad06"
Mar 12 01:41:01.100576 containerd[1452]: time="2026-03-12T01:41:01.100504278Z" level=error msg="Failed to destroy network for sandbox \"39e10cc15461df6ca4f9e7760fef493733eb36421e44e271084242ddb0fb3f32\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:41:01.101237 containerd[1452]: time="2026-03-12T01:41:01.101156686Z" level=error msg="Failed to destroy network for sandbox \"f8fc731459665a6388119fc4065ace137ed2b5327c008a3dd262efae245622c0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:41:01.105061 containerd[1452]: time="2026-03-12T01:41:01.105031278Z" level=error msg="encountered an error cleaning up failed sandbox \"39e10cc15461df6ca4f9e7760fef493733eb36421e44e271084242ddb0fb3f32\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:41:01.106377 containerd[1452]: time="2026-03-12T01:41:01.105677783Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-5mp72,Uid:fc846323-168d-4e27-8031-2f3b65d5b16e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"39e10cc15461df6ca4f9e7760fef493733eb36421e44e271084242ddb0fb3f32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:41:01.107369 containerd[1452]: time="2026-03-12T01:41:01.106425138Z" level=error msg="encountered an error cleaning up failed sandbox \"f8fc731459665a6388119fc4065ace137ed2b5327c008a3dd262efae245622c0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:41:01.107631 containerd[1452]: time="2026-03-12T01:41:01.107506082Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ldd4z,Uid:c3461470-30f4-4645-8713-5c638c154c77,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f8fc731459665a6388119fc4065ace137ed2b5327c008a3dd262efae245622c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:41:01.108906 kubelet[2506]: E0312 01:41:01.108112 2506 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8fc731459665a6388119fc4065ace137ed2b5327c008a3dd262efae245622c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:41:01.108906 kubelet[2506]: E0312 01:41:01.108171 2506 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8fc731459665a6388119fc4065ace137ed2b5327c008a3dd262efae245622c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-ldd4z"
Mar 12 01:41:01.108906 kubelet[2506]: E0312 01:41:01.108190 2506 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8fc731459665a6388119fc4065ace137ed2b5327c008a3dd262efae245622c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-ldd4z"
Mar 12 01:41:01.109020 kubelet[2506]: E0312 01:41:01.108233 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-ldd4z_kube-system(c3461470-30f4-4645-8713-5c638c154c77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-ldd4z_kube-system(c3461470-30f4-4645-8713-5c638c154c77)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f8fc731459665a6388119fc4065ace137ed2b5327c008a3dd262efae245622c0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-ldd4z" podUID="c3461470-30f4-4645-8713-5c638c154c77"
Mar 12 01:41:01.109020 kubelet[2506]: E0312 01:41:01.108270 2506 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39e10cc15461df6ca4f9e7760fef493733eb36421e44e271084242ddb0fb3f32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:41:01.109020 kubelet[2506]: E0312 01:41:01.108284 2506 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39e10cc15461df6ca4f9e7760fef493733eb36421e44e271084242ddb0fb3f32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-5mp72"
Mar 12 01:41:01.109144 kubelet[2506]: E0312 01:41:01.108296 2506 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39e10cc15461df6ca4f9e7760fef493733eb36421e44e271084242ddb0fb3f32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-5mp72"
Mar 12 01:41:01.109144 kubelet[2506]: E0312 01:41:01.108318 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-5mp72_calico-system(fc846323-168d-4e27-8031-2f3b65d5b16e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-5mp72_calico-system(fc846323-168d-4e27-8031-2f3b65d5b16e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"39e10cc15461df6ca4f9e7760fef493733eb36421e44e271084242ddb0fb3f32\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-5mp72" podUID="fc846323-168d-4e27-8031-2f3b65d5b16e"
Mar 12 01:41:01.115763 containerd[1452]: time="2026-03-12T01:41:01.115647312Z" level=error msg="Failed to destroy network for sandbox \"c931bb5c9f1a5d6f2bfe6415538e0ef3c571a1c60a95b807a258f41163b29c47\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:41:01.117310 containerd[1452]: time="2026-03-12T01:41:01.117125969Z" level=error msg="encountered an error cleaning up failed sandbox \"c931bb5c9f1a5d6f2bfe6415538e0ef3c571a1c60a95b807a258f41163b29c47\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:41:01.117310 containerd[1452]: time="2026-03-12T01:41:01.117202050Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-549d945fd5-vwvh9,Uid:e4a33aa0-00da-4a2a-8056-f0a9ced024aa,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c931bb5c9f1a5d6f2bfe6415538e0ef3c571a1c60a95b807a258f41163b29c47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:41:01.117499 kubelet[2506]: E0312 01:41:01.117444 2506 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c931bb5c9f1a5d6f2bfe6415538e0ef3c571a1c60a95b807a258f41163b29c47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:41:01.117499 kubelet[2506]: E0312 01:41:01.117494 2506 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c931bb5c9f1a5d6f2bfe6415538e0ef3c571a1c60a95b807a258f41163b29c47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-549d945fd5-vwvh9"
Mar 12 01:41:01.117597 kubelet[2506]: E0312 01:41:01.117513 2506 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c931bb5c9f1a5d6f2bfe6415538e0ef3c571a1c60a95b807a258f41163b29c47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-549d945fd5-vwvh9"
Mar 12 01:41:01.117627 kubelet[2506]: E0312 01:41:01.117588 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-549d945fd5-vwvh9_calico-system(e4a33aa0-00da-4a2a-8056-f0a9ced024aa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-549d945fd5-vwvh9_calico-system(e4a33aa0-00da-4a2a-8056-f0a9ced024aa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c931bb5c9f1a5d6f2bfe6415538e0ef3c571a1c60a95b807a258f41163b29c47\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-549d945fd5-vwvh9" podUID="e4a33aa0-00da-4a2a-8056-f0a9ced024aa"
Mar 12 01:41:01.121086 containerd[1452]: time="2026-03-12T01:41:01.120585258Z" level=error msg="Failed to destroy network for sandbox \"0444e8ceffa8e51c7adb9c1911f29707fcdd1e08eb7c2f0be0676a68f41e6ae0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:41:01.121673 containerd[1452]: time="2026-03-12T01:41:01.121466590Z" level=error msg="encountered an error cleaning up failed sandbox \"0444e8ceffa8e51c7adb9c1911f29707fcdd1e08eb7c2f0be0676a68f41e6ae0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:41:01.121673 containerd[1452]: time="2026-03-12T01:41:01.121515802Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59985c4b9f-jlw2l,Uid:0c627ae3-0e93-40e3-80e0-9ad9ac70f824,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0444e8ceffa8e51c7adb9c1911f29707fcdd1e08eb7c2f0be0676a68f41e6ae0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:41:01.122134 kubelet[2506]: E0312 01:41:01.121781 2506 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0444e8ceffa8e51c7adb9c1911f29707fcdd1e08eb7c2f0be0676a68f41e6ae0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:41:01.122134 kubelet[2506]: E0312 01:41:01.121830 2506 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0444e8ceffa8e51c7adb9c1911f29707fcdd1e08eb7c2f0be0676a68f41e6ae0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-59985c4b9f-jlw2l"
Mar 12 01:41:01.122134 kubelet[2506]: E0312 01:41:01.121848 2506 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0444e8ceffa8e51c7adb9c1911f29707fcdd1e08eb7c2f0be0676a68f41e6ae0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-59985c4b9f-jlw2l"
Mar 12 01:41:01.122212 kubelet[2506]: E0312 01:41:01.121883 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-59985c4b9f-jlw2l_calico-system(0c627ae3-0e93-40e3-80e0-9ad9ac70f824)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-59985c4b9f-jlw2l_calico-system(0c627ae3-0e93-40e3-80e0-9ad9ac70f824)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0444e8ceffa8e51c7adb9c1911f29707fcdd1e08eb7c2f0be0676a68f41e6ae0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-59985c4b9f-jlw2l" podUID="0c627ae3-0e93-40e3-80e0-9ad9ac70f824"
Mar 12 01:41:01.126321 containerd[1452]: time="2026-03-12T01:41:01.126233871Z" level=error msg="Failed to destroy network for sandbox \"c475f71ada6d675e09a2dbc995f30c2825660c3ef361b65494d3fdcbe4ab63ca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:41:01.129065 containerd[1452]: time="2026-03-12T01:41:01.128977978Z" level=error msg="encountered an error cleaning up failed sandbox \"c475f71ada6d675e09a2dbc995f30c2825660c3ef361b65494d3fdcbe4ab63ca\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:41:01.129832 containerd[1452]: time="2026-03-12T01:41:01.129129109Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-68964b7876-m247j,Uid:67144ade-3bd8-4b48-8f66-b2e43284d340,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c475f71ada6d675e09a2dbc995f30c2825660c3ef361b65494d3fdcbe4ab63ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:41:01.129955 kubelet[2506]: E0312 01:41:01.129887 2506 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c475f71ada6d675e09a2dbc995f30c2825660c3ef361b65494d3fdcbe4ab63ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:41:01.129955 kubelet[2506]: E0312 01:41:01.129924 2506 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c475f71ada6d675e09a2dbc995f30c2825660c3ef361b65494d3fdcbe4ab63ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-68964b7876-m247j"
Mar 12 01:41:01.129955 kubelet[2506]: E0312 01:41:01.129944 2506 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c475f71ada6d675e09a2dbc995f30c2825660c3ef361b65494d3fdcbe4ab63ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-68964b7876-m247j"
Mar 12 01:41:01.130031 kubelet[2506]: E0312 01:41:01.129979 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-68964b7876-m247j_calico-system(67144ade-3bd8-4b48-8f66-b2e43284d340)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-68964b7876-m247j_calico-system(67144ade-3bd8-4b48-8f66-b2e43284d340)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c475f71ada6d675e09a2dbc995f30c2825660c3ef361b65494d3fdcbe4ab63ca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is
running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-68964b7876-m247j" podUID="67144ade-3bd8-4b48-8f66-b2e43284d340" Mar 12 01:41:01.136029 containerd[1452]: time="2026-03-12T01:41:01.135849323Z" level=info msg="StartContainer for \"a686f363de5165069e8408c43ea3a1309da96025362a74a0d990847e66495014\" returns successfully" Mar 12 01:41:01.139115 containerd[1452]: time="2026-03-12T01:41:01.139074045Z" level=error msg="Failed to destroy network for sandbox \"311a472fcf1235aa3a42ce2f63599d1d26c057fa4a92ce3fea2f3215975eb130\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:41:01.140666 containerd[1452]: time="2026-03-12T01:41:01.140249006Z" level=error msg="encountered an error cleaning up failed sandbox \"311a472fcf1235aa3a42ce2f63599d1d26c057fa4a92ce3fea2f3215975eb130\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:41:01.140666 containerd[1452]: time="2026-03-12T01:41:01.140345546Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-srzp7,Uid:5f897c92-34a7-46c6-8c17-18c4147d9a5f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"311a472fcf1235aa3a42ce2f63599d1d26c057fa4a92ce3fea2f3215975eb130\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:41:01.140928 kubelet[2506]: E0312 01:41:01.140823 2506 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"311a472fcf1235aa3a42ce2f63599d1d26c057fa4a92ce3fea2f3215975eb130\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:41:01.140928 kubelet[2506]: E0312 01:41:01.140887 2506 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"311a472fcf1235aa3a42ce2f63599d1d26c057fa4a92ce3fea2f3215975eb130\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-srzp7" Mar 12 01:41:01.140928 kubelet[2506]: E0312 01:41:01.140907 2506 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"311a472fcf1235aa3a42ce2f63599d1d26c057fa4a92ce3fea2f3215975eb130\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-srzp7" Mar 12 01:41:01.141156 kubelet[2506]: E0312 01:41:01.141067 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-srzp7_kube-system(5f897c92-34a7-46c6-8c17-18c4147d9a5f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-srzp7_kube-system(5f897c92-34a7-46c6-8c17-18c4147d9a5f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"311a472fcf1235aa3a42ce2f63599d1d26c057fa4a92ce3fea2f3215975eb130\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-srzp7" podUID="5f897c92-34a7-46c6-8c17-18c4147d9a5f" Mar 12 01:41:01.525959 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-117f3bb79665e318a865c3386f7dc05e1b14f6a0802824e45598f20f591eb264-shm.mount: Deactivated successfully. Mar 12 01:41:01.807289 kubelet[2506]: I0312 01:41:01.806931 2506 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0444e8ceffa8e51c7adb9c1911f29707fcdd1e08eb7c2f0be0676a68f41e6ae0" Mar 12 01:41:01.812083 kubelet[2506]: I0312 01:41:01.811235 2506 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c475f71ada6d675e09a2dbc995f30c2825660c3ef361b65494d3fdcbe4ab63ca" Mar 12 01:41:01.820326 kubelet[2506]: I0312 01:41:01.820299 2506 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39e10cc15461df6ca4f9e7760fef493733eb36421e44e271084242ddb0fb3f32" Mar 12 01:41:01.841333 containerd[1452]: time="2026-03-12T01:41:01.841250696Z" level=info msg="StopPodSandbox for \"0444e8ceffa8e51c7adb9c1911f29707fcdd1e08eb7c2f0be0676a68f41e6ae0\"" Mar 12 01:41:01.842096 containerd[1452]: time="2026-03-12T01:41:01.842069115Z" level=info msg="StopPodSandbox for \"39e10cc15461df6ca4f9e7760fef493733eb36421e44e271084242ddb0fb3f32\"" Mar 12 01:41:01.843853 containerd[1452]: time="2026-03-12T01:41:01.843132525Z" level=info msg="Ensure that sandbox 39e10cc15461df6ca4f9e7760fef493733eb36421e44e271084242ddb0fb3f32 in task-service has been cleanup successfully" Mar 12 01:41:01.843853 containerd[1452]: time="2026-03-12T01:41:01.842813820Z" level=info msg="Ensure that sandbox 0444e8ceffa8e51c7adb9c1911f29707fcdd1e08eb7c2f0be0676a68f41e6ae0 in task-service has been cleanup successfully" Mar 12 01:41:01.843925 kubelet[2506]: I0312 01:41:01.843150 2506 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c931bb5c9f1a5d6f2bfe6415538e0ef3c571a1c60a95b807a258f41163b29c47" Mar 12 01:41:01.845076 containerd[1452]: time="2026-03-12T01:41:01.845051125Z" level=info msg="StopPodSandbox for 
\"c475f71ada6d675e09a2dbc995f30c2825660c3ef361b65494d3fdcbe4ab63ca\"" Mar 12 01:41:01.845289 containerd[1452]: time="2026-03-12T01:41:01.845271415Z" level=info msg="Ensure that sandbox c475f71ada6d675e09a2dbc995f30c2825660c3ef361b65494d3fdcbe4ab63ca in task-service has been cleanup successfully" Mar 12 01:41:01.850532 containerd[1452]: time="2026-03-12T01:41:01.850466701Z" level=info msg="StopPodSandbox for \"c931bb5c9f1a5d6f2bfe6415538e0ef3c571a1c60a95b807a258f41163b29c47\"" Mar 12 01:41:01.850760 containerd[1452]: time="2026-03-12T01:41:01.850648761Z" level=info msg="Ensure that sandbox c931bb5c9f1a5d6f2bfe6415538e0ef3c571a1c60a95b807a258f41163b29c47 in task-service has been cleanup successfully" Mar 12 01:41:01.858090 kubelet[2506]: I0312 01:41:01.858033 2506 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c7864fae11f47c112cd1c189075a93b1b7ebedb5b00b7a2928b092d3f89c7b2" Mar 12 01:41:01.862010 containerd[1452]: time="2026-03-12T01:41:01.861529393Z" level=info msg="StopPodSandbox for \"3c7864fae11f47c112cd1c189075a93b1b7ebedb5b00b7a2928b092d3f89c7b2\"" Mar 12 01:41:01.862010 containerd[1452]: time="2026-03-12T01:41:01.861792453Z" level=info msg="Ensure that sandbox 3c7864fae11f47c112cd1c189075a93b1b7ebedb5b00b7a2928b092d3f89c7b2 in task-service has been cleanup successfully" Mar 12 01:41:01.866500 kubelet[2506]: I0312 01:41:01.866472 2506 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="311a472fcf1235aa3a42ce2f63599d1d26c057fa4a92ce3fea2f3215975eb130" Mar 12 01:41:01.869947 containerd[1452]: time="2026-03-12T01:41:01.869685359Z" level=info msg="StopPodSandbox for \"311a472fcf1235aa3a42ce2f63599d1d26c057fa4a92ce3fea2f3215975eb130\"" Mar 12 01:41:01.870032 containerd[1452]: time="2026-03-12T01:41:01.869954601Z" level=info msg="Ensure that sandbox 311a472fcf1235aa3a42ce2f63599d1d26c057fa4a92ce3fea2f3215975eb130 in task-service has been cleanup successfully" Mar 12 01:41:01.871584 kubelet[2506]: 
I0312 01:41:01.871342 2506 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="117f3bb79665e318a865c3386f7dc05e1b14f6a0802824e45598f20f591eb264" Mar 12 01:41:01.872937 containerd[1452]: time="2026-03-12T01:41:01.872618339Z" level=info msg="StopPodSandbox for \"117f3bb79665e318a865c3386f7dc05e1b14f6a0802824e45598f20f591eb264\"" Mar 12 01:41:01.872937 containerd[1452]: time="2026-03-12T01:41:01.872912227Z" level=info msg="Ensure that sandbox 117f3bb79665e318a865c3386f7dc05e1b14f6a0802824e45598f20f591eb264 in task-service has been cleanup successfully" Mar 12 01:41:01.881472 kubelet[2506]: I0312 01:41:01.881440 2506 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8fc731459665a6388119fc4065ace137ed2b5327c008a3dd262efae245622c0" Mar 12 01:41:01.888662 containerd[1452]: time="2026-03-12T01:41:01.888580796Z" level=info msg="StopPodSandbox for \"f8fc731459665a6388119fc4065ace137ed2b5327c008a3dd262efae245622c0\"" Mar 12 01:41:01.897155 containerd[1452]: time="2026-03-12T01:41:01.896972452Z" level=info msg="Ensure that sandbox f8fc731459665a6388119fc4065ace137ed2b5327c008a3dd262efae245622c0 in task-service has been cleanup successfully" Mar 12 01:41:01.995317 kubelet[2506]: I0312 01:41:01.995201 2506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-6vfkj" podStartSLOduration=2.950570941 podStartE2EDuration="12.995182848s" podCreationTimestamp="2026-03-12 01:40:49 +0000 UTC" firstStartedPulling="2026-03-12 01:40:49.459869712 +0000 UTC m=+17.891325510" lastFinishedPulling="2026-03-12 01:40:59.504481619 +0000 UTC m=+27.935937417" observedRunningTime="2026-03-12 01:41:01.827146852 +0000 UTC m=+30.258602650" watchObservedRunningTime="2026-03-12 01:41:01.995182848 +0000 UTC m=+30.426638647" Mar 12 01:41:02.159454 containerd[1452]: 2026-03-12 01:41:01.993 [INFO][3694] cni-plugin/k8s.go 652: Cleaning up netns 
ContainerID="c475f71ada6d675e09a2dbc995f30c2825660c3ef361b65494d3fdcbe4ab63ca" Mar 12 01:41:02.159454 containerd[1452]: 2026-03-12 01:41:01.993 [INFO][3694] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c475f71ada6d675e09a2dbc995f30c2825660c3ef361b65494d3fdcbe4ab63ca" iface="eth0" netns="/var/run/netns/cni-d237392b-7c7d-072d-6c14-18359552811c" Mar 12 01:41:02.159454 containerd[1452]: 2026-03-12 01:41:01.994 [INFO][3694] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c475f71ada6d675e09a2dbc995f30c2825660c3ef361b65494d3fdcbe4ab63ca" iface="eth0" netns="/var/run/netns/cni-d237392b-7c7d-072d-6c14-18359552811c" Mar 12 01:41:02.159454 containerd[1452]: 2026-03-12 01:41:01.994 [INFO][3694] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c475f71ada6d675e09a2dbc995f30c2825660c3ef361b65494d3fdcbe4ab63ca" iface="eth0" netns="/var/run/netns/cni-d237392b-7c7d-072d-6c14-18359552811c" Mar 12 01:41:02.159454 containerd[1452]: 2026-03-12 01:41:01.995 [INFO][3694] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c475f71ada6d675e09a2dbc995f30c2825660c3ef361b65494d3fdcbe4ab63ca" Mar 12 01:41:02.159454 containerd[1452]: 2026-03-12 01:41:01.995 [INFO][3694] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c475f71ada6d675e09a2dbc995f30c2825660c3ef361b65494d3fdcbe4ab63ca" Mar 12 01:41:02.159454 containerd[1452]: 2026-03-12 01:41:02.105 [INFO][3816] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c475f71ada6d675e09a2dbc995f30c2825660c3ef361b65494d3fdcbe4ab63ca" HandleID="k8s-pod-network.c475f71ada6d675e09a2dbc995f30c2825660c3ef361b65494d3fdcbe4ab63ca" Workload="localhost-k8s-whisker--68964b7876--m247j-eth0" Mar 12 01:41:02.159454 containerd[1452]: 2026-03-12 01:41:02.106 [INFO][3816] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 12 01:41:02.159454 containerd[1452]: 2026-03-12 01:41:02.106 [INFO][3816] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:41:02.159454 containerd[1452]: 2026-03-12 01:41:02.128 [WARNING][3816] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="c475f71ada6d675e09a2dbc995f30c2825660c3ef361b65494d3fdcbe4ab63ca" HandleID="k8s-pod-network.c475f71ada6d675e09a2dbc995f30c2825660c3ef361b65494d3fdcbe4ab63ca" Workload="localhost-k8s-whisker--68964b7876--m247j-eth0" Mar 12 01:41:02.159454 containerd[1452]: 2026-03-12 01:41:02.129 [INFO][3816] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c475f71ada6d675e09a2dbc995f30c2825660c3ef361b65494d3fdcbe4ab63ca" HandleID="k8s-pod-network.c475f71ada6d675e09a2dbc995f30c2825660c3ef361b65494d3fdcbe4ab63ca" Workload="localhost-k8s-whisker--68964b7876--m247j-eth0" Mar 12 01:41:02.159454 containerd[1452]: 2026-03-12 01:41:02.133 [INFO][3816] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:41:02.159454 containerd[1452]: 2026-03-12 01:41:02.151 [INFO][3694] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c475f71ada6d675e09a2dbc995f30c2825660c3ef361b65494d3fdcbe4ab63ca" Mar 12 01:41:02.162425 containerd[1452]: time="2026-03-12T01:41:02.162396839Z" level=info msg="TearDown network for sandbox \"c475f71ada6d675e09a2dbc995f30c2825660c3ef361b65494d3fdcbe4ab63ca\" successfully" Mar 12 01:41:02.162528 containerd[1452]: time="2026-03-12T01:41:02.162511043Z" level=info msg="StopPodSandbox for \"c475f71ada6d675e09a2dbc995f30c2825660c3ef361b65494d3fdcbe4ab63ca\" returns successfully" Mar 12 01:41:02.162994 systemd[1]: run-netns-cni\x2dd237392b\x2d7c7d\x2d072d\x2d6c14\x2d18359552811c.mount: Deactivated successfully. 
Mar 12 01:41:02.168108 containerd[1452]: 2026-03-12 01:41:02.009 [INFO][3717] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="39e10cc15461df6ca4f9e7760fef493733eb36421e44e271084242ddb0fb3f32" Mar 12 01:41:02.168108 containerd[1452]: 2026-03-12 01:41:02.011 [INFO][3717] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="39e10cc15461df6ca4f9e7760fef493733eb36421e44e271084242ddb0fb3f32" iface="eth0" netns="/var/run/netns/cni-a710a7a2-0a28-04d3-8cdb-6ca0af774142" Mar 12 01:41:02.168108 containerd[1452]: 2026-03-12 01:41:02.011 [INFO][3717] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="39e10cc15461df6ca4f9e7760fef493733eb36421e44e271084242ddb0fb3f32" iface="eth0" netns="/var/run/netns/cni-a710a7a2-0a28-04d3-8cdb-6ca0af774142" Mar 12 01:41:02.168108 containerd[1452]: 2026-03-12 01:41:02.025 [INFO][3717] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="39e10cc15461df6ca4f9e7760fef493733eb36421e44e271084242ddb0fb3f32" iface="eth0" netns="/var/run/netns/cni-a710a7a2-0a28-04d3-8cdb-6ca0af774142" Mar 12 01:41:02.168108 containerd[1452]: 2026-03-12 01:41:02.025 [INFO][3717] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="39e10cc15461df6ca4f9e7760fef493733eb36421e44e271084242ddb0fb3f32" Mar 12 01:41:02.168108 containerd[1452]: 2026-03-12 01:41:02.025 [INFO][3717] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="39e10cc15461df6ca4f9e7760fef493733eb36421e44e271084242ddb0fb3f32" Mar 12 01:41:02.168108 containerd[1452]: 2026-03-12 01:41:02.135 [INFO][3836] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="39e10cc15461df6ca4f9e7760fef493733eb36421e44e271084242ddb0fb3f32" HandleID="k8s-pod-network.39e10cc15461df6ca4f9e7760fef493733eb36421e44e271084242ddb0fb3f32" Workload="localhost-k8s-goldmane--5b85766d88--5mp72-eth0" Mar 12 01:41:02.168108 containerd[1452]: 2026-03-12 01:41:02.135 [INFO][3836] 
ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:41:02.168108 containerd[1452]: 2026-03-12 01:41:02.135 [INFO][3836] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:41:02.168108 containerd[1452]: 2026-03-12 01:41:02.143 [WARNING][3836] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="39e10cc15461df6ca4f9e7760fef493733eb36421e44e271084242ddb0fb3f32" HandleID="k8s-pod-network.39e10cc15461df6ca4f9e7760fef493733eb36421e44e271084242ddb0fb3f32" Workload="localhost-k8s-goldmane--5b85766d88--5mp72-eth0" Mar 12 01:41:02.168108 containerd[1452]: 2026-03-12 01:41:02.143 [INFO][3836] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="39e10cc15461df6ca4f9e7760fef493733eb36421e44e271084242ddb0fb3f32" HandleID="k8s-pod-network.39e10cc15461df6ca4f9e7760fef493733eb36421e44e271084242ddb0fb3f32" Workload="localhost-k8s-goldmane--5b85766d88--5mp72-eth0" Mar 12 01:41:02.168108 containerd[1452]: 2026-03-12 01:41:02.145 [INFO][3836] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:41:02.168108 containerd[1452]: 2026-03-12 01:41:02.151 [INFO][3717] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="39e10cc15461df6ca4f9e7760fef493733eb36421e44e271084242ddb0fb3f32" Mar 12 01:41:02.168888 containerd[1452]: time="2026-03-12T01:41:02.168223022Z" level=info msg="TearDown network for sandbox \"39e10cc15461df6ca4f9e7760fef493733eb36421e44e271084242ddb0fb3f32\" successfully" Mar 12 01:41:02.168888 containerd[1452]: time="2026-03-12T01:41:02.168239934Z" level=info msg="StopPodSandbox for \"39e10cc15461df6ca4f9e7760fef493733eb36421e44e271084242ddb0fb3f32\" returns successfully" Mar 12 01:41:02.172264 systemd[1]: run-netns-cni\x2da710a7a2\x2d0a28\x2d04d3\x2d8cdb\x2d6ca0af774142.mount: Deactivated successfully. 
Mar 12 01:41:02.174905 containerd[1452]: time="2026-03-12T01:41:02.174447626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-5mp72,Uid:fc846323-168d-4e27-8031-2f3b65d5b16e,Namespace:calico-system,Attempt:1,}" Mar 12 01:41:02.202920 containerd[1452]: 2026-03-12 01:41:02.009 [INFO][3740] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3c7864fae11f47c112cd1c189075a93b1b7ebedb5b00b7a2928b092d3f89c7b2" Mar 12 01:41:02.202920 containerd[1452]: 2026-03-12 01:41:02.010 [INFO][3740] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3c7864fae11f47c112cd1c189075a93b1b7ebedb5b00b7a2928b092d3f89c7b2" iface="eth0" netns="/var/run/netns/cni-90e661f1-70db-4360-2300-bfa6267d2647" Mar 12 01:41:02.202920 containerd[1452]: 2026-03-12 01:41:02.010 [INFO][3740] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3c7864fae11f47c112cd1c189075a93b1b7ebedb5b00b7a2928b092d3f89c7b2" iface="eth0" netns="/var/run/netns/cni-90e661f1-70db-4360-2300-bfa6267d2647" Mar 12 01:41:02.202920 containerd[1452]: 2026-03-12 01:41:02.011 [INFO][3740] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3c7864fae11f47c112cd1c189075a93b1b7ebedb5b00b7a2928b092d3f89c7b2" iface="eth0" netns="/var/run/netns/cni-90e661f1-70db-4360-2300-bfa6267d2647" Mar 12 01:41:02.202920 containerd[1452]: 2026-03-12 01:41:02.011 [INFO][3740] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3c7864fae11f47c112cd1c189075a93b1b7ebedb5b00b7a2928b092d3f89c7b2" Mar 12 01:41:02.202920 containerd[1452]: 2026-03-12 01:41:02.011 [INFO][3740] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3c7864fae11f47c112cd1c189075a93b1b7ebedb5b00b7a2928b092d3f89c7b2" Mar 12 01:41:02.202920 containerd[1452]: 2026-03-12 01:41:02.123 [INFO][3827] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3c7864fae11f47c112cd1c189075a93b1b7ebedb5b00b7a2928b092d3f89c7b2" HandleID="k8s-pod-network.3c7864fae11f47c112cd1c189075a93b1b7ebedb5b00b7a2928b092d3f89c7b2" Workload="localhost-k8s-csi--node--driver--p5zx6-eth0" Mar 12 01:41:02.202920 containerd[1452]: 2026-03-12 01:41:02.123 [INFO][3827] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:41:02.202920 containerd[1452]: 2026-03-12 01:41:02.146 [INFO][3827] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:41:02.202920 containerd[1452]: 2026-03-12 01:41:02.171 [WARNING][3827] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3c7864fae11f47c112cd1c189075a93b1b7ebedb5b00b7a2928b092d3f89c7b2" HandleID="k8s-pod-network.3c7864fae11f47c112cd1c189075a93b1b7ebedb5b00b7a2928b092d3f89c7b2" Workload="localhost-k8s-csi--node--driver--p5zx6-eth0" Mar 12 01:41:02.202920 containerd[1452]: 2026-03-12 01:41:02.171 [INFO][3827] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3c7864fae11f47c112cd1c189075a93b1b7ebedb5b00b7a2928b092d3f89c7b2" HandleID="k8s-pod-network.3c7864fae11f47c112cd1c189075a93b1b7ebedb5b00b7a2928b092d3f89c7b2" Workload="localhost-k8s-csi--node--driver--p5zx6-eth0" Mar 12 01:41:02.202920 containerd[1452]: 2026-03-12 01:41:02.175 [INFO][3827] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:41:02.202920 containerd[1452]: 2026-03-12 01:41:02.195 [INFO][3740] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3c7864fae11f47c112cd1c189075a93b1b7ebedb5b00b7a2928b092d3f89c7b2" Mar 12 01:41:02.204302 containerd[1452]: 2026-03-12 01:41:02.089 [INFO][3784] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="117f3bb79665e318a865c3386f7dc05e1b14f6a0802824e45598f20f591eb264" Mar 12 01:41:02.204302 containerd[1452]: 2026-03-12 01:41:02.089 [INFO][3784] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="117f3bb79665e318a865c3386f7dc05e1b14f6a0802824e45598f20f591eb264" iface="eth0" netns="/var/run/netns/cni-ca5b9de2-efd9-f7e0-86f8-d47fd2091be2" Mar 12 01:41:02.204302 containerd[1452]: 2026-03-12 01:41:02.090 [INFO][3784] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="117f3bb79665e318a865c3386f7dc05e1b14f6a0802824e45598f20f591eb264" iface="eth0" netns="/var/run/netns/cni-ca5b9de2-efd9-f7e0-86f8-d47fd2091be2" Mar 12 01:41:02.204302 containerd[1452]: 2026-03-12 01:41:02.095 [INFO][3784] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="117f3bb79665e318a865c3386f7dc05e1b14f6a0802824e45598f20f591eb264" iface="eth0" netns="/var/run/netns/cni-ca5b9de2-efd9-f7e0-86f8-d47fd2091be2" Mar 12 01:41:02.204302 containerd[1452]: 2026-03-12 01:41:02.095 [INFO][3784] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="117f3bb79665e318a865c3386f7dc05e1b14f6a0802824e45598f20f591eb264" Mar 12 01:41:02.204302 containerd[1452]: 2026-03-12 01:41:02.095 [INFO][3784] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="117f3bb79665e318a865c3386f7dc05e1b14f6a0802824e45598f20f591eb264" Mar 12 01:41:02.204302 containerd[1452]: 2026-03-12 01:41:02.156 [INFO][3867] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="117f3bb79665e318a865c3386f7dc05e1b14f6a0802824e45598f20f591eb264" HandleID="k8s-pod-network.117f3bb79665e318a865c3386f7dc05e1b14f6a0802824e45598f20f591eb264" Workload="localhost-k8s-calico--apiserver--59985c4b9f--ccgf4-eth0" Mar 12 01:41:02.204302 containerd[1452]: 2026-03-12 01:41:02.157 [INFO][3867] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:41:02.204302 containerd[1452]: 2026-03-12 01:41:02.175 [INFO][3867] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:41:02.204302 containerd[1452]: 2026-03-12 01:41:02.188 [WARNING][3867] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="117f3bb79665e318a865c3386f7dc05e1b14f6a0802824e45598f20f591eb264" HandleID="k8s-pod-network.117f3bb79665e318a865c3386f7dc05e1b14f6a0802824e45598f20f591eb264" Workload="localhost-k8s-calico--apiserver--59985c4b9f--ccgf4-eth0" Mar 12 01:41:02.204302 containerd[1452]: 2026-03-12 01:41:02.188 [INFO][3867] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="117f3bb79665e318a865c3386f7dc05e1b14f6a0802824e45598f20f591eb264" HandleID="k8s-pod-network.117f3bb79665e318a865c3386f7dc05e1b14f6a0802824e45598f20f591eb264" Workload="localhost-k8s-calico--apiserver--59985c4b9f--ccgf4-eth0" Mar 12 01:41:02.204302 containerd[1452]: 2026-03-12 01:41:02.190 [INFO][3867] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:41:02.204302 containerd[1452]: 2026-03-12 01:41:02.197 [INFO][3784] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="117f3bb79665e318a865c3386f7dc05e1b14f6a0802824e45598f20f591eb264" Mar 12 01:41:02.206911 containerd[1452]: time="2026-03-12T01:41:02.206882340Z" level=info msg="TearDown network for sandbox \"117f3bb79665e318a865c3386f7dc05e1b14f6a0802824e45598f20f591eb264\" successfully" Mar 12 01:41:02.207893 systemd[1]: run-netns-cni\x2d90e661f1\x2d70db\x2d4360\x2d2300\x2dbfa6267d2647.mount: Deactivated successfully. Mar 12 01:41:02.208028 systemd[1]: run-netns-cni\x2dca5b9de2\x2defd9\x2df7e0\x2d86f8\x2dd47fd2091be2.mount: Deactivated successfully. 
Mar 12 01:41:02.209974 containerd[1452]: time="2026-03-12T01:41:02.209924765Z" level=info msg="StopPodSandbox for \"117f3bb79665e318a865c3386f7dc05e1b14f6a0802824e45598f20f591eb264\" returns successfully" Mar 12 01:41:02.211592 containerd[1452]: time="2026-03-12T01:41:02.210853730Z" level=info msg="TearDown network for sandbox \"3c7864fae11f47c112cd1c189075a93b1b7ebedb5b00b7a2928b092d3f89c7b2\" successfully" Mar 12 01:41:02.211592 containerd[1452]: time="2026-03-12T01:41:02.210878227Z" level=info msg="StopPodSandbox for \"3c7864fae11f47c112cd1c189075a93b1b7ebedb5b00b7a2928b092d3f89c7b2\" returns successfully" Mar 12 01:41:02.214144 containerd[1452]: time="2026-03-12T01:41:02.214115803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59985c4b9f-ccgf4,Uid:12faea7f-d095-4822-97da-4521ffa445d4,Namespace:calico-system,Attempt:1,}" Mar 12 01:41:02.214418 containerd[1452]: time="2026-03-12T01:41:02.214398029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-p5zx6,Uid:82884de9-9bde-4318-9c0a-85cc3f92ad06,Namespace:calico-system,Attempt:1,}" Mar 12 01:41:02.222663 containerd[1452]: 2026-03-12 01:41:02.048 [INFO][3744] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0444e8ceffa8e51c7adb9c1911f29707fcdd1e08eb7c2f0be0676a68f41e6ae0" Mar 12 01:41:02.222663 containerd[1452]: 2026-03-12 01:41:02.049 [INFO][3744] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0444e8ceffa8e51c7adb9c1911f29707fcdd1e08eb7c2f0be0676a68f41e6ae0" iface="eth0" netns="/var/run/netns/cni-1da3c59c-b626-a09d-f910-c041d78801e3" Mar 12 01:41:02.222663 containerd[1452]: 2026-03-12 01:41:02.049 [INFO][3744] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="0444e8ceffa8e51c7adb9c1911f29707fcdd1e08eb7c2f0be0676a68f41e6ae0" iface="eth0" netns="/var/run/netns/cni-1da3c59c-b626-a09d-f910-c041d78801e3" Mar 12 01:41:02.222663 containerd[1452]: 2026-03-12 01:41:02.049 [INFO][3744] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0444e8ceffa8e51c7adb9c1911f29707fcdd1e08eb7c2f0be0676a68f41e6ae0" iface="eth0" netns="/var/run/netns/cni-1da3c59c-b626-a09d-f910-c041d78801e3" Mar 12 01:41:02.222663 containerd[1452]: 2026-03-12 01:41:02.049 [INFO][3744] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0444e8ceffa8e51c7adb9c1911f29707fcdd1e08eb7c2f0be0676a68f41e6ae0" Mar 12 01:41:02.222663 containerd[1452]: 2026-03-12 01:41:02.049 [INFO][3744] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0444e8ceffa8e51c7adb9c1911f29707fcdd1e08eb7c2f0be0676a68f41e6ae0" Mar 12 01:41:02.222663 containerd[1452]: 2026-03-12 01:41:02.154 [INFO][3843] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0444e8ceffa8e51c7adb9c1911f29707fcdd1e08eb7c2f0be0676a68f41e6ae0" HandleID="k8s-pod-network.0444e8ceffa8e51c7adb9c1911f29707fcdd1e08eb7c2f0be0676a68f41e6ae0" Workload="localhost-k8s-calico--apiserver--59985c4b9f--jlw2l-eth0" Mar 12 01:41:02.222663 containerd[1452]: 2026-03-12 01:41:02.158 [INFO][3843] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:41:02.222663 containerd[1452]: 2026-03-12 01:41:02.193 [INFO][3843] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:41:02.222663 containerd[1452]: 2026-03-12 01:41:02.206 [WARNING][3843] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0444e8ceffa8e51c7adb9c1911f29707fcdd1e08eb7c2f0be0676a68f41e6ae0" HandleID="k8s-pod-network.0444e8ceffa8e51c7adb9c1911f29707fcdd1e08eb7c2f0be0676a68f41e6ae0" Workload="localhost-k8s-calico--apiserver--59985c4b9f--jlw2l-eth0" Mar 12 01:41:02.222663 containerd[1452]: 2026-03-12 01:41:02.206 [INFO][3843] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0444e8ceffa8e51c7adb9c1911f29707fcdd1e08eb7c2f0be0676a68f41e6ae0" HandleID="k8s-pod-network.0444e8ceffa8e51c7adb9c1911f29707fcdd1e08eb7c2f0be0676a68f41e6ae0" Workload="localhost-k8s-calico--apiserver--59985c4b9f--jlw2l-eth0" Mar 12 01:41:02.222663 containerd[1452]: 2026-03-12 01:41:02.208 [INFO][3843] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:41:02.222663 containerd[1452]: 2026-03-12 01:41:02.215 [INFO][3744] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0444e8ceffa8e51c7adb9c1911f29707fcdd1e08eb7c2f0be0676a68f41e6ae0" Mar 12 01:41:02.223514 containerd[1452]: time="2026-03-12T01:41:02.223152331Z" level=info msg="TearDown network for sandbox \"0444e8ceffa8e51c7adb9c1911f29707fcdd1e08eb7c2f0be0676a68f41e6ae0\" successfully" Mar 12 01:41:02.223514 containerd[1452]: time="2026-03-12T01:41:02.223180815Z" level=info msg="StopPodSandbox for \"0444e8ceffa8e51c7adb9c1911f29707fcdd1e08eb7c2f0be0676a68f41e6ae0\" returns successfully" Mar 12 01:41:02.224935 containerd[1452]: time="2026-03-12T01:41:02.224910279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59985c4b9f-jlw2l,Uid:0c627ae3-0e93-40e3-80e0-9ad9ac70f824,Namespace:calico-system,Attempt:1,}" Mar 12 01:41:02.250820 containerd[1452]: 2026-03-12 01:41:02.060 [INFO][3775] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="311a472fcf1235aa3a42ce2f63599d1d26c057fa4a92ce3fea2f3215975eb130" Mar 12 01:41:02.250820 containerd[1452]: 2026-03-12 01:41:02.061 [INFO][3775] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="311a472fcf1235aa3a42ce2f63599d1d26c057fa4a92ce3fea2f3215975eb130" iface="eth0" netns="/var/run/netns/cni-c48a8b0c-25b3-8f65-a072-dec7f5e3ef83" Mar 12 01:41:02.250820 containerd[1452]: 2026-03-12 01:41:02.061 [INFO][3775] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="311a472fcf1235aa3a42ce2f63599d1d26c057fa4a92ce3fea2f3215975eb130" iface="eth0" netns="/var/run/netns/cni-c48a8b0c-25b3-8f65-a072-dec7f5e3ef83" Mar 12 01:41:02.250820 containerd[1452]: 2026-03-12 01:41:02.061 [INFO][3775] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="311a472fcf1235aa3a42ce2f63599d1d26c057fa4a92ce3fea2f3215975eb130" iface="eth0" netns="/var/run/netns/cni-c48a8b0c-25b3-8f65-a072-dec7f5e3ef83" Mar 12 01:41:02.250820 containerd[1452]: 2026-03-12 01:41:02.061 [INFO][3775] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="311a472fcf1235aa3a42ce2f63599d1d26c057fa4a92ce3fea2f3215975eb130" Mar 12 01:41:02.250820 containerd[1452]: 2026-03-12 01:41:02.061 [INFO][3775] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="311a472fcf1235aa3a42ce2f63599d1d26c057fa4a92ce3fea2f3215975eb130" Mar 12 01:41:02.250820 containerd[1452]: 2026-03-12 01:41:02.199 [INFO][3854] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="311a472fcf1235aa3a42ce2f63599d1d26c057fa4a92ce3fea2f3215975eb130" HandleID="k8s-pod-network.311a472fcf1235aa3a42ce2f63599d1d26c057fa4a92ce3fea2f3215975eb130" Workload="localhost-k8s-coredns--674b8bbfcf--srzp7-eth0" Mar 12 01:41:02.250820 containerd[1452]: 2026-03-12 01:41:02.199 [INFO][3854] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:41:02.250820 containerd[1452]: 2026-03-12 01:41:02.225 [INFO][3854] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:41:02.250820 containerd[1452]: 2026-03-12 01:41:02.234 [WARNING][3854] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="311a472fcf1235aa3a42ce2f63599d1d26c057fa4a92ce3fea2f3215975eb130" HandleID="k8s-pod-network.311a472fcf1235aa3a42ce2f63599d1d26c057fa4a92ce3fea2f3215975eb130" Workload="localhost-k8s-coredns--674b8bbfcf--srzp7-eth0" Mar 12 01:41:02.250820 containerd[1452]: 2026-03-12 01:41:02.234 [INFO][3854] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="311a472fcf1235aa3a42ce2f63599d1d26c057fa4a92ce3fea2f3215975eb130" HandleID="k8s-pod-network.311a472fcf1235aa3a42ce2f63599d1d26c057fa4a92ce3fea2f3215975eb130" Workload="localhost-k8s-coredns--674b8bbfcf--srzp7-eth0" Mar 12 01:41:02.250820 containerd[1452]: 2026-03-12 01:41:02.237 [INFO][3854] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:41:02.250820 containerd[1452]: 2026-03-12 01:41:02.245 [INFO][3775] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="311a472fcf1235aa3a42ce2f63599d1d26c057fa4a92ce3fea2f3215975eb130" Mar 12 01:41:02.251455 containerd[1452]: time="2026-03-12T01:41:02.251295129Z" level=info msg="TearDown network for sandbox \"311a472fcf1235aa3a42ce2f63599d1d26c057fa4a92ce3fea2f3215975eb130\" successfully" Mar 12 01:41:02.251455 containerd[1452]: time="2026-03-12T01:41:02.251400265Z" level=info msg="StopPodSandbox for \"311a472fcf1235aa3a42ce2f63599d1d26c057fa4a92ce3fea2f3215975eb130\" returns successfully" Mar 12 01:41:02.255028 containerd[1452]: 2026-03-12 01:41:02.048 [INFO][3704] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c931bb5c9f1a5d6f2bfe6415538e0ef3c571a1c60a95b807a258f41163b29c47" Mar 12 01:41:02.255028 containerd[1452]: 2026-03-12 01:41:02.048 [INFO][3704] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="c931bb5c9f1a5d6f2bfe6415538e0ef3c571a1c60a95b807a258f41163b29c47" iface="eth0" netns="/var/run/netns/cni-3a0457a6-062a-0252-3702-79caa8d5d13b" Mar 12 01:41:02.255028 containerd[1452]: 2026-03-12 01:41:02.049 [INFO][3704] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c931bb5c9f1a5d6f2bfe6415538e0ef3c571a1c60a95b807a258f41163b29c47" iface="eth0" netns="/var/run/netns/cni-3a0457a6-062a-0252-3702-79caa8d5d13b" Mar 12 01:41:02.255028 containerd[1452]: 2026-03-12 01:41:02.049 [INFO][3704] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c931bb5c9f1a5d6f2bfe6415538e0ef3c571a1c60a95b807a258f41163b29c47" iface="eth0" netns="/var/run/netns/cni-3a0457a6-062a-0252-3702-79caa8d5d13b" Mar 12 01:41:02.255028 containerd[1452]: 2026-03-12 01:41:02.049 [INFO][3704] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c931bb5c9f1a5d6f2bfe6415538e0ef3c571a1c60a95b807a258f41163b29c47" Mar 12 01:41:02.255028 containerd[1452]: 2026-03-12 01:41:02.049 [INFO][3704] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c931bb5c9f1a5d6f2bfe6415538e0ef3c571a1c60a95b807a258f41163b29c47" Mar 12 01:41:02.255028 containerd[1452]: 2026-03-12 01:41:02.176 [INFO][3838] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c931bb5c9f1a5d6f2bfe6415538e0ef3c571a1c60a95b807a258f41163b29c47" HandleID="k8s-pod-network.c931bb5c9f1a5d6f2bfe6415538e0ef3c571a1c60a95b807a258f41163b29c47" Workload="localhost-k8s-calico--kube--controllers--549d945fd5--vwvh9-eth0" Mar 12 01:41:02.255028 containerd[1452]: 2026-03-12 01:41:02.176 [INFO][3838] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:41:02.255028 containerd[1452]: 2026-03-12 01:41:02.209 [INFO][3838] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 12 01:41:02.255028 containerd[1452]: 2026-03-12 01:41:02.221 [WARNING][3838] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="c931bb5c9f1a5d6f2bfe6415538e0ef3c571a1c60a95b807a258f41163b29c47" HandleID="k8s-pod-network.c931bb5c9f1a5d6f2bfe6415538e0ef3c571a1c60a95b807a258f41163b29c47" Workload="localhost-k8s-calico--kube--controllers--549d945fd5--vwvh9-eth0" Mar 12 01:41:02.255028 containerd[1452]: 2026-03-12 01:41:02.222 [INFO][3838] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c931bb5c9f1a5d6f2bfe6415538e0ef3c571a1c60a95b807a258f41163b29c47" HandleID="k8s-pod-network.c931bb5c9f1a5d6f2bfe6415538e0ef3c571a1c60a95b807a258f41163b29c47" Workload="localhost-k8s-calico--kube--controllers--549d945fd5--vwvh9-eth0" Mar 12 01:41:02.255028 containerd[1452]: 2026-03-12 01:41:02.225 [INFO][3838] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:41:02.255028 containerd[1452]: 2026-03-12 01:41:02.240 [INFO][3704] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="c931bb5c9f1a5d6f2bfe6415538e0ef3c571a1c60a95b807a258f41163b29c47" Mar 12 01:41:02.256114 containerd[1452]: time="2026-03-12T01:41:02.255694311Z" level=info msg="TearDown network for sandbox \"c931bb5c9f1a5d6f2bfe6415538e0ef3c571a1c60a95b807a258f41163b29c47\" successfully" Mar 12 01:41:02.256114 containerd[1452]: time="2026-03-12T01:41:02.255982969Z" level=info msg="StopPodSandbox for \"c931bb5c9f1a5d6f2bfe6415538e0ef3c571a1c60a95b807a258f41163b29c47\" returns successfully" Mar 12 01:41:02.256653 containerd[1452]: time="2026-03-12T01:41:02.256587947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-549d945fd5-vwvh9,Uid:e4a33aa0-00da-4a2a-8056-f0a9ced024aa,Namespace:calico-system,Attempt:1,}" Mar 12 01:41:02.257312 kubelet[2506]: E0312 01:41:02.257188 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:41:02.260229 containerd[1452]: time="2026-03-12T01:41:02.260105807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-srzp7,Uid:5f897c92-34a7-46c6-8c17-18c4147d9a5f,Namespace:kube-system,Attempt:1,}" Mar 12 01:41:02.270260 containerd[1452]: 2026-03-12 01:41:02.071 [INFO][3788] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f8fc731459665a6388119fc4065ace137ed2b5327c008a3dd262efae245622c0" Mar 12 01:41:02.270260 containerd[1452]: 2026-03-12 01:41:02.075 [INFO][3788] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f8fc731459665a6388119fc4065ace137ed2b5327c008a3dd262efae245622c0" iface="eth0" netns="/var/run/netns/cni-d08291ea-4a68-8816-81dd-54164486a149" Mar 12 01:41:02.270260 containerd[1452]: 2026-03-12 01:41:02.081 [INFO][3788] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="f8fc731459665a6388119fc4065ace137ed2b5327c008a3dd262efae245622c0" iface="eth0" netns="/var/run/netns/cni-d08291ea-4a68-8816-81dd-54164486a149" Mar 12 01:41:02.270260 containerd[1452]: 2026-03-12 01:41:02.081 [INFO][3788] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f8fc731459665a6388119fc4065ace137ed2b5327c008a3dd262efae245622c0" iface="eth0" netns="/var/run/netns/cni-d08291ea-4a68-8816-81dd-54164486a149" Mar 12 01:41:02.270260 containerd[1452]: 2026-03-12 01:41:02.083 [INFO][3788] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f8fc731459665a6388119fc4065ace137ed2b5327c008a3dd262efae245622c0" Mar 12 01:41:02.270260 containerd[1452]: 2026-03-12 01:41:02.083 [INFO][3788] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f8fc731459665a6388119fc4065ace137ed2b5327c008a3dd262efae245622c0" Mar 12 01:41:02.270260 containerd[1452]: 2026-03-12 01:41:02.205 [INFO][3863] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f8fc731459665a6388119fc4065ace137ed2b5327c008a3dd262efae245622c0" HandleID="k8s-pod-network.f8fc731459665a6388119fc4065ace137ed2b5327c008a3dd262efae245622c0" Workload="localhost-k8s-coredns--674b8bbfcf--ldd4z-eth0" Mar 12 01:41:02.270260 containerd[1452]: 2026-03-12 01:41:02.207 [INFO][3863] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:41:02.270260 containerd[1452]: 2026-03-12 01:41:02.238 [INFO][3863] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:41:02.270260 containerd[1452]: 2026-03-12 01:41:02.249 [WARNING][3863] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f8fc731459665a6388119fc4065ace137ed2b5327c008a3dd262efae245622c0" HandleID="k8s-pod-network.f8fc731459665a6388119fc4065ace137ed2b5327c008a3dd262efae245622c0" Workload="localhost-k8s-coredns--674b8bbfcf--ldd4z-eth0" Mar 12 01:41:02.270260 containerd[1452]: 2026-03-12 01:41:02.249 [INFO][3863] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f8fc731459665a6388119fc4065ace137ed2b5327c008a3dd262efae245622c0" HandleID="k8s-pod-network.f8fc731459665a6388119fc4065ace137ed2b5327c008a3dd262efae245622c0" Workload="localhost-k8s-coredns--674b8bbfcf--ldd4z-eth0" Mar 12 01:41:02.270260 containerd[1452]: 2026-03-12 01:41:02.254 [INFO][3863] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:41:02.270260 containerd[1452]: 2026-03-12 01:41:02.259 [INFO][3788] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f8fc731459665a6388119fc4065ace137ed2b5327c008a3dd262efae245622c0" Mar 12 01:41:02.270655 containerd[1452]: time="2026-03-12T01:41:02.270628729Z" level=info msg="TearDown network for sandbox \"f8fc731459665a6388119fc4065ace137ed2b5327c008a3dd262efae245622c0\" successfully" Mar 12 01:41:02.270681 containerd[1452]: time="2026-03-12T01:41:02.270662041Z" level=info msg="StopPodSandbox for \"f8fc731459665a6388119fc4065ace137ed2b5327c008a3dd262efae245622c0\" returns successfully" Mar 12 01:41:02.271190 kubelet[2506]: E0312 01:41:02.271118 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:41:02.271991 containerd[1452]: time="2026-03-12T01:41:02.271683204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ldd4z,Uid:c3461470-30f4-4645-8713-5c638c154c77,Namespace:kube-system,Attempt:1,}" Mar 12 01:41:02.291371 kubelet[2506]: I0312 01:41:02.290941 2506 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/67144ade-3bd8-4b48-8f66-b2e43284d340-whisker-ca-bundle\") pod \"67144ade-3bd8-4b48-8f66-b2e43284d340\" (UID: \"67144ade-3bd8-4b48-8f66-b2e43284d340\") " Mar 12 01:41:02.291371 kubelet[2506]: I0312 01:41:02.290977 2506 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/67144ade-3bd8-4b48-8f66-b2e43284d340-nginx-config\") pod \"67144ade-3bd8-4b48-8f66-b2e43284d340\" (UID: \"67144ade-3bd8-4b48-8f66-b2e43284d340\") " Mar 12 01:41:02.291371 kubelet[2506]: I0312 01:41:02.291002 2506 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tlftd\" (UniqueName: \"kubernetes.io/projected/67144ade-3bd8-4b48-8f66-b2e43284d340-kube-api-access-tlftd\") pod \"67144ade-3bd8-4b48-8f66-b2e43284d340\" (UID: \"67144ade-3bd8-4b48-8f66-b2e43284d340\") " Mar 12 01:41:02.291371 kubelet[2506]: I0312 01:41:02.291017 2506 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/67144ade-3bd8-4b48-8f66-b2e43284d340-whisker-backend-key-pair\") pod \"67144ade-3bd8-4b48-8f66-b2e43284d340\" (UID: \"67144ade-3bd8-4b48-8f66-b2e43284d340\") " Mar 12 01:41:02.293406 kubelet[2506]: I0312 01:41:02.292810 2506 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67144ade-3bd8-4b48-8f66-b2e43284d340-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "67144ade-3bd8-4b48-8f66-b2e43284d340" (UID: "67144ade-3bd8-4b48-8f66-b2e43284d340"). InnerVolumeSpecName "nginx-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 12 01:41:02.293406 kubelet[2506]: I0312 01:41:02.293168 2506 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67144ade-3bd8-4b48-8f66-b2e43284d340-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "67144ade-3bd8-4b48-8f66-b2e43284d340" (UID: "67144ade-3bd8-4b48-8f66-b2e43284d340"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 12 01:41:02.297876 kubelet[2506]: I0312 01:41:02.297845 2506 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67144ade-3bd8-4b48-8f66-b2e43284d340-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "67144ade-3bd8-4b48-8f66-b2e43284d340" (UID: "67144ade-3bd8-4b48-8f66-b2e43284d340"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 12 01:41:02.301617 kubelet[2506]: I0312 01:41:02.301498 2506 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67144ade-3bd8-4b48-8f66-b2e43284d340-kube-api-access-tlftd" (OuterVolumeSpecName: "kube-api-access-tlftd") pod "67144ade-3bd8-4b48-8f66-b2e43284d340" (UID: "67144ade-3bd8-4b48-8f66-b2e43284d340"). InnerVolumeSpecName "kube-api-access-tlftd". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 12 01:41:02.392344 kubelet[2506]: I0312 01:41:02.392274 2506 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tlftd\" (UniqueName: \"kubernetes.io/projected/67144ade-3bd8-4b48-8f66-b2e43284d340-kube-api-access-tlftd\") on node \"localhost\" DevicePath \"\"" Mar 12 01:41:02.392344 kubelet[2506]: I0312 01:41:02.392306 2506 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/67144ade-3bd8-4b48-8f66-b2e43284d340-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Mar 12 01:41:02.392344 kubelet[2506]: I0312 01:41:02.392315 2506 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67144ade-3bd8-4b48-8f66-b2e43284d340-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Mar 12 01:41:02.392344 kubelet[2506]: I0312 01:41:02.392323 2506 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/67144ade-3bd8-4b48-8f66-b2e43284d340-nginx-config\") on node \"localhost\" DevicePath \"\"" Mar 12 01:41:02.503618 systemd-networkd[1379]: calic779f0ed5d9: Link UP Mar 12 01:41:02.508398 systemd-networkd[1379]: calic779f0ed5d9: Gained carrier Mar 12 01:41:02.530076 systemd[1]: run-netns-cni\x2d1da3c59c\x2db626\x2da09d\x2df910\x2dc041d78801e3.mount: Deactivated successfully. Mar 12 01:41:02.530190 systemd[1]: run-netns-cni\x2d3a0457a6\x2d062a\x2d0252\x2d3702\x2d79caa8d5d13b.mount: Deactivated successfully. Mar 12 01:41:02.530260 systemd[1]: run-netns-cni\x2dd08291ea\x2d4a68\x2d8816\x2d81dd\x2d54164486a149.mount: Deactivated successfully. Mar 12 01:41:02.530461 systemd[1]: run-netns-cni\x2dc48a8b0c\x2d25b3\x2d8f65\x2da072\x2ddec7f5e3ef83.mount: Deactivated successfully. 
Mar 12 01:41:02.530639 systemd[1]: var-lib-kubelet-pods-67144ade\x2d3bd8\x2d4b48\x2d8f66\x2db2e43284d340-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtlftd.mount: Deactivated successfully. Mar 12 01:41:02.530767 systemd[1]: var-lib-kubelet-pods-67144ade\x2d3bd8\x2d4b48\x2d8f66\x2db2e43284d340-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 12 01:41:02.546214 containerd[1452]: 2026-03-12 01:41:02.279 [ERROR][3895] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 12 01:41:02.546214 containerd[1452]: 2026-03-12 01:41:02.303 [INFO][3895] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--5b85766d88--5mp72-eth0 goldmane-5b85766d88- calico-system fc846323-168d-4e27-8031-2f3b65d5b16e 888 0 2026-03-12 01:40:48 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-5b85766d88-5mp72 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calic779f0ed5d9 [] [] }} ContainerID="a00df68a6d75825656d78577a3cf484251215f84d96ab03cc927f724389f3f57" Namespace="calico-system" Pod="goldmane-5b85766d88-5mp72" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--5mp72-" Mar 12 01:41:02.546214 containerd[1452]: 2026-03-12 01:41:02.304 [INFO][3895] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a00df68a6d75825656d78577a3cf484251215f84d96ab03cc927f724389f3f57" Namespace="calico-system" Pod="goldmane-5b85766d88-5mp72" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--5mp72-eth0" Mar 12 01:41:02.546214 containerd[1452]: 2026-03-12 01:41:02.361 [INFO][3935] ipam/ipam_plugin.go 
235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a00df68a6d75825656d78577a3cf484251215f84d96ab03cc927f724389f3f57" HandleID="k8s-pod-network.a00df68a6d75825656d78577a3cf484251215f84d96ab03cc927f724389f3f57" Workload="localhost-k8s-goldmane--5b85766d88--5mp72-eth0" Mar 12 01:41:02.546214 containerd[1452]: 2026-03-12 01:41:02.374 [INFO][3935] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="a00df68a6d75825656d78577a3cf484251215f84d96ab03cc927f724389f3f57" HandleID="k8s-pod-network.a00df68a6d75825656d78577a3cf484251215f84d96ab03cc927f724389f3f57" Workload="localhost-k8s-goldmane--5b85766d88--5mp72-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000591160), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-5b85766d88-5mp72", "timestamp":"2026-03-12 01:41:02.361000969 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0005da6e0)} Mar 12 01:41:02.546214 containerd[1452]: 2026-03-12 01:41:02.374 [INFO][3935] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:41:02.546214 containerd[1452]: 2026-03-12 01:41:02.375 [INFO][3935] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 12 01:41:02.546214 containerd[1452]: 2026-03-12 01:41:02.375 [INFO][3935] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 12 01:41:02.546214 containerd[1452]: 2026-03-12 01:41:02.380 [INFO][3935] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.a00df68a6d75825656d78577a3cf484251215f84d96ab03cc927f724389f3f57" host="localhost" Mar 12 01:41:02.546214 containerd[1452]: 2026-03-12 01:41:02.389 [INFO][3935] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 12 01:41:02.546214 containerd[1452]: 2026-03-12 01:41:02.401 [INFO][3935] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 12 01:41:02.546214 containerd[1452]: 2026-03-12 01:41:02.405 [INFO][3935] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 12 01:41:02.546214 containerd[1452]: 2026-03-12 01:41:02.409 [INFO][3935] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 12 01:41:02.546214 containerd[1452]: 2026-03-12 01:41:02.410 [INFO][3935] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a00df68a6d75825656d78577a3cf484251215f84d96ab03cc927f724389f3f57" host="localhost" Mar 12 01:41:02.546214 containerd[1452]: 2026-03-12 01:41:02.412 [INFO][3935] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.a00df68a6d75825656d78577a3cf484251215f84d96ab03cc927f724389f3f57 Mar 12 01:41:02.546214 containerd[1452]: 2026-03-12 01:41:02.417 [INFO][3935] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a00df68a6d75825656d78577a3cf484251215f84d96ab03cc927f724389f3f57" host="localhost" Mar 12 01:41:02.546214 containerd[1452]: 2026-03-12 01:41:02.430 [INFO][3935] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.a00df68a6d75825656d78577a3cf484251215f84d96ab03cc927f724389f3f57" host="localhost" Mar 12 01:41:02.546214 containerd[1452]: 2026-03-12 01:41:02.433 [INFO][3935] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.a00df68a6d75825656d78577a3cf484251215f84d96ab03cc927f724389f3f57" host="localhost" Mar 12 01:41:02.546214 containerd[1452]: 2026-03-12 01:41:02.433 [INFO][3935] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:41:02.546214 containerd[1452]: 2026-03-12 01:41:02.434 [INFO][3935] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="a00df68a6d75825656d78577a3cf484251215f84d96ab03cc927f724389f3f57" HandleID="k8s-pod-network.a00df68a6d75825656d78577a3cf484251215f84d96ab03cc927f724389f3f57" Workload="localhost-k8s-goldmane--5b85766d88--5mp72-eth0" Mar 12 01:41:02.546979 containerd[1452]: 2026-03-12 01:41:02.447 [INFO][3895] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a00df68a6d75825656d78577a3cf484251215f84d96ab03cc927f724389f3f57" Namespace="calico-system" Pod="goldmane-5b85766d88-5mp72" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--5mp72-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--5mp72-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"fc846323-168d-4e27-8031-2f3b65d5b16e", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 40, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-5b85766d88-5mp72", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic779f0ed5d9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:41:02.546979 containerd[1452]: 2026-03-12 01:41:02.448 [INFO][3895] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="a00df68a6d75825656d78577a3cf484251215f84d96ab03cc927f724389f3f57" Namespace="calico-system" Pod="goldmane-5b85766d88-5mp72" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--5mp72-eth0" Mar 12 01:41:02.546979 containerd[1452]: 2026-03-12 01:41:02.449 [INFO][3895] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic779f0ed5d9 ContainerID="a00df68a6d75825656d78577a3cf484251215f84d96ab03cc927f724389f3f57" Namespace="calico-system" Pod="goldmane-5b85766d88-5mp72" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--5mp72-eth0" Mar 12 01:41:02.546979 containerd[1452]: 2026-03-12 01:41:02.509 [INFO][3895] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a00df68a6d75825656d78577a3cf484251215f84d96ab03cc927f724389f3f57" Namespace="calico-system" Pod="goldmane-5b85766d88-5mp72" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--5mp72-eth0" Mar 12 01:41:02.546979 containerd[1452]: 2026-03-12 01:41:02.513 [INFO][3895] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a00df68a6d75825656d78577a3cf484251215f84d96ab03cc927f724389f3f57" Namespace="calico-system" Pod="goldmane-5b85766d88-5mp72" 
WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--5mp72-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--5mp72-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"fc846323-168d-4e27-8031-2f3b65d5b16e", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 40, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a00df68a6d75825656d78577a3cf484251215f84d96ab03cc927f724389f3f57", Pod:"goldmane-5b85766d88-5mp72", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic779f0ed5d9", MAC:"46:a4:31:b8:85:6f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:41:02.546979 containerd[1452]: 2026-03-12 01:41:02.536 [INFO][3895] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a00df68a6d75825656d78577a3cf484251215f84d96ab03cc927f724389f3f57" Namespace="calico-system" Pod="goldmane-5b85766d88-5mp72" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--5mp72-eth0" Mar 12 01:41:02.631020 systemd-networkd[1379]: cali355fa62d5e7: Link UP Mar 12 01:41:02.633330 systemd-networkd[1379]: 
cali355fa62d5e7: Gained carrier Mar 12 01:41:02.637225 containerd[1452]: time="2026-03-12T01:41:02.636187001Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:41:02.637225 containerd[1452]: time="2026-03-12T01:41:02.636764539Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:41:02.637225 containerd[1452]: time="2026-03-12T01:41:02.636779557Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:41:02.638173 containerd[1452]: time="2026-03-12T01:41:02.637945180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:41:02.678435 containerd[1452]: 2026-03-12 01:41:02.330 [ERROR][3908] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 12 01:41:02.678435 containerd[1452]: 2026-03-12 01:41:02.353 [INFO][3908] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--59985c4b9f--ccgf4-eth0 calico-apiserver-59985c4b9f- calico-system 12faea7f-d095-4822-97da-4521ffa445d4 894 0 2026-03-12 01:40:48 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:59985c4b9f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-59985c4b9f-ccgf4 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali355fa62d5e7 [] [] }} ContainerID="8d41c7e51718e3b6bb5ff11cf248a5957d69f567043cec1711819a3a5e0c0e1b" 
Namespace="calico-system" Pod="calico-apiserver-59985c4b9f-ccgf4" WorkloadEndpoint="localhost-k8s-calico--apiserver--59985c4b9f--ccgf4-" Mar 12 01:41:02.678435 containerd[1452]: 2026-03-12 01:41:02.353 [INFO][3908] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8d41c7e51718e3b6bb5ff11cf248a5957d69f567043cec1711819a3a5e0c0e1b" Namespace="calico-system" Pod="calico-apiserver-59985c4b9f-ccgf4" WorkloadEndpoint="localhost-k8s-calico--apiserver--59985c4b9f--ccgf4-eth0" Mar 12 01:41:02.678435 containerd[1452]: 2026-03-12 01:41:02.442 [INFO][3975] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8d41c7e51718e3b6bb5ff11cf248a5957d69f567043cec1711819a3a5e0c0e1b" HandleID="k8s-pod-network.8d41c7e51718e3b6bb5ff11cf248a5957d69f567043cec1711819a3a5e0c0e1b" Workload="localhost-k8s-calico--apiserver--59985c4b9f--ccgf4-eth0" Mar 12 01:41:02.678435 containerd[1452]: 2026-03-12 01:41:02.452 [INFO][3975] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="8d41c7e51718e3b6bb5ff11cf248a5957d69f567043cec1711819a3a5e0c0e1b" HandleID="k8s-pod-network.8d41c7e51718e3b6bb5ff11cf248a5957d69f567043cec1711819a3a5e0c0e1b" Workload="localhost-k8s-calico--apiserver--59985c4b9f--ccgf4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139b90), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-59985c4b9f-ccgf4", "timestamp":"2026-03-12 01:41:02.442338892 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003ac840)} Mar 12 01:41:02.678435 containerd[1452]: 2026-03-12 01:41:02.452 [INFO][3975] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 12 01:41:02.678435 containerd[1452]: 2026-03-12 01:41:02.453 [INFO][3975] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:41:02.678435 containerd[1452]: 2026-03-12 01:41:02.453 [INFO][3975] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 12 01:41:02.678435 containerd[1452]: 2026-03-12 01:41:02.481 [INFO][3975] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.8d41c7e51718e3b6bb5ff11cf248a5957d69f567043cec1711819a3a5e0c0e1b" host="localhost" Mar 12 01:41:02.678435 containerd[1452]: 2026-03-12 01:41:02.500 [INFO][3975] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 12 01:41:02.678435 containerd[1452]: 2026-03-12 01:41:02.558 [INFO][3975] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 12 01:41:02.678435 containerd[1452]: 2026-03-12 01:41:02.571 [INFO][3975] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 12 01:41:02.678435 containerd[1452]: 2026-03-12 01:41:02.578 [INFO][3975] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 12 01:41:02.678435 containerd[1452]: 2026-03-12 01:41:02.578 [INFO][3975] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8d41c7e51718e3b6bb5ff11cf248a5957d69f567043cec1711819a3a5e0c0e1b" host="localhost" Mar 12 01:41:02.678435 containerd[1452]: 2026-03-12 01:41:02.587 [INFO][3975] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.8d41c7e51718e3b6bb5ff11cf248a5957d69f567043cec1711819a3a5e0c0e1b Mar 12 01:41:02.678435 containerd[1452]: 2026-03-12 01:41:02.597 [INFO][3975] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8d41c7e51718e3b6bb5ff11cf248a5957d69f567043cec1711819a3a5e0c0e1b" host="localhost" Mar 12 01:41:02.678435 containerd[1452]: 2026-03-12 01:41:02.610 [INFO][3975] 
ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.8d41c7e51718e3b6bb5ff11cf248a5957d69f567043cec1711819a3a5e0c0e1b" host="localhost" Mar 12 01:41:02.678435 containerd[1452]: 2026-03-12 01:41:02.610 [INFO][3975] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.8d41c7e51718e3b6bb5ff11cf248a5957d69f567043cec1711819a3a5e0c0e1b" host="localhost" Mar 12 01:41:02.678435 containerd[1452]: 2026-03-12 01:41:02.610 [INFO][3975] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:41:02.678435 containerd[1452]: 2026-03-12 01:41:02.615 [INFO][3975] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="8d41c7e51718e3b6bb5ff11cf248a5957d69f567043cec1711819a3a5e0c0e1b" HandleID="k8s-pod-network.8d41c7e51718e3b6bb5ff11cf248a5957d69f567043cec1711819a3a5e0c0e1b" Workload="localhost-k8s-calico--apiserver--59985c4b9f--ccgf4-eth0" Mar 12 01:41:02.679050 containerd[1452]: 2026-03-12 01:41:02.626 [INFO][3908] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8d41c7e51718e3b6bb5ff11cf248a5957d69f567043cec1711819a3a5e0c0e1b" Namespace="calico-system" Pod="calico-apiserver-59985c4b9f-ccgf4" WorkloadEndpoint="localhost-k8s-calico--apiserver--59985c4b9f--ccgf4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59985c4b9f--ccgf4-eth0", GenerateName:"calico-apiserver-59985c4b9f-", Namespace:"calico-system", SelfLink:"", UID:"12faea7f-d095-4822-97da-4521ffa445d4", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 40, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59985c4b9f", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-59985c4b9f-ccgf4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali355fa62d5e7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:41:02.679050 containerd[1452]: 2026-03-12 01:41:02.626 [INFO][3908] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="8d41c7e51718e3b6bb5ff11cf248a5957d69f567043cec1711819a3a5e0c0e1b" Namespace="calico-system" Pod="calico-apiserver-59985c4b9f-ccgf4" WorkloadEndpoint="localhost-k8s-calico--apiserver--59985c4b9f--ccgf4-eth0" Mar 12 01:41:02.679050 containerd[1452]: 2026-03-12 01:41:02.626 [INFO][3908] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali355fa62d5e7 ContainerID="8d41c7e51718e3b6bb5ff11cf248a5957d69f567043cec1711819a3a5e0c0e1b" Namespace="calico-system" Pod="calico-apiserver-59985c4b9f-ccgf4" WorkloadEndpoint="localhost-k8s-calico--apiserver--59985c4b9f--ccgf4-eth0" Mar 12 01:41:02.679050 containerd[1452]: 2026-03-12 01:41:02.632 [INFO][3908] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8d41c7e51718e3b6bb5ff11cf248a5957d69f567043cec1711819a3a5e0c0e1b" Namespace="calico-system" Pod="calico-apiserver-59985c4b9f-ccgf4" WorkloadEndpoint="localhost-k8s-calico--apiserver--59985c4b9f--ccgf4-eth0" Mar 12 01:41:02.679050 containerd[1452]: 2026-03-12 01:41:02.632 
[INFO][3908] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8d41c7e51718e3b6bb5ff11cf248a5957d69f567043cec1711819a3a5e0c0e1b" Namespace="calico-system" Pod="calico-apiserver-59985c4b9f-ccgf4" WorkloadEndpoint="localhost-k8s-calico--apiserver--59985c4b9f--ccgf4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59985c4b9f--ccgf4-eth0", GenerateName:"calico-apiserver-59985c4b9f-", Namespace:"calico-system", SelfLink:"", UID:"12faea7f-d095-4822-97da-4521ffa445d4", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 40, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59985c4b9f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8d41c7e51718e3b6bb5ff11cf248a5957d69f567043cec1711819a3a5e0c0e1b", Pod:"calico-apiserver-59985c4b9f-ccgf4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali355fa62d5e7", MAC:"b6:c8:98:c1:cc:11", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:41:02.679050 containerd[1452]: 2026-03-12 01:41:02.657 [INFO][3908] cni-plugin/k8s.go 532: Wrote updated 
endpoint to datastore ContainerID="8d41c7e51718e3b6bb5ff11cf248a5957d69f567043cec1711819a3a5e0c0e1b" Namespace="calico-system" Pod="calico-apiserver-59985c4b9f-ccgf4" WorkloadEndpoint="localhost-k8s-calico--apiserver--59985c4b9f--ccgf4-eth0" Mar 12 01:41:02.686935 systemd[1]: Started cri-containerd-a00df68a6d75825656d78577a3cf484251215f84d96ab03cc927f724389f3f57.scope - libcontainer container a00df68a6d75825656d78577a3cf484251215f84d96ab03cc927f724389f3f57. Mar 12 01:41:02.691824 systemd-networkd[1379]: calib6a115484ac: Link UP Mar 12 01:41:02.694039 systemd-networkd[1379]: calib6a115484ac: Gained carrier Mar 12 01:41:02.716864 systemd-resolved[1386]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 12 01:41:02.740427 containerd[1452]: 2026-03-12 01:41:02.388 [ERROR][3941] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 12 01:41:02.740427 containerd[1452]: 2026-03-12 01:41:02.411 [INFO][3941] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--59985c4b9f--jlw2l-eth0 calico-apiserver-59985c4b9f- calico-system 0c627ae3-0e93-40e3-80e0-9ad9ac70f824 891 0 2026-03-12 01:40:48 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:59985c4b9f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-59985c4b9f-jlw2l eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calib6a115484ac [] [] }} ContainerID="5d6576c587e835d8783d248635acc1c223e40b8c6ade01326a6cd522cfb8814f" Namespace="calico-system" Pod="calico-apiserver-59985c4b9f-jlw2l" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--59985c4b9f--jlw2l-" Mar 12 01:41:02.740427 containerd[1452]: 2026-03-12 01:41:02.412 [INFO][3941] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5d6576c587e835d8783d248635acc1c223e40b8c6ade01326a6cd522cfb8814f" Namespace="calico-system" Pod="calico-apiserver-59985c4b9f-jlw2l" WorkloadEndpoint="localhost-k8s-calico--apiserver--59985c4b9f--jlw2l-eth0" Mar 12 01:41:02.740427 containerd[1452]: 2026-03-12 01:41:02.566 [INFO][4010] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5d6576c587e835d8783d248635acc1c223e40b8c6ade01326a6cd522cfb8814f" HandleID="k8s-pod-network.5d6576c587e835d8783d248635acc1c223e40b8c6ade01326a6cd522cfb8814f" Workload="localhost-k8s-calico--apiserver--59985c4b9f--jlw2l-eth0" Mar 12 01:41:02.740427 containerd[1452]: 2026-03-12 01:41:02.590 [INFO][4010] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="5d6576c587e835d8783d248635acc1c223e40b8c6ade01326a6cd522cfb8814f" HandleID="k8s-pod-network.5d6576c587e835d8783d248635acc1c223e40b8c6ade01326a6cd522cfb8814f" Workload="localhost-k8s-calico--apiserver--59985c4b9f--jlw2l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00046e170), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-59985c4b9f-jlw2l", "timestamp":"2026-03-12 01:41:02.566337777 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000fec60)} Mar 12 01:41:02.740427 containerd[1452]: 2026-03-12 01:41:02.590 [INFO][4010] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:41:02.740427 containerd[1452]: 2026-03-12 01:41:02.610 [INFO][4010] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 12 01:41:02.740427 containerd[1452]: 2026-03-12 01:41:02.610 [INFO][4010] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 12 01:41:02.740427 containerd[1452]: 2026-03-12 01:41:02.617 [INFO][4010] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.5d6576c587e835d8783d248635acc1c223e40b8c6ade01326a6cd522cfb8814f" host="localhost" Mar 12 01:41:02.740427 containerd[1452]: 2026-03-12 01:41:02.626 [INFO][4010] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 12 01:41:02.740427 containerd[1452]: 2026-03-12 01:41:02.634 [INFO][4010] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 12 01:41:02.740427 containerd[1452]: 2026-03-12 01:41:02.641 [INFO][4010] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 12 01:41:02.740427 containerd[1452]: 2026-03-12 01:41:02.648 [INFO][4010] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 12 01:41:02.740427 containerd[1452]: 2026-03-12 01:41:02.649 [INFO][4010] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5d6576c587e835d8783d248635acc1c223e40b8c6ade01326a6cd522cfb8814f" host="localhost" Mar 12 01:41:02.740427 containerd[1452]: 2026-03-12 01:41:02.651 [INFO][4010] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.5d6576c587e835d8783d248635acc1c223e40b8c6ade01326a6cd522cfb8814f Mar 12 01:41:02.740427 containerd[1452]: 2026-03-12 01:41:02.666 [INFO][4010] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5d6576c587e835d8783d248635acc1c223e40b8c6ade01326a6cd522cfb8814f" host="localhost" Mar 12 01:41:02.740427 containerd[1452]: 2026-03-12 01:41:02.677 [INFO][4010] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.5d6576c587e835d8783d248635acc1c223e40b8c6ade01326a6cd522cfb8814f" host="localhost" Mar 12 01:41:02.740427 containerd[1452]: 2026-03-12 01:41:02.677 [INFO][4010] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.5d6576c587e835d8783d248635acc1c223e40b8c6ade01326a6cd522cfb8814f" host="localhost" Mar 12 01:41:02.740427 containerd[1452]: 2026-03-12 01:41:02.677 [INFO][4010] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:41:02.740427 containerd[1452]: 2026-03-12 01:41:02.677 [INFO][4010] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="5d6576c587e835d8783d248635acc1c223e40b8c6ade01326a6cd522cfb8814f" HandleID="k8s-pod-network.5d6576c587e835d8783d248635acc1c223e40b8c6ade01326a6cd522cfb8814f" Workload="localhost-k8s-calico--apiserver--59985c4b9f--jlw2l-eth0" Mar 12 01:41:02.741056 containerd[1452]: 2026-03-12 01:41:02.685 [INFO][3941] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5d6576c587e835d8783d248635acc1c223e40b8c6ade01326a6cd522cfb8814f" Namespace="calico-system" Pod="calico-apiserver-59985c4b9f-jlw2l" WorkloadEndpoint="localhost-k8s-calico--apiserver--59985c4b9f--jlw2l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59985c4b9f--jlw2l-eth0", GenerateName:"calico-apiserver-59985c4b9f-", Namespace:"calico-system", SelfLink:"", UID:"0c627ae3-0e93-40e3-80e0-9ad9ac70f824", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 40, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59985c4b9f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-59985c4b9f-jlw2l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib6a115484ac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:41:02.741056 containerd[1452]: 2026-03-12 01:41:02.685 [INFO][3941] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="5d6576c587e835d8783d248635acc1c223e40b8c6ade01326a6cd522cfb8814f" Namespace="calico-system" Pod="calico-apiserver-59985c4b9f-jlw2l" WorkloadEndpoint="localhost-k8s-calico--apiserver--59985c4b9f--jlw2l-eth0" Mar 12 01:41:02.741056 containerd[1452]: 2026-03-12 01:41:02.685 [INFO][3941] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib6a115484ac ContainerID="5d6576c587e835d8783d248635acc1c223e40b8c6ade01326a6cd522cfb8814f" Namespace="calico-system" Pod="calico-apiserver-59985c4b9f-jlw2l" WorkloadEndpoint="localhost-k8s-calico--apiserver--59985c4b9f--jlw2l-eth0" Mar 12 01:41:02.741056 containerd[1452]: 2026-03-12 01:41:02.698 [INFO][3941] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5d6576c587e835d8783d248635acc1c223e40b8c6ade01326a6cd522cfb8814f" Namespace="calico-system" Pod="calico-apiserver-59985c4b9f-jlw2l" WorkloadEndpoint="localhost-k8s-calico--apiserver--59985c4b9f--jlw2l-eth0" Mar 12 01:41:02.741056 containerd[1452]: 2026-03-12 01:41:02.703 [INFO][3941] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="5d6576c587e835d8783d248635acc1c223e40b8c6ade01326a6cd522cfb8814f" Namespace="calico-system" Pod="calico-apiserver-59985c4b9f-jlw2l" WorkloadEndpoint="localhost-k8s-calico--apiserver--59985c4b9f--jlw2l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59985c4b9f--jlw2l-eth0", GenerateName:"calico-apiserver-59985c4b9f-", Namespace:"calico-system", SelfLink:"", UID:"0c627ae3-0e93-40e3-80e0-9ad9ac70f824", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 40, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59985c4b9f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5d6576c587e835d8783d248635acc1c223e40b8c6ade01326a6cd522cfb8814f", Pod:"calico-apiserver-59985c4b9f-jlw2l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib6a115484ac", MAC:"a2:cd:5f:69:3c:4a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:41:02.741056 containerd[1452]: 2026-03-12 01:41:02.734 [INFO][3941] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="5d6576c587e835d8783d248635acc1c223e40b8c6ade01326a6cd522cfb8814f" Namespace="calico-system" Pod="calico-apiserver-59985c4b9f-jlw2l" WorkloadEndpoint="localhost-k8s-calico--apiserver--59985c4b9f--jlw2l-eth0" Mar 12 01:41:02.773811 containerd[1452]: time="2026-03-12T01:41:02.772946592Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:41:02.782605 containerd[1452]: time="2026-03-12T01:41:02.782089218Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:41:02.782605 containerd[1452]: time="2026-03-12T01:41:02.782118743Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:41:02.782605 containerd[1452]: time="2026-03-12T01:41:02.782215393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:41:02.818782 containerd[1452]: time="2026-03-12T01:41:02.816863315Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:41:02.818782 containerd[1452]: time="2026-03-12T01:41:02.816955747Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:41:02.818782 containerd[1452]: time="2026-03-12T01:41:02.816978780Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:41:02.824866 containerd[1452]: time="2026-03-12T01:41:02.824077314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:41:02.841054 containerd[1452]: time="2026-03-12T01:41:02.840962990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-5mp72,Uid:fc846323-168d-4e27-8031-2f3b65d5b16e,Namespace:calico-system,Attempt:1,} returns sandbox id \"a00df68a6d75825656d78577a3cf484251215f84d96ab03cc927f724389f3f57\"" Mar 12 01:41:02.846024 containerd[1452]: time="2026-03-12T01:41:02.844942491Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 12 01:41:02.868978 systemd[1]: Started cri-containerd-8d41c7e51718e3b6bb5ff11cf248a5957d69f567043cec1711819a3a5e0c0e1b.scope - libcontainer container 8d41c7e51718e3b6bb5ff11cf248a5957d69f567043cec1711819a3a5e0c0e1b. Mar 12 01:41:02.891963 systemd[1]: Started cri-containerd-5d6576c587e835d8783d248635acc1c223e40b8c6ade01326a6cd522cfb8814f.scope - libcontainer container 5d6576c587e835d8783d248635acc1c223e40b8c6ade01326a6cd522cfb8814f. Mar 12 01:41:02.915116 systemd[1]: Removed slice kubepods-besteffort-pod67144ade_3bd8_4b48_8f66_b2e43284d340.slice - libcontainer container kubepods-besteffort-pod67144ade_3bd8_4b48_8f66_b2e43284d340.slice. 
Mar 12 01:41:02.916338 systemd-networkd[1379]: cali7b2066cfc42: Link UP Mar 12 01:41:02.916949 systemd-networkd[1379]: cali7b2066cfc42: Gained carrier Mar 12 01:41:02.949138 systemd-resolved[1386]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 12 01:41:02.971200 containerd[1452]: 2026-03-12 01:41:02.370 [ERROR][3921] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 12 01:41:02.971200 containerd[1452]: 2026-03-12 01:41:02.394 [INFO][3921] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--p5zx6-eth0 csi-node-driver- calico-system 82884de9-9bde-4318-9c0a-85cc3f92ad06 889 0 2026-03-12 01:40:49 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-p5zx6 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali7b2066cfc42 [] [] }} ContainerID="88f76ccfcca5b1a0450b75a57de0def4dca06e2dc7c046c10d4c4c705aab0ef5" Namespace="calico-system" Pod="csi-node-driver-p5zx6" WorkloadEndpoint="localhost-k8s-csi--node--driver--p5zx6-" Mar 12 01:41:02.971200 containerd[1452]: 2026-03-12 01:41:02.394 [INFO][3921] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="88f76ccfcca5b1a0450b75a57de0def4dca06e2dc7c046c10d4c4c705aab0ef5" Namespace="calico-system" Pod="csi-node-driver-p5zx6" WorkloadEndpoint="localhost-k8s-csi--node--driver--p5zx6-eth0" Mar 12 01:41:02.971200 containerd[1452]: 2026-03-12 01:41:02.567 [INFO][4003] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="88f76ccfcca5b1a0450b75a57de0def4dca06e2dc7c046c10d4c4c705aab0ef5" HandleID="k8s-pod-network.88f76ccfcca5b1a0450b75a57de0def4dca06e2dc7c046c10d4c4c705aab0ef5" Workload="localhost-k8s-csi--node--driver--p5zx6-eth0" Mar 12 01:41:02.971200 containerd[1452]: 2026-03-12 01:41:02.596 [INFO][4003] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="88f76ccfcca5b1a0450b75a57de0def4dca06e2dc7c046c10d4c4c705aab0ef5" HandleID="k8s-pod-network.88f76ccfcca5b1a0450b75a57de0def4dca06e2dc7c046c10d4c4c705aab0ef5" Workload="localhost-k8s-csi--node--driver--p5zx6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000459ec0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-p5zx6", "timestamp":"2026-03-12 01:41:02.56765845 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000141760)} Mar 12 01:41:02.971200 containerd[1452]: 2026-03-12 01:41:02.601 [INFO][4003] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:41:02.971200 containerd[1452]: 2026-03-12 01:41:02.685 [INFO][4003] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 12 01:41:02.971200 containerd[1452]: 2026-03-12 01:41:02.685 [INFO][4003] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 12 01:41:02.971200 containerd[1452]: 2026-03-12 01:41:02.731 [INFO][4003] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.88f76ccfcca5b1a0450b75a57de0def4dca06e2dc7c046c10d4c4c705aab0ef5" host="localhost" Mar 12 01:41:02.971200 containerd[1452]: 2026-03-12 01:41:02.750 [INFO][4003] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 12 01:41:02.971200 containerd[1452]: 2026-03-12 01:41:02.769 [INFO][4003] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 12 01:41:02.971200 containerd[1452]: 2026-03-12 01:41:02.778 [INFO][4003] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 12 01:41:02.971200 containerd[1452]: 2026-03-12 01:41:02.788 [INFO][4003] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 12 01:41:02.971200 containerd[1452]: 2026-03-12 01:41:02.788 [INFO][4003] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.88f76ccfcca5b1a0450b75a57de0def4dca06e2dc7c046c10d4c4c705aab0ef5" host="localhost" Mar 12 01:41:02.971200 containerd[1452]: 2026-03-12 01:41:02.803 [INFO][4003] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.88f76ccfcca5b1a0450b75a57de0def4dca06e2dc7c046c10d4c4c705aab0ef5 Mar 12 01:41:02.971200 containerd[1452]: 2026-03-12 01:41:02.813 [INFO][4003] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.88f76ccfcca5b1a0450b75a57de0def4dca06e2dc7c046c10d4c4c705aab0ef5" host="localhost" Mar 12 01:41:02.971200 containerd[1452]: 2026-03-12 01:41:02.835 [INFO][4003] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.88f76ccfcca5b1a0450b75a57de0def4dca06e2dc7c046c10d4c4c705aab0ef5" host="localhost" Mar 12 01:41:02.971200 containerd[1452]: 2026-03-12 01:41:02.835 [INFO][4003] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.88f76ccfcca5b1a0450b75a57de0def4dca06e2dc7c046c10d4c4c705aab0ef5" host="localhost" Mar 12 01:41:02.971200 containerd[1452]: 2026-03-12 01:41:02.835 [INFO][4003] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:41:02.971200 containerd[1452]: 2026-03-12 01:41:02.835 [INFO][4003] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="88f76ccfcca5b1a0450b75a57de0def4dca06e2dc7c046c10d4c4c705aab0ef5" HandleID="k8s-pod-network.88f76ccfcca5b1a0450b75a57de0def4dca06e2dc7c046c10d4c4c705aab0ef5" Workload="localhost-k8s-csi--node--driver--p5zx6-eth0" Mar 12 01:41:02.973382 containerd[1452]: 2026-03-12 01:41:02.887 [INFO][3921] cni-plugin/k8s.go 418: Populated endpoint ContainerID="88f76ccfcca5b1a0450b75a57de0def4dca06e2dc7c046c10d4c4c705aab0ef5" Namespace="calico-system" Pod="csi-node-driver-p5zx6" WorkloadEndpoint="localhost-k8s-csi--node--driver--p5zx6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--p5zx6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"82884de9-9bde-4318-9c0a-85cc3f92ad06", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 40, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-p5zx6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7b2066cfc42", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:41:02.973382 containerd[1452]: 2026-03-12 01:41:02.887 [INFO][3921] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="88f76ccfcca5b1a0450b75a57de0def4dca06e2dc7c046c10d4c4c705aab0ef5" Namespace="calico-system" Pod="csi-node-driver-p5zx6" WorkloadEndpoint="localhost-k8s-csi--node--driver--p5zx6-eth0" Mar 12 01:41:02.973382 containerd[1452]: 2026-03-12 01:41:02.887 [INFO][3921] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7b2066cfc42 ContainerID="88f76ccfcca5b1a0450b75a57de0def4dca06e2dc7c046c10d4c4c705aab0ef5" Namespace="calico-system" Pod="csi-node-driver-p5zx6" WorkloadEndpoint="localhost-k8s-csi--node--driver--p5zx6-eth0" Mar 12 01:41:02.973382 containerd[1452]: 2026-03-12 01:41:02.915 [INFO][3921] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="88f76ccfcca5b1a0450b75a57de0def4dca06e2dc7c046c10d4c4c705aab0ef5" Namespace="calico-system" Pod="csi-node-driver-p5zx6" WorkloadEndpoint="localhost-k8s-csi--node--driver--p5zx6-eth0" Mar 12 01:41:02.973382 containerd[1452]: 2026-03-12 01:41:02.931 [INFO][3921] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="88f76ccfcca5b1a0450b75a57de0def4dca06e2dc7c046c10d4c4c705aab0ef5" 
Namespace="calico-system" Pod="csi-node-driver-p5zx6" WorkloadEndpoint="localhost-k8s-csi--node--driver--p5zx6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--p5zx6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"82884de9-9bde-4318-9c0a-85cc3f92ad06", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 40, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"88f76ccfcca5b1a0450b75a57de0def4dca06e2dc7c046c10d4c4c705aab0ef5", Pod:"csi-node-driver-p5zx6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7b2066cfc42", MAC:"da:e5:d7:7e:0a:6a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:41:02.973382 containerd[1452]: 2026-03-12 01:41:02.960 [INFO][3921] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="88f76ccfcca5b1a0450b75a57de0def4dca06e2dc7c046c10d4c4c705aab0ef5" Namespace="calico-system" Pod="csi-node-driver-p5zx6" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--p5zx6-eth0" Mar 12 01:41:03.006607 systemd-networkd[1379]: cali41ef96b4cbd: Link UP Mar 12 01:41:03.013393 systemd-networkd[1379]: cali41ef96b4cbd: Gained carrier Mar 12 01:41:03.043597 systemd-resolved[1386]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 12 01:41:03.070080 systemd[1]: Created slice kubepods-besteffort-pod39547a72_dd6a_4541_9c9a_9ee27ea1c765.slice - libcontainer container kubepods-besteffort-pod39547a72_dd6a_4541_9c9a_9ee27ea1c765.slice. Mar 12 01:41:03.093828 containerd[1452]: 2026-03-12 01:41:02.462 [ERROR][3958] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 12 01:41:03.093828 containerd[1452]: 2026-03-12 01:41:02.481 [INFO][3958] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--srzp7-eth0 coredns-674b8bbfcf- kube-system 5f897c92-34a7-46c6-8c17-18c4147d9a5f 892 0 2026-03-12 01:40:38 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-srzp7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali41ef96b4cbd [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4cd1979b9fbdede196775d933cc18bb89082aacb791232d44dc62790b5e9415a" Namespace="kube-system" Pod="coredns-674b8bbfcf-srzp7" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--srzp7-" Mar 12 01:41:03.093828 containerd[1452]: 2026-03-12 01:41:02.481 [INFO][3958] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4cd1979b9fbdede196775d933cc18bb89082aacb791232d44dc62790b5e9415a" Namespace="kube-system" Pod="coredns-674b8bbfcf-srzp7" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--srzp7-eth0" Mar 12 01:41:03.093828 containerd[1452]: 2026-03-12 01:41:02.598 [INFO][4039] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4cd1979b9fbdede196775d933cc18bb89082aacb791232d44dc62790b5e9415a" HandleID="k8s-pod-network.4cd1979b9fbdede196775d933cc18bb89082aacb791232d44dc62790b5e9415a" Workload="localhost-k8s-coredns--674b8bbfcf--srzp7-eth0" Mar 12 01:41:03.093828 containerd[1452]: 2026-03-12 01:41:02.615 [INFO][4039] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="4cd1979b9fbdede196775d933cc18bb89082aacb791232d44dc62790b5e9415a" HandleID="k8s-pod-network.4cd1979b9fbdede196775d933cc18bb89082aacb791232d44dc62790b5e9415a" Workload="localhost-k8s-coredns--674b8bbfcf--srzp7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a4460), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-srzp7", "timestamp":"2026-03-12 01:41:02.598268565 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003fb080)} Mar 12 01:41:03.093828 containerd[1452]: 2026-03-12 01:41:02.616 [INFO][4039] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:41:03.093828 containerd[1452]: 2026-03-12 01:41:02.850 [INFO][4039] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 12 01:41:03.093828 containerd[1452]: 2026-03-12 01:41:02.850 [INFO][4039] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 12 01:41:03.093828 containerd[1452]: 2026-03-12 01:41:02.859 [INFO][4039] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.4cd1979b9fbdede196775d933cc18bb89082aacb791232d44dc62790b5e9415a" host="localhost" Mar 12 01:41:03.093828 containerd[1452]: 2026-03-12 01:41:02.877 [INFO][4039] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 12 01:41:03.093828 containerd[1452]: 2026-03-12 01:41:02.891 [INFO][4039] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 12 01:41:03.093828 containerd[1452]: 2026-03-12 01:41:02.896 [INFO][4039] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 12 01:41:03.093828 containerd[1452]: 2026-03-12 01:41:02.904 [INFO][4039] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 12 01:41:03.093828 containerd[1452]: 2026-03-12 01:41:02.904 [INFO][4039] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4cd1979b9fbdede196775d933cc18bb89082aacb791232d44dc62790b5e9415a" host="localhost" Mar 12 01:41:03.093828 containerd[1452]: 2026-03-12 01:41:02.909 [INFO][4039] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.4cd1979b9fbdede196775d933cc18bb89082aacb791232d44dc62790b5e9415a Mar 12 01:41:03.093828 containerd[1452]: 2026-03-12 01:41:02.917 [INFO][4039] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4cd1979b9fbdede196775d933cc18bb89082aacb791232d44dc62790b5e9415a" host="localhost" Mar 12 01:41:03.093828 containerd[1452]: 2026-03-12 01:41:02.955 [INFO][4039] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.4cd1979b9fbdede196775d933cc18bb89082aacb791232d44dc62790b5e9415a" host="localhost" Mar 12 01:41:03.093828 containerd[1452]: 2026-03-12 01:41:02.964 [INFO][4039] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.4cd1979b9fbdede196775d933cc18bb89082aacb791232d44dc62790b5e9415a" host="localhost" Mar 12 01:41:03.093828 containerd[1452]: 2026-03-12 01:41:02.964 [INFO][4039] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:41:03.093828 containerd[1452]: 2026-03-12 01:41:02.965 [INFO][4039] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="4cd1979b9fbdede196775d933cc18bb89082aacb791232d44dc62790b5e9415a" HandleID="k8s-pod-network.4cd1979b9fbdede196775d933cc18bb89082aacb791232d44dc62790b5e9415a" Workload="localhost-k8s-coredns--674b8bbfcf--srzp7-eth0" Mar 12 01:41:03.095965 containerd[1452]: 2026-03-12 01:41:02.973 [INFO][3958] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4cd1979b9fbdede196775d933cc18bb89082aacb791232d44dc62790b5e9415a" Namespace="kube-system" Pod="coredns-674b8bbfcf-srzp7" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--srzp7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--srzp7-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"5f897c92-34a7-46c6-8c17-18c4147d9a5f", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 40, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-srzp7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali41ef96b4cbd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:41:03.095965 containerd[1452]: 2026-03-12 01:41:02.973 [INFO][3958] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="4cd1979b9fbdede196775d933cc18bb89082aacb791232d44dc62790b5e9415a" Namespace="kube-system" Pod="coredns-674b8bbfcf-srzp7" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--srzp7-eth0" Mar 12 01:41:03.095965 containerd[1452]: 2026-03-12 01:41:02.973 [INFO][3958] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali41ef96b4cbd ContainerID="4cd1979b9fbdede196775d933cc18bb89082aacb791232d44dc62790b5e9415a" Namespace="kube-system" Pod="coredns-674b8bbfcf-srzp7" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--srzp7-eth0" Mar 12 01:41:03.095965 containerd[1452]: 2026-03-12 01:41:03.017 [INFO][3958] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4cd1979b9fbdede196775d933cc18bb89082aacb791232d44dc62790b5e9415a" Namespace="kube-system" Pod="coredns-674b8bbfcf-srzp7" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--srzp7-eth0" Mar 12 01:41:03.095965 containerd[1452]: 2026-03-12 01:41:03.020 [INFO][3958] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4cd1979b9fbdede196775d933cc18bb89082aacb791232d44dc62790b5e9415a" Namespace="kube-system" Pod="coredns-674b8bbfcf-srzp7" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--srzp7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--srzp7-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"5f897c92-34a7-46c6-8c17-18c4147d9a5f", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 40, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4cd1979b9fbdede196775d933cc18bb89082aacb791232d44dc62790b5e9415a", Pod:"coredns-674b8bbfcf-srzp7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali41ef96b4cbd", MAC:"96:75:e0:74:52:1d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:41:03.095965 containerd[1452]: 2026-03-12 01:41:03.075 [INFO][3958] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4cd1979b9fbdede196775d933cc18bb89082aacb791232d44dc62790b5e9415a" Namespace="kube-system" Pod="coredns-674b8bbfcf-srzp7" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--srzp7-eth0" Mar 12 01:41:03.097909 containerd[1452]: time="2026-03-12T01:41:03.096918303Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:41:03.097909 containerd[1452]: time="2026-03-12T01:41:03.097018420Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:41:03.097909 containerd[1452]: time="2026-03-12T01:41:03.097049979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:41:03.097909 containerd[1452]: time="2026-03-12T01:41:03.097155005Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:41:03.126800 containerd[1452]: time="2026-03-12T01:41:03.126413732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59985c4b9f-jlw2l,Uid:0c627ae3-0e93-40e3-80e0-9ad9ac70f824,Namespace:calico-system,Attempt:1,} returns sandbox id \"5d6576c587e835d8783d248635acc1c223e40b8c6ade01326a6cd522cfb8814f\"" Mar 12 01:41:03.153969 systemd[1]: Started cri-containerd-88f76ccfcca5b1a0450b75a57de0def4dca06e2dc7c046c10d4c4c705aab0ef5.scope - libcontainer container 88f76ccfcca5b1a0450b75a57de0def4dca06e2dc7c046c10d4c4c705aab0ef5. Mar 12 01:41:03.202975 kubelet[2506]: I0312 01:41:03.202857 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/39547a72-dd6a-4541-9c9a-9ee27ea1c765-nginx-config\") pod \"whisker-5b58656d69-dxqf6\" (UID: \"39547a72-dd6a-4541-9c9a-9ee27ea1c765\") " pod="calico-system/whisker-5b58656d69-dxqf6" Mar 12 01:41:03.202975 kubelet[2506]: I0312 01:41:03.202937 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4c78g\" (UniqueName: \"kubernetes.io/projected/39547a72-dd6a-4541-9c9a-9ee27ea1c765-kube-api-access-4c78g\") pod \"whisker-5b58656d69-dxqf6\" (UID: \"39547a72-dd6a-4541-9c9a-9ee27ea1c765\") " pod="calico-system/whisker-5b58656d69-dxqf6" Mar 12 01:41:03.202975 kubelet[2506]: I0312 01:41:03.202967 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39547a72-dd6a-4541-9c9a-9ee27ea1c765-whisker-ca-bundle\") pod \"whisker-5b58656d69-dxqf6\" (UID: \"39547a72-dd6a-4541-9c9a-9ee27ea1c765\") " pod="calico-system/whisker-5b58656d69-dxqf6" Mar 12 01:41:03.203517 kubelet[2506]: I0312 01:41:03.202987 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/39547a72-dd6a-4541-9c9a-9ee27ea1c765-whisker-backend-key-pair\") pod \"whisker-5b58656d69-dxqf6\" (UID: \"39547a72-dd6a-4541-9c9a-9ee27ea1c765\") " pod="calico-system/whisker-5b58656d69-dxqf6" Mar 12 01:41:03.211846 systemd-resolved[1386]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 12 01:41:03.218904 systemd-networkd[1379]: cali19fc59a31f8: Link UP Mar 12 01:41:03.219868 systemd-networkd[1379]: cali19fc59a31f8: Gained carrier Mar 12 01:41:03.263844 containerd[1452]: 2026-03-12 01:41:02.436 [ERROR][3956] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 12 01:41:03.263844 containerd[1452]: 2026-03-12 01:41:02.453 [INFO][3956] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--549d945fd5--vwvh9-eth0 calico-kube-controllers-549d945fd5- calico-system e4a33aa0-00da-4a2a-8056-f0a9ced024aa 890 0 2026-03-12 01:40:49 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:549d945fd5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-549d945fd5-vwvh9 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali19fc59a31f8 [] [] }} ContainerID="973b438acad4f120f57f1e375b80d0164712415e0bd873777bdf3f77fa27cbda" Namespace="calico-system" Pod="calico-kube-controllers-549d945fd5-vwvh9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--549d945fd5--vwvh9-" Mar 12 01:41:03.263844 containerd[1452]: 2026-03-12 01:41:02.453 [INFO][3956] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="973b438acad4f120f57f1e375b80d0164712415e0bd873777bdf3f77fa27cbda" Namespace="calico-system" Pod="calico-kube-controllers-549d945fd5-vwvh9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--549d945fd5--vwvh9-eth0" Mar 12 01:41:03.263844 containerd[1452]: 2026-03-12 01:41:02.608 [INFO][4030] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="973b438acad4f120f57f1e375b80d0164712415e0bd873777bdf3f77fa27cbda" HandleID="k8s-pod-network.973b438acad4f120f57f1e375b80d0164712415e0bd873777bdf3f77fa27cbda" Workload="localhost-k8s-calico--kube--controllers--549d945fd5--vwvh9-eth0" Mar 12 01:41:03.263844 containerd[1452]: 2026-03-12 01:41:02.622 [INFO][4030] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="973b438acad4f120f57f1e375b80d0164712415e0bd873777bdf3f77fa27cbda" HandleID="k8s-pod-network.973b438acad4f120f57f1e375b80d0164712415e0bd873777bdf3f77fa27cbda" Workload="localhost-k8s-calico--kube--controllers--549d945fd5--vwvh9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139ba0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-549d945fd5-vwvh9", "timestamp":"2026-03-12 01:41:02.608820652 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000238dc0)} Mar 12 01:41:03.263844 containerd[1452]: 2026-03-12 01:41:02.622 [INFO][4030] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:41:03.263844 containerd[1452]: 2026-03-12 01:41:02.966 [INFO][4030] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 12 01:41:03.263844 containerd[1452]: 2026-03-12 01:41:02.967 [INFO][4030] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 12 01:41:03.263844 containerd[1452]: 2026-03-12 01:41:02.989 [INFO][4030] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.973b438acad4f120f57f1e375b80d0164712415e0bd873777bdf3f77fa27cbda" host="localhost" Mar 12 01:41:03.263844 containerd[1452]: 2026-03-12 01:41:03.109 [INFO][4030] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 12 01:41:03.263844 containerd[1452]: 2026-03-12 01:41:03.120 [INFO][4030] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 12 01:41:03.263844 containerd[1452]: 2026-03-12 01:41:03.129 [INFO][4030] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 12 01:41:03.263844 containerd[1452]: 2026-03-12 01:41:03.141 [INFO][4030] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 12 01:41:03.263844 containerd[1452]: 2026-03-12 01:41:03.141 [INFO][4030] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.973b438acad4f120f57f1e375b80d0164712415e0bd873777bdf3f77fa27cbda" host="localhost" Mar 12 01:41:03.263844 containerd[1452]: 2026-03-12 01:41:03.146 [INFO][4030] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.973b438acad4f120f57f1e375b80d0164712415e0bd873777bdf3f77fa27cbda Mar 12 01:41:03.263844 containerd[1452]: 2026-03-12 01:41:03.153 [INFO][4030] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.973b438acad4f120f57f1e375b80d0164712415e0bd873777bdf3f77fa27cbda" host="localhost" Mar 12 01:41:03.263844 containerd[1452]: 2026-03-12 01:41:03.179 [INFO][4030] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.973b438acad4f120f57f1e375b80d0164712415e0bd873777bdf3f77fa27cbda" host="localhost" Mar 12 01:41:03.263844 containerd[1452]: 2026-03-12 01:41:03.180 [INFO][4030] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.973b438acad4f120f57f1e375b80d0164712415e0bd873777bdf3f77fa27cbda" host="localhost" Mar 12 01:41:03.263844 containerd[1452]: 2026-03-12 01:41:03.181 [INFO][4030] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:41:03.263844 containerd[1452]: 2026-03-12 01:41:03.181 [INFO][4030] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="973b438acad4f120f57f1e375b80d0164712415e0bd873777bdf3f77fa27cbda" HandleID="k8s-pod-network.973b438acad4f120f57f1e375b80d0164712415e0bd873777bdf3f77fa27cbda" Workload="localhost-k8s-calico--kube--controllers--549d945fd5--vwvh9-eth0" Mar 12 01:41:03.264993 containerd[1452]: 2026-03-12 01:41:03.197 [INFO][3956] cni-plugin/k8s.go 418: Populated endpoint ContainerID="973b438acad4f120f57f1e375b80d0164712415e0bd873777bdf3f77fa27cbda" Namespace="calico-system" Pod="calico-kube-controllers-549d945fd5-vwvh9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--549d945fd5--vwvh9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--549d945fd5--vwvh9-eth0", GenerateName:"calico-kube-controllers-549d945fd5-", Namespace:"calico-system", SelfLink:"", UID:"e4a33aa0-00da-4a2a-8056-f0a9ced024aa", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 40, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"549d945fd5", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-549d945fd5-vwvh9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali19fc59a31f8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:41:03.264993 containerd[1452]: 2026-03-12 01:41:03.197 [INFO][3956] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="973b438acad4f120f57f1e375b80d0164712415e0bd873777bdf3f77fa27cbda" Namespace="calico-system" Pod="calico-kube-controllers-549d945fd5-vwvh9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--549d945fd5--vwvh9-eth0" Mar 12 01:41:03.264993 containerd[1452]: 2026-03-12 01:41:03.198 [INFO][3956] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali19fc59a31f8 ContainerID="973b438acad4f120f57f1e375b80d0164712415e0bd873777bdf3f77fa27cbda" Namespace="calico-system" Pod="calico-kube-controllers-549d945fd5-vwvh9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--549d945fd5--vwvh9-eth0" Mar 12 01:41:03.264993 containerd[1452]: 2026-03-12 01:41:03.221 [INFO][3956] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="973b438acad4f120f57f1e375b80d0164712415e0bd873777bdf3f77fa27cbda" Namespace="calico-system" Pod="calico-kube-controllers-549d945fd5-vwvh9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--549d945fd5--vwvh9-eth0" Mar 12 01:41:03.264993 containerd[1452]: 
2026-03-12 01:41:03.221 [INFO][3956] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="973b438acad4f120f57f1e375b80d0164712415e0bd873777bdf3f77fa27cbda" Namespace="calico-system" Pod="calico-kube-controllers-549d945fd5-vwvh9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--549d945fd5--vwvh9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--549d945fd5--vwvh9-eth0", GenerateName:"calico-kube-controllers-549d945fd5-", Namespace:"calico-system", SelfLink:"", UID:"e4a33aa0-00da-4a2a-8056-f0a9ced024aa", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 40, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"549d945fd5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"973b438acad4f120f57f1e375b80d0164712415e0bd873777bdf3f77fa27cbda", Pod:"calico-kube-controllers-549d945fd5-vwvh9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali19fc59a31f8", MAC:"be:63:cc:ee:91:c8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:41:03.264993 containerd[1452]: 
2026-03-12 01:41:03.242 [INFO][3956] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="973b438acad4f120f57f1e375b80d0164712415e0bd873777bdf3f77fa27cbda" Namespace="calico-system" Pod="calico-kube-controllers-549d945fd5-vwvh9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--549d945fd5--vwvh9-eth0" Mar 12 01:41:03.268146 containerd[1452]: time="2026-03-12T01:41:03.267443605Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:41:03.268146 containerd[1452]: time="2026-03-12T01:41:03.267636093Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:41:03.268146 containerd[1452]: time="2026-03-12T01:41:03.267662152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:41:03.268501 containerd[1452]: time="2026-03-12T01:41:03.267874578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:41:03.278056 containerd[1452]: time="2026-03-12T01:41:03.277819212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-p5zx6,Uid:82884de9-9bde-4318-9c0a-85cc3f92ad06,Namespace:calico-system,Attempt:1,} returns sandbox id \"88f76ccfcca5b1a0450b75a57de0def4dca06e2dc7c046c10d4c4c705aab0ef5\"" Mar 12 01:41:03.292037 containerd[1452]: time="2026-03-12T01:41:03.291268061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59985c4b9f-ccgf4,Uid:12faea7f-d095-4822-97da-4521ffa445d4,Namespace:calico-system,Attempt:1,} returns sandbox id \"8d41c7e51718e3b6bb5ff11cf248a5957d69f567043cec1711819a3a5e0c0e1b\"" Mar 12 01:41:03.328958 systemd[1]: Started cri-containerd-4cd1979b9fbdede196775d933cc18bb89082aacb791232d44dc62790b5e9415a.scope - libcontainer container 4cd1979b9fbdede196775d933cc18bb89082aacb791232d44dc62790b5e9415a. Mar 12 01:41:03.335446 systemd-networkd[1379]: cali6877bd29110: Link UP Mar 12 01:41:03.338274 systemd-networkd[1379]: cali6877bd29110: Gained carrier Mar 12 01:41:03.372894 containerd[1452]: time="2026-03-12T01:41:03.372687588Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:41:03.374156 containerd[1452]: time="2026-03-12T01:41:03.374060669Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:41:03.374156 containerd[1452]: time="2026-03-12T01:41:03.374141700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:41:03.374441 containerd[1452]: time="2026-03-12T01:41:03.374388399Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:41:03.375342 systemd-resolved[1386]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 12 01:41:03.375853 containerd[1452]: 2026-03-12 01:41:02.463 [ERROR][3965] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 12 01:41:03.375853 containerd[1452]: 2026-03-12 01:41:02.501 [INFO][3965] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--ldd4z-eth0 coredns-674b8bbfcf- kube-system c3461470-30f4-4645-8713-5c638c154c77 893 0 2026-03-12 01:40:38 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-ldd4z eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6877bd29110 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="cf70d6cde61e133c36c64e7f4bfa16980fbb35270cd18a35d47dd3bce8ebd92b" Namespace="kube-system" Pod="coredns-674b8bbfcf-ldd4z" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ldd4z-" Mar 12 01:41:03.375853 containerd[1452]: 2026-03-12 01:41:02.501 [INFO][3965] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cf70d6cde61e133c36c64e7f4bfa16980fbb35270cd18a35d47dd3bce8ebd92b" Namespace="kube-system" Pod="coredns-674b8bbfcf-ldd4z" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ldd4z-eth0" Mar 12 01:41:03.375853 containerd[1452]: 2026-03-12 01:41:02.629 [INFO][4046] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cf70d6cde61e133c36c64e7f4bfa16980fbb35270cd18a35d47dd3bce8ebd92b" 
HandleID="k8s-pod-network.cf70d6cde61e133c36c64e7f4bfa16980fbb35270cd18a35d47dd3bce8ebd92b" Workload="localhost-k8s-coredns--674b8bbfcf--ldd4z-eth0" Mar 12 01:41:03.375853 containerd[1452]: 2026-03-12 01:41:02.647 [INFO][4046] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="cf70d6cde61e133c36c64e7f4bfa16980fbb35270cd18a35d47dd3bce8ebd92b" HandleID="k8s-pod-network.cf70d6cde61e133c36c64e7f4bfa16980fbb35270cd18a35d47dd3bce8ebd92b" Workload="localhost-k8s-coredns--674b8bbfcf--ldd4z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003317b0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-ldd4z", "timestamp":"2026-03-12 01:41:02.629946516 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003c71e0)} Mar 12 01:41:03.375853 containerd[1452]: 2026-03-12 01:41:02.647 [INFO][4046] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:41:03.375853 containerd[1452]: 2026-03-12 01:41:03.186 [INFO][4046] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 12 01:41:03.375853 containerd[1452]: 2026-03-12 01:41:03.187 [INFO][4046] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 12 01:41:03.375853 containerd[1452]: 2026-03-12 01:41:03.196 [INFO][4046] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.cf70d6cde61e133c36c64e7f4bfa16980fbb35270cd18a35d47dd3bce8ebd92b" host="localhost" Mar 12 01:41:03.375853 containerd[1452]: 2026-03-12 01:41:03.233 [INFO][4046] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 12 01:41:03.375853 containerd[1452]: 2026-03-12 01:41:03.262 [INFO][4046] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 12 01:41:03.375853 containerd[1452]: 2026-03-12 01:41:03.267 [INFO][4046] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 12 01:41:03.375853 containerd[1452]: 2026-03-12 01:41:03.274 [INFO][4046] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 12 01:41:03.375853 containerd[1452]: 2026-03-12 01:41:03.274 [INFO][4046] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cf70d6cde61e133c36c64e7f4bfa16980fbb35270cd18a35d47dd3bce8ebd92b" host="localhost" Mar 12 01:41:03.375853 containerd[1452]: 2026-03-12 01:41:03.288 [INFO][4046] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.cf70d6cde61e133c36c64e7f4bfa16980fbb35270cd18a35d47dd3bce8ebd92b Mar 12 01:41:03.375853 containerd[1452]: 2026-03-12 01:41:03.295 [INFO][4046] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cf70d6cde61e133c36c64e7f4bfa16980fbb35270cd18a35d47dd3bce8ebd92b" host="localhost" Mar 12 01:41:03.375853 containerd[1452]: 2026-03-12 01:41:03.315 [INFO][4046] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.cf70d6cde61e133c36c64e7f4bfa16980fbb35270cd18a35d47dd3bce8ebd92b" host="localhost" Mar 12 01:41:03.375853 containerd[1452]: 2026-03-12 01:41:03.315 [INFO][4046] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.cf70d6cde61e133c36c64e7f4bfa16980fbb35270cd18a35d47dd3bce8ebd92b" host="localhost" Mar 12 01:41:03.375853 containerd[1452]: 2026-03-12 01:41:03.315 [INFO][4046] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:41:03.375853 containerd[1452]: 2026-03-12 01:41:03.315 [INFO][4046] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="cf70d6cde61e133c36c64e7f4bfa16980fbb35270cd18a35d47dd3bce8ebd92b" HandleID="k8s-pod-network.cf70d6cde61e133c36c64e7f4bfa16980fbb35270cd18a35d47dd3bce8ebd92b" Workload="localhost-k8s-coredns--674b8bbfcf--ldd4z-eth0" Mar 12 01:41:03.376506 containerd[1452]: 2026-03-12 01:41:03.325 [INFO][3965] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cf70d6cde61e133c36c64e7f4bfa16980fbb35270cd18a35d47dd3bce8ebd92b" Namespace="kube-system" Pod="coredns-674b8bbfcf-ldd4z" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ldd4z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--ldd4z-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c3461470-30f4-4645-8713-5c638c154c77", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 40, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-ldd4z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6877bd29110", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:41:03.376506 containerd[1452]: 2026-03-12 01:41:03.325 [INFO][3965] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="cf70d6cde61e133c36c64e7f4bfa16980fbb35270cd18a35d47dd3bce8ebd92b" Namespace="kube-system" Pod="coredns-674b8bbfcf-ldd4z" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ldd4z-eth0" Mar 12 01:41:03.376506 containerd[1452]: 2026-03-12 01:41:03.325 [INFO][3965] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6877bd29110 ContainerID="cf70d6cde61e133c36c64e7f4bfa16980fbb35270cd18a35d47dd3bce8ebd92b" Namespace="kube-system" Pod="coredns-674b8bbfcf-ldd4z" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ldd4z-eth0" Mar 12 01:41:03.376506 containerd[1452]: 2026-03-12 01:41:03.343 [INFO][3965] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cf70d6cde61e133c36c64e7f4bfa16980fbb35270cd18a35d47dd3bce8ebd92b" Namespace="kube-system" Pod="coredns-674b8bbfcf-ldd4z" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ldd4z-eth0" Mar 12 01:41:03.376506 containerd[1452]: 2026-03-12 01:41:03.347 [INFO][3965] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cf70d6cde61e133c36c64e7f4bfa16980fbb35270cd18a35d47dd3bce8ebd92b" Namespace="kube-system" Pod="coredns-674b8bbfcf-ldd4z" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ldd4z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--ldd4z-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c3461470-30f4-4645-8713-5c638c154c77", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 40, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cf70d6cde61e133c36c64e7f4bfa16980fbb35270cd18a35d47dd3bce8ebd92b", Pod:"coredns-674b8bbfcf-ldd4z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6877bd29110", MAC:"c6:59:a4:a7:30:84", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:41:03.376506 containerd[1452]: 2026-03-12 01:41:03.362 [INFO][3965] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cf70d6cde61e133c36c64e7f4bfa16980fbb35270cd18a35d47dd3bce8ebd92b" Namespace="kube-system" Pod="coredns-674b8bbfcf-ldd4z" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ldd4z-eth0" Mar 12 01:41:03.398886 containerd[1452]: time="2026-03-12T01:41:03.396668049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b58656d69-dxqf6,Uid:39547a72-dd6a-4541-9c9a-9ee27ea1c765,Namespace:calico-system,Attempt:0,}" Mar 12 01:41:03.429010 systemd[1]: Started cri-containerd-973b438acad4f120f57f1e375b80d0164712415e0bd873777bdf3f77fa27cbda.scope - libcontainer container 973b438acad4f120f57f1e375b80d0164712415e0bd873777bdf3f77fa27cbda. Mar 12 01:41:03.464637 containerd[1452]: time="2026-03-12T01:41:03.463484540Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:41:03.464637 containerd[1452]: time="2026-03-12T01:41:03.463586028Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:41:03.464637 containerd[1452]: time="2026-03-12T01:41:03.463602067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:41:03.464637 containerd[1452]: time="2026-03-12T01:41:03.463689210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:41:03.481386 systemd-resolved[1386]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 12 01:41:03.486942 containerd[1452]: time="2026-03-12T01:41:03.485691212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-srzp7,Uid:5f897c92-34a7-46c6-8c17-18c4147d9a5f,Namespace:kube-system,Attempt:1,} returns sandbox id \"4cd1979b9fbdede196775d933cc18bb89082aacb791232d44dc62790b5e9415a\"" Mar 12 01:41:03.490806 kubelet[2506]: E0312 01:41:03.490764 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:41:03.504118 containerd[1452]: time="2026-03-12T01:41:03.504080155Z" level=info msg="CreateContainer within sandbox \"4cd1979b9fbdede196775d933cc18bb89082aacb791232d44dc62790b5e9415a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 12 01:41:03.519906 systemd[1]: Started cri-containerd-cf70d6cde61e133c36c64e7f4bfa16980fbb35270cd18a35d47dd3bce8ebd92b.scope - libcontainer container cf70d6cde61e133c36c64e7f4bfa16980fbb35270cd18a35d47dd3bce8ebd92b. Mar 12 01:41:03.562984 systemd-resolved[1386]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 12 01:41:03.576798 containerd[1452]: time="2026-03-12T01:41:03.576760395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-549d945fd5-vwvh9,Uid:e4a33aa0-00da-4a2a-8056-f0a9ced024aa,Namespace:calico-system,Attempt:1,} returns sandbox id \"973b438acad4f120f57f1e375b80d0164712415e0bd873777bdf3f77fa27cbda\"" Mar 12 01:41:03.583218 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount59832189.mount: Deactivated successfully. Mar 12 01:41:03.595823 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount686108536.mount: Deactivated successfully. 
Mar 12 01:41:03.606849 containerd[1452]: time="2026-03-12T01:41:03.606810268Z" level=info msg="CreateContainer within sandbox \"4cd1979b9fbdede196775d933cc18bb89082aacb791232d44dc62790b5e9415a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"66075a25395f4d6f2978aa0b1181238c29c1e7cc618d59d641a257366ca85486\"" Mar 12 01:41:03.608845 containerd[1452]: time="2026-03-12T01:41:03.608665288Z" level=info msg="StartContainer for \"66075a25395f4d6f2978aa0b1181238c29c1e7cc618d59d641a257366ca85486\"" Mar 12 01:41:03.619450 containerd[1452]: time="2026-03-12T01:41:03.619407869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ldd4z,Uid:c3461470-30f4-4645-8713-5c638c154c77,Namespace:kube-system,Attempt:1,} returns sandbox id \"cf70d6cde61e133c36c64e7f4bfa16980fbb35270cd18a35d47dd3bce8ebd92b\"" Mar 12 01:41:03.621962 kubelet[2506]: E0312 01:41:03.621809 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:41:03.631064 containerd[1452]: time="2026-03-12T01:41:03.630969087Z" level=info msg="CreateContainer within sandbox \"cf70d6cde61e133c36c64e7f4bfa16980fbb35270cd18a35d47dd3bce8ebd92b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 12 01:41:03.657953 containerd[1452]: time="2026-03-12T01:41:03.656266726Z" level=info msg="CreateContainer within sandbox \"cf70d6cde61e133c36c64e7f4bfa16980fbb35270cd18a35d47dd3bce8ebd92b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c0ab0cbfb8d9877d75464fafa4b256de15b547dc79121e0f32eb944856bdd8ef\"" Mar 12 01:41:03.661790 containerd[1452]: time="2026-03-12T01:41:03.661646348Z" level=info msg="StartContainer for \"c0ab0cbfb8d9877d75464fafa4b256de15b547dc79121e0f32eb944856bdd8ef\"" Mar 12 01:41:03.681127 systemd[1]: Started cri-containerd-66075a25395f4d6f2978aa0b1181238c29c1e7cc618d59d641a257366ca85486.scope - libcontainer 
container 66075a25395f4d6f2978aa0b1181238c29c1e7cc618d59d641a257366ca85486. Mar 12 01:41:03.703349 kubelet[2506]: I0312 01:41:03.703187 2506 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67144ade-3bd8-4b48-8f66-b2e43284d340" path="/var/lib/kubelet/pods/67144ade-3bd8-4b48-8f66-b2e43284d340/volumes" Mar 12 01:41:03.714900 systemd-networkd[1379]: calia90eb29cd53: Link UP Mar 12 01:41:03.719291 systemd-networkd[1379]: calia90eb29cd53: Gained carrier Mar 12 01:41:03.752037 systemd[1]: Started cri-containerd-c0ab0cbfb8d9877d75464fafa4b256de15b547dc79121e0f32eb944856bdd8ef.scope - libcontainer container c0ab0cbfb8d9877d75464fafa4b256de15b547dc79121e0f32eb944856bdd8ef. Mar 12 01:41:03.771482 containerd[1452]: 2026-03-12 01:41:03.486 [ERROR][4476] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 12 01:41:03.771482 containerd[1452]: 2026-03-12 01:41:03.508 [INFO][4476] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5b58656d69--dxqf6-eth0 whisker-5b58656d69- calico-system 39547a72-dd6a-4541-9c9a-9ee27ea1c765 929 0 2026-03-12 01:41:03 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5b58656d69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5b58656d69-dxqf6 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calia90eb29cd53 [] [] }} ContainerID="e15ed48ed87c3a18a5414e9cb24e89fc4335d86fdbe8363db06855e058299f75" Namespace="calico-system" Pod="whisker-5b58656d69-dxqf6" WorkloadEndpoint="localhost-k8s-whisker--5b58656d69--dxqf6-" Mar 12 01:41:03.771482 containerd[1452]: 2026-03-12 01:41:03.509 [INFO][4476] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="e15ed48ed87c3a18a5414e9cb24e89fc4335d86fdbe8363db06855e058299f75" Namespace="calico-system" Pod="whisker-5b58656d69-dxqf6" WorkloadEndpoint="localhost-k8s-whisker--5b58656d69--dxqf6-eth0" Mar 12 01:41:03.771482 containerd[1452]: 2026-03-12 01:41:03.610 [INFO][4535] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e15ed48ed87c3a18a5414e9cb24e89fc4335d86fdbe8363db06855e058299f75" HandleID="k8s-pod-network.e15ed48ed87c3a18a5414e9cb24e89fc4335d86fdbe8363db06855e058299f75" Workload="localhost-k8s-whisker--5b58656d69--dxqf6-eth0" Mar 12 01:41:03.771482 containerd[1452]: 2026-03-12 01:41:03.628 [INFO][4535] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="e15ed48ed87c3a18a5414e9cb24e89fc4335d86fdbe8363db06855e058299f75" HandleID="k8s-pod-network.e15ed48ed87c3a18a5414e9cb24e89fc4335d86fdbe8363db06855e058299f75" Workload="localhost-k8s-whisker--5b58656d69--dxqf6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a5c20), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5b58656d69-dxqf6", "timestamp":"2026-03-12 01:41:03.61009893 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002e8160)} Mar 12 01:41:03.771482 containerd[1452]: 2026-03-12 01:41:03.628 [INFO][4535] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:41:03.771482 containerd[1452]: 2026-03-12 01:41:03.628 [INFO][4535] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 12 01:41:03.771482 containerd[1452]: 2026-03-12 01:41:03.628 [INFO][4535] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 12 01:41:03.771482 containerd[1452]: 2026-03-12 01:41:03.634 [INFO][4535] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.e15ed48ed87c3a18a5414e9cb24e89fc4335d86fdbe8363db06855e058299f75" host="localhost" Mar 12 01:41:03.771482 containerd[1452]: 2026-03-12 01:41:03.648 [INFO][4535] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 12 01:41:03.771482 containerd[1452]: 2026-03-12 01:41:03.659 [INFO][4535] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 12 01:41:03.771482 containerd[1452]: 2026-03-12 01:41:03.670 [INFO][4535] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 12 01:41:03.771482 containerd[1452]: 2026-03-12 01:41:03.674 [INFO][4535] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 12 01:41:03.771482 containerd[1452]: 2026-03-12 01:41:03.674 [INFO][4535] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e15ed48ed87c3a18a5414e9cb24e89fc4335d86fdbe8363db06855e058299f75" host="localhost" Mar 12 01:41:03.771482 containerd[1452]: 2026-03-12 01:41:03.677 [INFO][4535] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.e15ed48ed87c3a18a5414e9cb24e89fc4335d86fdbe8363db06855e058299f75 Mar 12 01:41:03.771482 containerd[1452]: 2026-03-12 01:41:03.685 [INFO][4535] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e15ed48ed87c3a18a5414e9cb24e89fc4335d86fdbe8363db06855e058299f75" host="localhost" Mar 12 01:41:03.771482 containerd[1452]: 2026-03-12 01:41:03.704 [INFO][4535] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.e15ed48ed87c3a18a5414e9cb24e89fc4335d86fdbe8363db06855e058299f75" host="localhost" Mar 12 01:41:03.771482 containerd[1452]: 2026-03-12 01:41:03.704 [INFO][4535] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.e15ed48ed87c3a18a5414e9cb24e89fc4335d86fdbe8363db06855e058299f75" host="localhost" Mar 12 01:41:03.771482 containerd[1452]: 2026-03-12 01:41:03.705 [INFO][4535] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:41:03.771482 containerd[1452]: 2026-03-12 01:41:03.705 [INFO][4535] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="e15ed48ed87c3a18a5414e9cb24e89fc4335d86fdbe8363db06855e058299f75" HandleID="k8s-pod-network.e15ed48ed87c3a18a5414e9cb24e89fc4335d86fdbe8363db06855e058299f75" Workload="localhost-k8s-whisker--5b58656d69--dxqf6-eth0" Mar 12 01:41:03.773280 containerd[1452]: 2026-03-12 01:41:03.709 [INFO][4476] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e15ed48ed87c3a18a5414e9cb24e89fc4335d86fdbe8363db06855e058299f75" Namespace="calico-system" Pod="whisker-5b58656d69-dxqf6" WorkloadEndpoint="localhost-k8s-whisker--5b58656d69--dxqf6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5b58656d69--dxqf6-eth0", GenerateName:"whisker-5b58656d69-", Namespace:"calico-system", SelfLink:"", UID:"39547a72-dd6a-4541-9c9a-9ee27ea1c765", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 41, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5b58656d69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5b58656d69-dxqf6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia90eb29cd53", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:41:03.773280 containerd[1452]: 2026-03-12 01:41:03.709 [INFO][4476] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="e15ed48ed87c3a18a5414e9cb24e89fc4335d86fdbe8363db06855e058299f75" Namespace="calico-system" Pod="whisker-5b58656d69-dxqf6" WorkloadEndpoint="localhost-k8s-whisker--5b58656d69--dxqf6-eth0" Mar 12 01:41:03.773280 containerd[1452]: 2026-03-12 01:41:03.709 [INFO][4476] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia90eb29cd53 ContainerID="e15ed48ed87c3a18a5414e9cb24e89fc4335d86fdbe8363db06855e058299f75" Namespace="calico-system" Pod="whisker-5b58656d69-dxqf6" WorkloadEndpoint="localhost-k8s-whisker--5b58656d69--dxqf6-eth0" Mar 12 01:41:03.773280 containerd[1452]: 2026-03-12 01:41:03.720 [INFO][4476] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e15ed48ed87c3a18a5414e9cb24e89fc4335d86fdbe8363db06855e058299f75" Namespace="calico-system" Pod="whisker-5b58656d69-dxqf6" WorkloadEndpoint="localhost-k8s-whisker--5b58656d69--dxqf6-eth0" Mar 12 01:41:03.773280 containerd[1452]: 2026-03-12 01:41:03.720 [INFO][4476] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e15ed48ed87c3a18a5414e9cb24e89fc4335d86fdbe8363db06855e058299f75" Namespace="calico-system" Pod="whisker-5b58656d69-dxqf6" 
WorkloadEndpoint="localhost-k8s-whisker--5b58656d69--dxqf6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5b58656d69--dxqf6-eth0", GenerateName:"whisker-5b58656d69-", Namespace:"calico-system", SelfLink:"", UID:"39547a72-dd6a-4541-9c9a-9ee27ea1c765", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 41, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5b58656d69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e15ed48ed87c3a18a5414e9cb24e89fc4335d86fdbe8363db06855e058299f75", Pod:"whisker-5b58656d69-dxqf6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia90eb29cd53", MAC:"36:df:a7:4d:0a:4d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:41:03.773280 containerd[1452]: 2026-03-12 01:41:03.754 [INFO][4476] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e15ed48ed87c3a18a5414e9cb24e89fc4335d86fdbe8363db06855e058299f75" Namespace="calico-system" Pod="whisker-5b58656d69-dxqf6" WorkloadEndpoint="localhost-k8s-whisker--5b58656d69--dxqf6-eth0" Mar 12 01:41:03.830689 containerd[1452]: time="2026-03-12T01:41:03.830525540Z" level=info msg="StartContainer for 
\"66075a25395f4d6f2978aa0b1181238c29c1e7cc618d59d641a257366ca85486\" returns successfully" Mar 12 01:41:03.830689 containerd[1452]: time="2026-03-12T01:41:03.830641586Z" level=info msg="StartContainer for \"c0ab0cbfb8d9877d75464fafa4b256de15b547dc79121e0f32eb944856bdd8ef\" returns successfully" Mar 12 01:41:03.851966 containerd[1452]: time="2026-03-12T01:41:03.849347208Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:41:03.851966 containerd[1452]: time="2026-03-12T01:41:03.849638290Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:41:03.851966 containerd[1452]: time="2026-03-12T01:41:03.849652276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:41:03.852946 containerd[1452]: time="2026-03-12T01:41:03.850229733Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:41:03.889802 systemd[1]: Started cri-containerd-e15ed48ed87c3a18a5414e9cb24e89fc4335d86fdbe8363db06855e058299f75.scope - libcontainer container e15ed48ed87c3a18a5414e9cb24e89fc4335d86fdbe8363db06855e058299f75. 
Mar 12 01:41:03.892786 kubelet[2506]: E0312 01:41:03.892645 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:41:03.911261 kubelet[2506]: E0312 01:41:03.911132 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:41:03.924980 systemd-resolved[1386]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 12 01:41:03.982173 kubelet[2506]: I0312 01:41:03.982074 2506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-ldd4z" podStartSLOduration=25.982054071 podStartE2EDuration="25.982054071s" podCreationTimestamp="2026-03-12 01:40:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 01:41:03.929873261 +0000 UTC m=+32.361329079" watchObservedRunningTime="2026-03-12 01:41:03.982054071 +0000 UTC m=+32.413509869" Mar 12 01:41:04.021149 containerd[1452]: time="2026-03-12T01:41:04.021034923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b58656d69-dxqf6,Uid:39547a72-dd6a-4541-9c9a-9ee27ea1c765,Namespace:calico-system,Attempt:0,} returns sandbox id \"e15ed48ed87c3a18a5414e9cb24e89fc4335d86fdbe8363db06855e058299f75\"" Mar 12 01:41:04.081251 systemd-networkd[1379]: calib6a115484ac: Gained IPv6LL Mar 12 01:41:04.142902 systemd-networkd[1379]: cali355fa62d5e7: Gained IPv6LL Mar 12 01:41:04.333947 systemd-networkd[1379]: calic779f0ed5d9: Gained IPv6LL Mar 12 01:41:04.335066 systemd-networkd[1379]: cali19fc59a31f8: Gained IPv6LL Mar 12 01:41:04.397988 systemd-networkd[1379]: cali41ef96b4cbd: Gained IPv6LL Mar 12 01:41:04.526883 systemd-networkd[1379]: cali7b2066cfc42: Gained IPv6LL Mar 12 
01:41:04.897627 containerd[1452]: time="2026-03-12T01:41:04.897405597Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:41:04.899651 containerd[1452]: time="2026-03-12T01:41:04.899414987Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Mar 12 01:41:04.900761 containerd[1452]: time="2026-03-12T01:41:04.900500458Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:41:04.904154 containerd[1452]: time="2026-03-12T01:41:04.904015303Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:41:04.905311 containerd[1452]: time="2026-03-12T01:41:04.905222547Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 2.060241064s" Mar 12 01:41:04.905311 containerd[1452]: time="2026-03-12T01:41:04.905298058Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Mar 12 01:41:04.908054 containerd[1452]: time="2026-03-12T01:41:04.907130605Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 12 01:41:04.914369 containerd[1452]: time="2026-03-12T01:41:04.913793713Z" level=info msg="CreateContainer within sandbox \"a00df68a6d75825656d78577a3cf484251215f84d96ab03cc927f724389f3f57\" 
for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 12 01:41:04.922398 kubelet[2506]: E0312 01:41:04.921409 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:41:04.922398 kubelet[2506]: E0312 01:41:04.921620 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:41:04.936026 kubelet[2506]: I0312 01:41:04.935462 2506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-srzp7" podStartSLOduration=26.935385699 podStartE2EDuration="26.935385699s" podCreationTimestamp="2026-03-12 01:40:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 01:41:03.990828101 +0000 UTC m=+32.422283920" watchObservedRunningTime="2026-03-12 01:41:04.935385699 +0000 UTC m=+33.366841497" Mar 12 01:41:04.943648 containerd[1452]: time="2026-03-12T01:41:04.943463278Z" level=info msg="CreateContainer within sandbox \"a00df68a6d75825656d78577a3cf484251215f84d96ab03cc927f724389f3f57\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"ed05721b7dc3cb877572f11ce3de2d9913c778894b2199b1309af992ab3050fe\"" Mar 12 01:41:04.950271 containerd[1452]: time="2026-03-12T01:41:04.947341540Z" level=info msg="StartContainer for \"ed05721b7dc3cb877572f11ce3de2d9913c778894b2199b1309af992ab3050fe\"" Mar 12 01:41:05.012434 systemd[1]: Started cri-containerd-ed05721b7dc3cb877572f11ce3de2d9913c778894b2199b1309af992ab3050fe.scope - libcontainer container ed05721b7dc3cb877572f11ce3de2d9913c778894b2199b1309af992ab3050fe. 
Mar 12 01:41:05.068861 containerd[1452]: time="2026-03-12T01:41:05.068793080Z" level=info msg="StartContainer for \"ed05721b7dc3cb877572f11ce3de2d9913c778894b2199b1309af992ab3050fe\" returns successfully"
Mar 12 01:41:05.230121 systemd-networkd[1379]: cali6877bd29110: Gained IPv6LL
Mar 12 01:41:05.423125 systemd-networkd[1379]: calia90eb29cd53: Gained IPv6LL
Mar 12 01:41:05.926932 kubelet[2506]: E0312 01:41:05.926878 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:41:05.929685 kubelet[2506]: E0312 01:41:05.929637 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:41:05.951034 kubelet[2506]: I0312 01:41:05.950917 2506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-5mp72" podStartSLOduration=15.887008479 podStartE2EDuration="17.950899904s" podCreationTimestamp="2026-03-12 01:40:48 +0000 UTC" firstStartedPulling="2026-03-12 01:41:02.84264169 +0000 UTC m=+31.274097489" lastFinishedPulling="2026-03-12 01:41:04.906533115 +0000 UTC m=+33.337988914" observedRunningTime="2026-03-12 01:41:05.950433219 +0000 UTC m=+34.381889057" watchObservedRunningTime="2026-03-12 01:41:05.950899904 +0000 UTC m=+34.382355702"
Mar 12 01:41:06.546345 containerd[1452]: time="2026-03-12T01:41:06.546260033Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:41:06.547680 containerd[1452]: time="2026-03-12T01:41:06.547538404Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780"
Mar 12 01:41:06.549414 containerd[1452]: time="2026-03-12T01:41:06.549330063Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:41:06.552637 containerd[1452]: time="2026-03-12T01:41:06.552605164Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:41:06.553606 containerd[1452]: time="2026-03-12T01:41:06.553516075Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 1.646349002s"
Mar 12 01:41:06.553606 containerd[1452]: time="2026-03-12T01:41:06.553541693Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\""
Mar 12 01:41:06.555457 containerd[1452]: time="2026-03-12T01:41:06.555287114Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\""
Mar 12 01:41:06.558532 containerd[1452]: time="2026-03-12T01:41:06.558314134Z" level=info msg="CreateContainer within sandbox \"5d6576c587e835d8783d248635acc1c223e40b8c6ade01326a6cd522cfb8814f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Mar 12 01:41:06.586838 containerd[1452]: time="2026-03-12T01:41:06.586302453Z" level=info msg="CreateContainer within sandbox \"5d6576c587e835d8783d248635acc1c223e40b8c6ade01326a6cd522cfb8814f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ace4847fe3a6cceb7500b806adf74b162704c6e439be29adf6754dfb2a9dc52e\""
Mar 12 01:41:06.587592 containerd[1452]: time="2026-03-12T01:41:06.587478425Z" level=info msg="StartContainer for \"ace4847fe3a6cceb7500b806adf74b162704c6e439be29adf6754dfb2a9dc52e\""
Mar 12 01:41:06.635029 systemd[1]: Started cri-containerd-ace4847fe3a6cceb7500b806adf74b162704c6e439be29adf6754dfb2a9dc52e.scope - libcontainer container ace4847fe3a6cceb7500b806adf74b162704c6e439be29adf6754dfb2a9dc52e.
Mar 12 01:41:06.700525 containerd[1452]: time="2026-03-12T01:41:06.700458294Z" level=info msg="StartContainer for \"ace4847fe3a6cceb7500b806adf74b162704c6e439be29adf6754dfb2a9dc52e\" returns successfully"
Mar 12 01:41:06.888451 kubelet[2506]: I0312 01:41:06.888217 2506 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 12 01:41:06.890035 kubelet[2506]: E0312 01:41:06.889512 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:41:06.936805 kubelet[2506]: E0312 01:41:06.936395 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:41:06.937308 kubelet[2506]: E0312 01:41:06.937189 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:41:06.938899 kubelet[2506]: I0312 01:41:06.938305 2506 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 12 01:41:06.938899 kubelet[2506]: E0312 01:41:06.938322 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:41:06.957674 kubelet[2506]: I0312 01:41:06.957529 2506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-59985c4b9f-jlw2l" podStartSLOduration=15.537518111 podStartE2EDuration="18.957513452s" podCreationTimestamp="2026-03-12 01:40:48 +0000 UTC" firstStartedPulling="2026-03-12 01:41:03.13493997 +0000 UTC m=+31.566395769" lastFinishedPulling="2026-03-12 01:41:06.554935302 +0000 UTC m=+34.986391110" observedRunningTime="2026-03-12 01:41:06.9572334 +0000 UTC m=+35.388689228" watchObservedRunningTime="2026-03-12 01:41:06.957513452 +0000 UTC m=+35.388969251"
Mar 12 01:41:07.281115 containerd[1452]: time="2026-03-12T01:41:07.281026088Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:41:07.283274 containerd[1452]: time="2026-03-12T01:41:07.283102964Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502"
Mar 12 01:41:07.285843 containerd[1452]: time="2026-03-12T01:41:07.285681894Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:41:07.290664 containerd[1452]: time="2026-03-12T01:41:07.290623920Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:41:07.292305 containerd[1452]: time="2026-03-12T01:41:07.292249429Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 736.931417ms"
Mar 12 01:41:07.292305 containerd[1452]: time="2026-03-12T01:41:07.292297970Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\""
Mar 12 01:41:07.295075 containerd[1452]: time="2026-03-12T01:41:07.294977357Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\""
Mar 12 01:41:07.304034 containerd[1452]: time="2026-03-12T01:41:07.303506648Z" level=info msg="CreateContainer within sandbox \"88f76ccfcca5b1a0450b75a57de0def4dca06e2dc7c046c10d4c4c705aab0ef5\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Mar 12 01:41:07.347041 containerd[1452]: time="2026-03-12T01:41:07.346989040Z" level=info msg="CreateContainer within sandbox \"88f76ccfcca5b1a0450b75a57de0def4dca06e2dc7c046c10d4c4c705aab0ef5\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"221d208a960eeb29e8eeb4953b8839ceda17f8f341a414df87bdccb753694160\""
Mar 12 01:41:07.348793 containerd[1452]: time="2026-03-12T01:41:07.348630158Z" level=info msg="StartContainer for \"221d208a960eeb29e8eeb4953b8839ceda17f8f341a414df87bdccb753694160\""
Mar 12 01:41:07.392923 systemd[1]: Started cri-containerd-221d208a960eeb29e8eeb4953b8839ceda17f8f341a414df87bdccb753694160.scope - libcontainer container 221d208a960eeb29e8eeb4953b8839ceda17f8f341a414df87bdccb753694160.
Mar 12 01:41:07.429403 containerd[1452]: time="2026-03-12T01:41:07.429142578Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:41:07.433754 containerd[1452]: time="2026-03-12T01:41:07.433002107Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77"
Mar 12 01:41:07.434686 containerd[1452]: time="2026-03-12T01:41:07.434633780Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 139.629513ms"
Mar 12 01:41:07.434844 containerd[1452]: time="2026-03-12T01:41:07.434685827Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\""
Mar 12 01:41:07.437379 containerd[1452]: time="2026-03-12T01:41:07.437278755Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\""
Mar 12 01:41:07.442171 containerd[1452]: time="2026-03-12T01:41:07.442088585Z" level=info msg="CreateContainer within sandbox \"8d41c7e51718e3b6bb5ff11cf248a5957d69f567043cec1711819a3a5e0c0e1b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Mar 12 01:41:07.459250 containerd[1452]: time="2026-03-12T01:41:07.459164866Z" level=info msg="StartContainer for \"221d208a960eeb29e8eeb4953b8839ceda17f8f341a414df87bdccb753694160\" returns successfully"
Mar 12 01:41:07.477133 containerd[1452]: time="2026-03-12T01:41:07.477046867Z" level=info msg="CreateContainer within sandbox \"8d41c7e51718e3b6bb5ff11cf248a5957d69f567043cec1711819a3a5e0c0e1b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e02bdba7dc34e2ce38cd19df8890b74bfe9b7c018a02d05285bd81d3f6aa4d89\""
Mar 12 01:41:07.478777 containerd[1452]: time="2026-03-12T01:41:07.478192351Z" level=info msg="StartContainer for \"e02bdba7dc34e2ce38cd19df8890b74bfe9b7c018a02d05285bd81d3f6aa4d89\""
Mar 12 01:41:07.522306 systemd[1]: Started cri-containerd-e02bdba7dc34e2ce38cd19df8890b74bfe9b7c018a02d05285bd81d3f6aa4d89.scope - libcontainer container e02bdba7dc34e2ce38cd19df8890b74bfe9b7c018a02d05285bd81d3f6aa4d89.
Mar 12 01:41:07.618855 kernel: calico-node[4892]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Mar 12 01:41:07.619528 containerd[1452]: time="2026-03-12T01:41:07.619067843Z" level=info msg="StartContainer for \"e02bdba7dc34e2ce38cd19df8890b74bfe9b7c018a02d05285bd81d3f6aa4d89\" returns successfully"
Mar 12 01:41:07.945330 kubelet[2506]: I0312 01:41:07.945306 2506 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 12 01:41:08.481273 systemd-networkd[1379]: vxlan.calico: Link UP
Mar 12 01:41:08.481281 systemd-networkd[1379]: vxlan.calico: Gained carrier
Mar 12 01:41:08.948119 kubelet[2506]: I0312 01:41:08.948085 2506 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 12 01:41:09.954515 containerd[1452]: time="2026-03-12T01:41:09.954450153Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:41:09.955790 containerd[1452]: time="2026-03-12T01:41:09.955697820Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348"
Mar 12 01:41:09.957244 containerd[1452]: time="2026-03-12T01:41:09.957197128Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:41:09.963654 containerd[1452]: time="2026-03-12T01:41:09.963543145Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:41:09.964480 containerd[1452]: time="2026-03-12T01:41:09.964411937Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 2.527007446s"
Mar 12 01:41:09.964480 containerd[1452]: time="2026-03-12T01:41:09.964466007Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\""
Mar 12 01:41:09.967648 containerd[1452]: time="2026-03-12T01:41:09.967621959Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\""
Mar 12 01:41:09.994959 containerd[1452]: time="2026-03-12T01:41:09.994899517Z" level=info msg="CreateContainer within sandbox \"973b438acad4f120f57f1e375b80d0164712415e0bd873777bdf3f77fa27cbda\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Mar 12 01:41:10.038953 containerd[1452]: time="2026-03-12T01:41:10.038849535Z" level=info msg="CreateContainer within sandbox \"973b438acad4f120f57f1e375b80d0164712415e0bd873777bdf3f77fa27cbda\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"dd6cfde97efd31e3109d9226d686fd91c1417fb6aa84b026b33b58424ab5631a\""
Mar 12 01:41:10.039418 containerd[1452]: time="2026-03-12T01:41:10.039380124Z" level=info msg="StartContainer for \"dd6cfde97efd31e3109d9226d686fd91c1417fb6aa84b026b33b58424ab5631a\""
Mar 12 01:41:10.114978 systemd[1]: Started cri-containerd-dd6cfde97efd31e3109d9226d686fd91c1417fb6aa84b026b33b58424ab5631a.scope - libcontainer container dd6cfde97efd31e3109d9226d686fd91c1417fb6aa84b026b33b58424ab5631a.
Mar 12 01:41:10.158009 systemd-networkd[1379]: vxlan.calico: Gained IPv6LL
Mar 12 01:41:10.180154 containerd[1452]: time="2026-03-12T01:41:10.178968866Z" level=info msg="StartContainer for \"dd6cfde97efd31e3109d9226d686fd91c1417fb6aa84b026b33b58424ab5631a\" returns successfully"
Mar 12 01:41:10.651434 containerd[1452]: time="2026-03-12T01:41:10.651363372Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:41:10.652910 containerd[1452]: time="2026-03-12T01:41:10.652816124Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889"
Mar 12 01:41:10.654176 containerd[1452]: time="2026-03-12T01:41:10.654085480Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:41:10.682328 containerd[1452]: time="2026-03-12T01:41:10.682106555Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:41:10.688123 containerd[1452]: time="2026-03-12T01:41:10.688075710Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 720.273254ms"
Mar 12 01:41:10.688123 containerd[1452]: time="2026-03-12T01:41:10.688118440Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\""
Mar 12 01:41:10.689456 containerd[1452]: time="2026-03-12T01:41:10.689366898Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\""
Mar 12 01:41:10.693788 containerd[1452]: time="2026-03-12T01:41:10.693596289Z" level=info msg="CreateContainer within sandbox \"e15ed48ed87c3a18a5414e9cb24e89fc4335d86fdbe8363db06855e058299f75\" for container &ContainerMetadata{Name:whisker,Attempt:0,}"
Mar 12 01:41:10.712691 containerd[1452]: time="2026-03-12T01:41:10.712617203Z" level=info msg="CreateContainer within sandbox \"e15ed48ed87c3a18a5414e9cb24e89fc4335d86fdbe8363db06855e058299f75\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"61c29ce9d5b71b2b77d5be0bbbcd1bb4a946ebf172933f5db76e6b726968f3a0\""
Mar 12 01:41:10.713750 containerd[1452]: time="2026-03-12T01:41:10.713648336Z" level=info msg="StartContainer for \"61c29ce9d5b71b2b77d5be0bbbcd1bb4a946ebf172933f5db76e6b726968f3a0\""
Mar 12 01:41:10.752928 systemd[1]: Started cri-containerd-61c29ce9d5b71b2b77d5be0bbbcd1bb4a946ebf172933f5db76e6b726968f3a0.scope - libcontainer container 61c29ce9d5b71b2b77d5be0bbbcd1bb4a946ebf172933f5db76e6b726968f3a0.
Mar 12 01:41:10.803211 containerd[1452]: time="2026-03-12T01:41:10.803156555Z" level=info msg="StartContainer for \"61c29ce9d5b71b2b77d5be0bbbcd1bb4a946ebf172933f5db76e6b726968f3a0\" returns successfully"
Mar 12 01:41:10.973141 kubelet[2506]: I0312 01:41:10.972516 2506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-59985c4b9f-ccgf4" podStartSLOduration=18.834579983 podStartE2EDuration="22.97248888s" podCreationTimestamp="2026-03-12 01:40:48 +0000 UTC" firstStartedPulling="2026-03-12 01:41:03.298318326 +0000 UTC m=+31.729774124" lastFinishedPulling="2026-03-12 01:41:07.436227223 +0000 UTC m=+35.867683021" observedRunningTime="2026-03-12 01:41:07.965457093 +0000 UTC m=+36.396912911" watchObservedRunningTime="2026-03-12 01:41:10.97248888 +0000 UTC m=+39.403944688"
Mar 12 01:41:10.973141 kubelet[2506]: I0312 01:41:10.972779 2506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-549d945fd5-vwvh9" podStartSLOduration=15.585535344 podStartE2EDuration="21.972772138s" podCreationTimestamp="2026-03-12 01:40:49 +0000 UTC" firstStartedPulling="2026-03-12 01:41:03.578261974 +0000 UTC m=+32.009717772" lastFinishedPulling="2026-03-12 01:41:09.965498768 +0000 UTC m=+38.396954566" observedRunningTime="2026-03-12 01:41:10.971844787 +0000 UTC m=+39.403300585" watchObservedRunningTime="2026-03-12 01:41:10.972772138 +0000 UTC m=+39.404227936"
Mar 12 01:41:11.482142 containerd[1452]: time="2026-03-12T01:41:11.482011984Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:41:11.483129 containerd[1452]: time="2026-03-12T01:41:11.483025798Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317"
Mar 12 01:41:11.484410 containerd[1452]: time="2026-03-12T01:41:11.484358529Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:41:11.487002 containerd[1452]: time="2026-03-12T01:41:11.486905875Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:41:11.487954 containerd[1452]: time="2026-03-12T01:41:11.487845472Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 798.45477ms"
Mar 12 01:41:11.487954 containerd[1452]: time="2026-03-12T01:41:11.487911416Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\""
Mar 12 01:41:11.489108 containerd[1452]: time="2026-03-12T01:41:11.489023605Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\""
Mar 12 01:41:11.493583 containerd[1452]: time="2026-03-12T01:41:11.493498744Z" level=info msg="CreateContainer within sandbox \"88f76ccfcca5b1a0450b75a57de0def4dca06e2dc7c046c10d4c4c705aab0ef5\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Mar 12 01:41:11.539442 containerd[1452]: time="2026-03-12T01:41:11.538902919Z" level=info msg="CreateContainer within sandbox \"88f76ccfcca5b1a0450b75a57de0def4dca06e2dc7c046c10d4c4c705aab0ef5\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"76b2f72f06bcc1358656dc140530422207b2e3e09838fe14bff7546ac55cea58\""
Mar 12 01:41:11.539995 containerd[1452]: time="2026-03-12T01:41:11.539861784Z" level=info msg="StartContainer for \"76b2f72f06bcc1358656dc140530422207b2e3e09838fe14bff7546ac55cea58\""
Mar 12 01:41:11.583905 systemd[1]: Started cri-containerd-76b2f72f06bcc1358656dc140530422207b2e3e09838fe14bff7546ac55cea58.scope - libcontainer container 76b2f72f06bcc1358656dc140530422207b2e3e09838fe14bff7546ac55cea58.
Mar 12 01:41:11.625210 containerd[1452]: time="2026-03-12T01:41:11.625126060Z" level=info msg="StartContainer for \"76b2f72f06bcc1358656dc140530422207b2e3e09838fe14bff7546ac55cea58\" returns successfully"
Mar 12 01:41:11.766309 kubelet[2506]: I0312 01:41:11.766032 2506 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Mar 12 01:41:11.768334 kubelet[2506]: I0312 01:41:11.768242 2506 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Mar 12 01:41:11.975908 kubelet[2506]: I0312 01:41:11.975158 2506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-p5zx6" podStartSLOduration=14.780533874 podStartE2EDuration="22.97513848s" podCreationTimestamp="2026-03-12 01:40:49 +0000 UTC" firstStartedPulling="2026-03-12 01:41:03.293989885 +0000 UTC m=+31.725445682" lastFinishedPulling="2026-03-12 01:41:11.48859449 +0000 UTC m=+39.920050288" observedRunningTime="2026-03-12 01:41:11.974303483 +0000 UTC m=+40.405759281" watchObservedRunningTime="2026-03-12 01:41:11.97513848 +0000 UTC m=+40.406594279"
Mar 12 01:41:11.980216 systemd[1]: run-containerd-runc-k8s.io-76b2f72f06bcc1358656dc140530422207b2e3e09838fe14bff7546ac55cea58-runc.6MRi9h.mount: Deactivated successfully.
Mar 12 01:41:12.079910 systemd[1]: Started sshd@7-10.0.0.151:22-10.0.0.1:55708.service - OpenSSH per-connection server daemon (10.0.0.1:55708).
Mar 12 01:41:12.174946 sshd[5321]: Accepted publickey for core from 10.0.0.1 port 55708 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME
Mar 12 01:41:12.177752 sshd[5321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:41:12.183242 systemd-logind[1437]: New session 8 of user core.
Mar 12 01:41:12.190973 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 12 01:41:12.435669 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1373771277.mount: Deactivated successfully.
Mar 12 01:41:12.465870 containerd[1452]: time="2026-03-12T01:41:12.465793090Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:41:12.483531 containerd[1452]: time="2026-03-12T01:41:12.483368651Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475"
Mar 12 01:41:12.485778 containerd[1452]: time="2026-03-12T01:41:12.484975569Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:41:12.488076 containerd[1452]: time="2026-03-12T01:41:12.487947483Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:41:12.489361 containerd[1452]: time="2026-03-12T01:41:12.488784986Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 999.732135ms"
Mar 12 01:41:12.489361 containerd[1452]: time="2026-03-12T01:41:12.488817536Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\""
Mar 12 01:41:12.497494 containerd[1452]: time="2026-03-12T01:41:12.497416399Z" level=info msg="CreateContainer within sandbox \"e15ed48ed87c3a18a5414e9cb24e89fc4335d86fdbe8363db06855e058299f75\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Mar 12 01:41:12.516536 containerd[1452]: time="2026-03-12T01:41:12.516307122Z" level=info msg="CreateContainer within sandbox \"e15ed48ed87c3a18a5414e9cb24e89fc4335d86fdbe8363db06855e058299f75\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"45e49d180a0c6122964cfd7df96031c296f10a426fa0638f96ac6c88503304c4\""
Mar 12 01:41:12.517816 containerd[1452]: time="2026-03-12T01:41:12.517470009Z" level=info msg="StartContainer for \"45e49d180a0c6122964cfd7df96031c296f10a426fa0638f96ac6c88503304c4\""
Mar 12 01:41:12.554866 systemd[1]: Started cri-containerd-45e49d180a0c6122964cfd7df96031c296f10a426fa0638f96ac6c88503304c4.scope - libcontainer container 45e49d180a0c6122964cfd7df96031c296f10a426fa0638f96ac6c88503304c4.
Mar 12 01:41:12.637953 sshd[5321]: pam_unix(sshd:session): session closed for user core
Mar 12 01:41:12.643404 systemd[1]: sshd@7-10.0.0.151:22-10.0.0.1:55708.service: Deactivated successfully.
Mar 12 01:41:12.647868 systemd[1]: session-8.scope: Deactivated successfully.
Mar 12 01:41:12.650843 systemd-logind[1437]: Session 8 logged out. Waiting for processes to exit.
Mar 12 01:41:12.653874 systemd-logind[1437]: Removed session 8.
Mar 12 01:41:12.657991 containerd[1452]: time="2026-03-12T01:41:12.657928975Z" level=info msg="StartContainer for \"45e49d180a0c6122964cfd7df96031c296f10a426fa0638f96ac6c88503304c4\" returns successfully"
Mar 12 01:41:12.980632 kubelet[2506]: I0312 01:41:12.980386 2506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5b58656d69-dxqf6" podStartSLOduration=1.514575214 podStartE2EDuration="9.980367093s" podCreationTimestamp="2026-03-12 01:41:03 +0000 UTC" firstStartedPulling="2026-03-12 01:41:04.024139694 +0000 UTC m=+32.455595493" lastFinishedPulling="2026-03-12 01:41:12.489931574 +0000 UTC m=+40.921387372" observedRunningTime="2026-03-12 01:41:12.980050854 +0000 UTC m=+41.411506652" watchObservedRunningTime="2026-03-12 01:41:12.980367093 +0000 UTC m=+41.411822921"
Mar 12 01:41:13.917090 kubelet[2506]: I0312 01:41:13.916975 2506 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 12 01:41:17.651507 systemd[1]: Started sshd@8-10.0.0.151:22-10.0.0.1:55724.service - OpenSSH per-connection server daemon (10.0.0.1:55724).
Mar 12 01:41:17.732027 sshd[5443]: Accepted publickey for core from 10.0.0.1 port 55724 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME
Mar 12 01:41:17.735679 sshd[5443]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:41:17.744337 systemd-logind[1437]: New session 9 of user core.
Mar 12 01:41:17.754053 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 12 01:41:17.946518 sshd[5443]: pam_unix(sshd:session): session closed for user core
Mar 12 01:41:17.951834 systemd[1]: sshd@8-10.0.0.151:22-10.0.0.1:55724.service: Deactivated successfully.
Mar 12 01:41:17.954649 systemd[1]: session-9.scope: Deactivated successfully.
Mar 12 01:41:17.956033 systemd-logind[1437]: Session 9 logged out. Waiting for processes to exit.
Mar 12 01:41:17.957990 systemd-logind[1437]: Removed session 9.
Mar 12 01:41:22.958987 systemd[1]: Started sshd@9-10.0.0.151:22-10.0.0.1:48690.service - OpenSSH per-connection server daemon (10.0.0.1:48690).
Mar 12 01:41:23.029046 sshd[5475]: Accepted publickey for core from 10.0.0.1 port 48690 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME
Mar 12 01:41:23.030970 sshd[5475]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:41:23.036693 systemd-logind[1437]: New session 10 of user core.
Mar 12 01:41:23.046926 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 12 01:41:23.191511 sshd[5475]: pam_unix(sshd:session): session closed for user core
Mar 12 01:41:23.199840 systemd[1]: sshd@9-10.0.0.151:22-10.0.0.1:48690.service: Deactivated successfully.
Mar 12 01:41:23.201694 systemd[1]: session-10.scope: Deactivated successfully.
Mar 12 01:41:23.203854 systemd-logind[1437]: Session 10 logged out. Waiting for processes to exit.
Mar 12 01:41:23.210291 systemd[1]: Started sshd@10-10.0.0.151:22-10.0.0.1:48692.service - OpenSSH per-connection server daemon (10.0.0.1:48692).
Mar 12 01:41:23.211670 systemd-logind[1437]: Removed session 10.
Mar 12 01:41:23.239454 sshd[5490]: Accepted publickey for core from 10.0.0.1 port 48692 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME
Mar 12 01:41:23.242306 sshd[5490]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:41:23.253301 systemd-logind[1437]: New session 11 of user core.
Mar 12 01:41:23.263012 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 12 01:41:23.451657 sshd[5490]: pam_unix(sshd:session): session closed for user core
Mar 12 01:41:23.461240 systemd[1]: sshd@10-10.0.0.151:22-10.0.0.1:48692.service: Deactivated successfully.
Mar 12 01:41:23.465648 systemd[1]: session-11.scope: Deactivated successfully.
Mar 12 01:41:23.468343 systemd-logind[1437]: Session 11 logged out. Waiting for processes to exit.
Mar 12 01:41:23.475424 systemd[1]: Started sshd@11-10.0.0.151:22-10.0.0.1:48708.service - OpenSSH per-connection server daemon (10.0.0.1:48708).
Mar 12 01:41:23.477498 systemd-logind[1437]: Removed session 11.
Mar 12 01:41:23.546126 sshd[5502]: Accepted publickey for core from 10.0.0.1 port 48708 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME
Mar 12 01:41:23.549239 sshd[5502]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:41:23.555072 systemd-logind[1437]: New session 12 of user core.
Mar 12 01:41:23.565922 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 12 01:41:23.702364 sshd[5502]: pam_unix(sshd:session): session closed for user core
Mar 12 01:41:23.707038 systemd[1]: sshd@11-10.0.0.151:22-10.0.0.1:48708.service: Deactivated successfully.
Mar 12 01:41:23.709054 systemd[1]: session-12.scope: Deactivated successfully.
Mar 12 01:41:23.710045 systemd-logind[1437]: Session 12 logged out. Waiting for processes to exit.
Mar 12 01:41:23.711442 systemd-logind[1437]: Removed session 12.
Mar 12 01:41:28.731496 systemd[1]: Started sshd@12-10.0.0.151:22-10.0.0.1:48724.service - OpenSSH per-connection server daemon (10.0.0.1:48724).
Mar 12 01:41:28.762653 sshd[5532]: Accepted publickey for core from 10.0.0.1 port 48724 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME
Mar 12 01:41:28.765085 sshd[5532]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:41:28.772040 systemd-logind[1437]: New session 13 of user core.
Mar 12 01:41:28.777230 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 12 01:41:28.932537 sshd[5532]: pam_unix(sshd:session): session closed for user core
Mar 12 01:41:28.936649 systemd[1]: sshd@12-10.0.0.151:22-10.0.0.1:48724.service: Deactivated successfully.
Mar 12 01:41:28.939840 systemd[1]: session-13.scope: Deactivated successfully.
Mar 12 01:41:28.944109 systemd-logind[1437]: Session 13 logged out. Waiting for processes to exit.
Mar 12 01:41:28.946287 systemd-logind[1437]: Removed session 13.
Mar 12 01:41:31.679766 containerd[1452]: time="2026-03-12T01:41:31.679640804Z" level=info msg="StopPodSandbox for \"3c7864fae11f47c112cd1c189075a93b1b7ebedb5b00b7a2928b092d3f89c7b2\""
Mar 12 01:41:31.918560 containerd[1452]: 2026-03-12 01:41:31.820 [WARNING][5573] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3c7864fae11f47c112cd1c189075a93b1b7ebedb5b00b7a2928b092d3f89c7b2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--p5zx6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"82884de9-9bde-4318-9c0a-85cc3f92ad06", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 40, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"88f76ccfcca5b1a0450b75a57de0def4dca06e2dc7c046c10d4c4c705aab0ef5", Pod:"csi-node-driver-p5zx6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7b2066cfc42", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 12 01:41:31.918560 containerd[1452]: 2026-03-12 01:41:31.821 [INFO][5573] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3c7864fae11f47c112cd1c189075a93b1b7ebedb5b00b7a2928b092d3f89c7b2"
Mar 12 01:41:31.918560 containerd[1452]: 2026-03-12 01:41:31.821 [INFO][5573] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3c7864fae11f47c112cd1c189075a93b1b7ebedb5b00b7a2928b092d3f89c7b2" iface="eth0" netns=""
Mar 12 01:41:31.918560 containerd[1452]: 2026-03-12 01:41:31.821 [INFO][5573] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3c7864fae11f47c112cd1c189075a93b1b7ebedb5b00b7a2928b092d3f89c7b2"
Mar 12 01:41:31.918560 containerd[1452]: 2026-03-12 01:41:31.821 [INFO][5573] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3c7864fae11f47c112cd1c189075a93b1b7ebedb5b00b7a2928b092d3f89c7b2"
Mar 12 01:41:31.918560 containerd[1452]: 2026-03-12 01:41:31.901 [INFO][5581] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3c7864fae11f47c112cd1c189075a93b1b7ebedb5b00b7a2928b092d3f89c7b2" HandleID="k8s-pod-network.3c7864fae11f47c112cd1c189075a93b1b7ebedb5b00b7a2928b092d3f89c7b2" Workload="localhost-k8s-csi--node--driver--p5zx6-eth0"
Mar 12 01:41:31.918560 containerd[1452]: 2026-03-12 01:41:31.902 [INFO][5581] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 12 01:41:31.918560 containerd[1452]: 2026-03-12 01:41:31.902 [INFO][5581] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 12 01:41:31.918560 containerd[1452]: 2026-03-12 01:41:31.910 [WARNING][5581] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="3c7864fae11f47c112cd1c189075a93b1b7ebedb5b00b7a2928b092d3f89c7b2" HandleID="k8s-pod-network.3c7864fae11f47c112cd1c189075a93b1b7ebedb5b00b7a2928b092d3f89c7b2" Workload="localhost-k8s-csi--node--driver--p5zx6-eth0"
Mar 12 01:41:31.918560 containerd[1452]: 2026-03-12 01:41:31.910 [INFO][5581] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3c7864fae11f47c112cd1c189075a93b1b7ebedb5b00b7a2928b092d3f89c7b2" HandleID="k8s-pod-network.3c7864fae11f47c112cd1c189075a93b1b7ebedb5b00b7a2928b092d3f89c7b2" Workload="localhost-k8s-csi--node--driver--p5zx6-eth0"
Mar 12 01:41:31.918560 containerd[1452]: 2026-03-12 01:41:31.912 [INFO][5581] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 12 01:41:31.918560 containerd[1452]: 2026-03-12 01:41:31.915 [INFO][5573] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3c7864fae11f47c112cd1c189075a93b1b7ebedb5b00b7a2928b092d3f89c7b2"
Mar 12 01:41:31.919171 containerd[1452]: time="2026-03-12T01:41:31.918618505Z" level=info msg="TearDown network for sandbox \"3c7864fae11f47c112cd1c189075a93b1b7ebedb5b00b7a2928b092d3f89c7b2\" successfully"
Mar 12 01:41:31.919171 containerd[1452]: time="2026-03-12T01:41:31.918644383Z" level=info msg="StopPodSandbox for \"3c7864fae11f47c112cd1c189075a93b1b7ebedb5b00b7a2928b092d3f89c7b2\" returns successfully"
Mar 12 01:41:31.953481 containerd[1452]: time="2026-03-12T01:41:31.953309428Z" level=info msg="RemovePodSandbox for \"3c7864fae11f47c112cd1c189075a93b1b7ebedb5b00b7a2928b092d3f89c7b2\""
Mar 12 01:41:31.956605 containerd[1452]: time="2026-03-12T01:41:31.956465905Z" level=info msg="Forcibly stopping sandbox \"3c7864fae11f47c112cd1c189075a93b1b7ebedb5b00b7a2928b092d3f89c7b2\""
Mar 12 01:41:32.060273 containerd[1452]: 2026-03-12 01:41:32.014 [WARNING][5598] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3c7864fae11f47c112cd1c189075a93b1b7ebedb5b00b7a2928b092d3f89c7b2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--p5zx6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"82884de9-9bde-4318-9c0a-85cc3f92ad06", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 40, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"88f76ccfcca5b1a0450b75a57de0def4dca06e2dc7c046c10d4c4c705aab0ef5", Pod:"csi-node-driver-p5zx6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7b2066cfc42", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 12 01:41:32.060273 containerd[1452]: 2026-03-12 01:41:32.014 [INFO][5598] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3c7864fae11f47c112cd1c189075a93b1b7ebedb5b00b7a2928b092d3f89c7b2"
Mar 12 01:41:32.060273 containerd[1452]: 2026-03-12 01:41:32.014 [INFO][5598] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3c7864fae11f47c112cd1c189075a93b1b7ebedb5b00b7a2928b092d3f89c7b2" iface="eth0" netns=""
Mar 12 01:41:32.060273 containerd[1452]: 2026-03-12 01:41:32.014 [INFO][5598] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3c7864fae11f47c112cd1c189075a93b1b7ebedb5b00b7a2928b092d3f89c7b2"
Mar 12 01:41:32.060273 containerd[1452]: 2026-03-12 01:41:32.014 [INFO][5598] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3c7864fae11f47c112cd1c189075a93b1b7ebedb5b00b7a2928b092d3f89c7b2"
Mar 12 01:41:32.060273 containerd[1452]: 2026-03-12 01:41:32.044 [INFO][5606] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3c7864fae11f47c112cd1c189075a93b1b7ebedb5b00b7a2928b092d3f89c7b2" HandleID="k8s-pod-network.3c7864fae11f47c112cd1c189075a93b1b7ebedb5b00b7a2928b092d3f89c7b2" Workload="localhost-k8s-csi--node--driver--p5zx6-eth0"
Mar 12 01:41:32.060273 containerd[1452]: 2026-03-12 01:41:32.044 [INFO][5606] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 12 01:41:32.060273 containerd[1452]: 2026-03-12 01:41:32.044 [INFO][5606] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 12 01:41:32.060273 containerd[1452]: 2026-03-12 01:41:32.052 [WARNING][5606] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="3c7864fae11f47c112cd1c189075a93b1b7ebedb5b00b7a2928b092d3f89c7b2" HandleID="k8s-pod-network.3c7864fae11f47c112cd1c189075a93b1b7ebedb5b00b7a2928b092d3f89c7b2" Workload="localhost-k8s-csi--node--driver--p5zx6-eth0"
Mar 12 01:41:32.060273 containerd[1452]: 2026-03-12 01:41:32.052 [INFO][5606] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3c7864fae11f47c112cd1c189075a93b1b7ebedb5b00b7a2928b092d3f89c7b2" HandleID="k8s-pod-network.3c7864fae11f47c112cd1c189075a93b1b7ebedb5b00b7a2928b092d3f89c7b2" Workload="localhost-k8s-csi--node--driver--p5zx6-eth0"
Mar 12 01:41:32.060273 containerd[1452]: 2026-03-12 01:41:32.054 [INFO][5606] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 12 01:41:32.060273 containerd[1452]: 2026-03-12 01:41:32.057 [INFO][5598] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3c7864fae11f47c112cd1c189075a93b1b7ebedb5b00b7a2928b092d3f89c7b2"
Mar 12 01:41:32.060273 containerd[1452]: time="2026-03-12T01:41:32.060231436Z" level=info msg="TearDown network for sandbox \"3c7864fae11f47c112cd1c189075a93b1b7ebedb5b00b7a2928b092d3f89c7b2\" successfully"
Mar 12 01:41:32.075680 containerd[1452]: time="2026-03-12T01:41:32.075562842Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3c7864fae11f47c112cd1c189075a93b1b7ebedb5b00b7a2928b092d3f89c7b2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 12 01:41:32.075680 containerd[1452]: time="2026-03-12T01:41:32.075676995Z" level=info msg="RemovePodSandbox \"3c7864fae11f47c112cd1c189075a93b1b7ebedb5b00b7a2928b092d3f89c7b2\" returns successfully"
Mar 12 01:41:32.081267 containerd[1452]: time="2026-03-12T01:41:32.081223936Z" level=info msg="StopPodSandbox for \"117f3bb79665e318a865c3386f7dc05e1b14f6a0802824e45598f20f591eb264\""
Mar 12 01:41:32.169186 containerd[1452]: 2026-03-12 01:41:32.128 [WARNING][5624] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="117f3bb79665e318a865c3386f7dc05e1b14f6a0802824e45598f20f591eb264" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59985c4b9f--ccgf4-eth0", GenerateName:"calico-apiserver-59985c4b9f-", Namespace:"calico-system", SelfLink:"", UID:"12faea7f-d095-4822-97da-4521ffa445d4", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 40, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59985c4b9f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8d41c7e51718e3b6bb5ff11cf248a5957d69f567043cec1711819a3a5e0c0e1b", Pod:"calico-apiserver-59985c4b9f-ccgf4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali355fa62d5e7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 12 01:41:32.169186 containerd[1452]: 2026-03-12 01:41:32.128 [INFO][5624] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="117f3bb79665e318a865c3386f7dc05e1b14f6a0802824e45598f20f591eb264"
Mar 12 01:41:32.169186 containerd[1452]: 2026-03-12 01:41:32.128 [INFO][5624] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="117f3bb79665e318a865c3386f7dc05e1b14f6a0802824e45598f20f591eb264" iface="eth0" netns=""
Mar 12 01:41:32.169186 containerd[1452]: 2026-03-12 01:41:32.128 [INFO][5624] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="117f3bb79665e318a865c3386f7dc05e1b14f6a0802824e45598f20f591eb264"
Mar 12 01:41:32.169186 containerd[1452]: 2026-03-12 01:41:32.128 [INFO][5624] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="117f3bb79665e318a865c3386f7dc05e1b14f6a0802824e45598f20f591eb264"
Mar 12 01:41:32.169186 containerd[1452]: 2026-03-12 01:41:32.155 [INFO][5633] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="117f3bb79665e318a865c3386f7dc05e1b14f6a0802824e45598f20f591eb264" HandleID="k8s-pod-network.117f3bb79665e318a865c3386f7dc05e1b14f6a0802824e45598f20f591eb264" Workload="localhost-k8s-calico--apiserver--59985c4b9f--ccgf4-eth0"
Mar 12 01:41:32.169186 containerd[1452]: 2026-03-12 01:41:32.155 [INFO][5633] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 12 01:41:32.169186 containerd[1452]: 2026-03-12 01:41:32.155 [INFO][5633] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 12 01:41:32.169186 containerd[1452]: 2026-03-12 01:41:32.162 [WARNING][5633] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="117f3bb79665e318a865c3386f7dc05e1b14f6a0802824e45598f20f591eb264" HandleID="k8s-pod-network.117f3bb79665e318a865c3386f7dc05e1b14f6a0802824e45598f20f591eb264" Workload="localhost-k8s-calico--apiserver--59985c4b9f--ccgf4-eth0"
Mar 12 01:41:32.169186 containerd[1452]: 2026-03-12 01:41:32.162 [INFO][5633] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="117f3bb79665e318a865c3386f7dc05e1b14f6a0802824e45598f20f591eb264" HandleID="k8s-pod-network.117f3bb79665e318a865c3386f7dc05e1b14f6a0802824e45598f20f591eb264" Workload="localhost-k8s-calico--apiserver--59985c4b9f--ccgf4-eth0"
Mar 12 01:41:32.169186 containerd[1452]: 2026-03-12 01:41:32.163 [INFO][5633] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 12 01:41:32.169186 containerd[1452]: 2026-03-12 01:41:32.166 [INFO][5624] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="117f3bb79665e318a865c3386f7dc05e1b14f6a0802824e45598f20f591eb264"
Mar 12 01:41:32.169782 containerd[1452]: time="2026-03-12T01:41:32.169214499Z" level=info msg="TearDown network for sandbox \"117f3bb79665e318a865c3386f7dc05e1b14f6a0802824e45598f20f591eb264\" successfully"
Mar 12 01:41:32.169782 containerd[1452]: time="2026-03-12T01:41:32.169236831Z" level=info msg="StopPodSandbox for \"117f3bb79665e318a865c3386f7dc05e1b14f6a0802824e45598f20f591eb264\" returns successfully"
Mar 12 01:41:32.169996 containerd[1452]: time="2026-03-12T01:41:32.169947585Z" level=info msg="RemovePodSandbox for \"117f3bb79665e318a865c3386f7dc05e1b14f6a0802824e45598f20f591eb264\""
Mar 12 01:41:32.169996 containerd[1452]: time="2026-03-12T01:41:32.169971339Z" level=info msg="Forcibly stopping sandbox \"117f3bb79665e318a865c3386f7dc05e1b14f6a0802824e45598f20f591eb264\""
Mar 12 01:41:32.260436 containerd[1452]: 2026-03-12 01:41:32.215 [WARNING][5651] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="117f3bb79665e318a865c3386f7dc05e1b14f6a0802824e45598f20f591eb264" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59985c4b9f--ccgf4-eth0", GenerateName:"calico-apiserver-59985c4b9f-", Namespace:"calico-system", SelfLink:"", UID:"12faea7f-d095-4822-97da-4521ffa445d4", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 40, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59985c4b9f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8d41c7e51718e3b6bb5ff11cf248a5957d69f567043cec1711819a3a5e0c0e1b", Pod:"calico-apiserver-59985c4b9f-ccgf4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali355fa62d5e7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 12 01:41:32.260436 containerd[1452]: 2026-03-12 01:41:32.215 [INFO][5651] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="117f3bb79665e318a865c3386f7dc05e1b14f6a0802824e45598f20f591eb264"
Mar 12 01:41:32.260436 containerd[1452]: 2026-03-12 01:41:32.215 [INFO][5651] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="117f3bb79665e318a865c3386f7dc05e1b14f6a0802824e45598f20f591eb264" iface="eth0" netns=""
Mar 12 01:41:32.260436 containerd[1452]: 2026-03-12 01:41:32.215 [INFO][5651] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="117f3bb79665e318a865c3386f7dc05e1b14f6a0802824e45598f20f591eb264"
Mar 12 01:41:32.260436 containerd[1452]: 2026-03-12 01:41:32.215 [INFO][5651] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="117f3bb79665e318a865c3386f7dc05e1b14f6a0802824e45598f20f591eb264"
Mar 12 01:41:32.260436 containerd[1452]: 2026-03-12 01:41:32.241 [INFO][5660] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="117f3bb79665e318a865c3386f7dc05e1b14f6a0802824e45598f20f591eb264" HandleID="k8s-pod-network.117f3bb79665e318a865c3386f7dc05e1b14f6a0802824e45598f20f591eb264" Workload="localhost-k8s-calico--apiserver--59985c4b9f--ccgf4-eth0"
Mar 12 01:41:32.260436 containerd[1452]: 2026-03-12 01:41:32.241 [INFO][5660] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 12 01:41:32.260436 containerd[1452]: 2026-03-12 01:41:32.241 [INFO][5660] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 12 01:41:32.260436 containerd[1452]: 2026-03-12 01:41:32.251 [WARNING][5660] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="117f3bb79665e318a865c3386f7dc05e1b14f6a0802824e45598f20f591eb264" HandleID="k8s-pod-network.117f3bb79665e318a865c3386f7dc05e1b14f6a0802824e45598f20f591eb264" Workload="localhost-k8s-calico--apiserver--59985c4b9f--ccgf4-eth0"
Mar 12 01:41:32.260436 containerd[1452]: 2026-03-12 01:41:32.251 [INFO][5660] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="117f3bb79665e318a865c3386f7dc05e1b14f6a0802824e45598f20f591eb264" HandleID="k8s-pod-network.117f3bb79665e318a865c3386f7dc05e1b14f6a0802824e45598f20f591eb264" Workload="localhost-k8s-calico--apiserver--59985c4b9f--ccgf4-eth0"
Mar 12 01:41:32.260436 containerd[1452]: 2026-03-12 01:41:32.253 [INFO][5660] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 12 01:41:32.260436 containerd[1452]: 2026-03-12 01:41:32.256 [INFO][5651] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="117f3bb79665e318a865c3386f7dc05e1b14f6a0802824e45598f20f591eb264"
Mar 12 01:41:32.260436 containerd[1452]: time="2026-03-12T01:41:32.260362187Z" level=info msg="TearDown network for sandbox \"117f3bb79665e318a865c3386f7dc05e1b14f6a0802824e45598f20f591eb264\" successfully"
Mar 12 01:41:32.270896 containerd[1452]: time="2026-03-12T01:41:32.270796995Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"117f3bb79665e318a865c3386f7dc05e1b14f6a0802824e45598f20f591eb264\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 12 01:41:32.270979 containerd[1452]: time="2026-03-12T01:41:32.270893986Z" level=info msg="RemovePodSandbox \"117f3bb79665e318a865c3386f7dc05e1b14f6a0802824e45598f20f591eb264\" returns successfully"
Mar 12 01:41:32.271561 containerd[1452]: time="2026-03-12T01:41:32.271504032Z" level=info msg="StopPodSandbox for \"f8fc731459665a6388119fc4065ace137ed2b5327c008a3dd262efae245622c0\""
Mar 12 01:41:32.371112 containerd[1452]: 2026-03-12 01:41:32.322 [WARNING][5678] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f8fc731459665a6388119fc4065ace137ed2b5327c008a3dd262efae245622c0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--ldd4z-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c3461470-30f4-4645-8713-5c638c154c77", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 40, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cf70d6cde61e133c36c64e7f4bfa16980fbb35270cd18a35d47dd3bce8ebd92b", Pod:"coredns-674b8bbfcf-ldd4z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6877bd29110", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 12 01:41:32.371112 containerd[1452]: 2026-03-12 01:41:32.322 [INFO][5678] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f8fc731459665a6388119fc4065ace137ed2b5327c008a3dd262efae245622c0"
Mar 12 01:41:32.371112 containerd[1452]: 2026-03-12 01:41:32.322 [INFO][5678] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f8fc731459665a6388119fc4065ace137ed2b5327c008a3dd262efae245622c0" iface="eth0" netns=""
Mar 12 01:41:32.371112 containerd[1452]: 2026-03-12 01:41:32.322 [INFO][5678] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f8fc731459665a6388119fc4065ace137ed2b5327c008a3dd262efae245622c0"
Mar 12 01:41:32.371112 containerd[1452]: 2026-03-12 01:41:32.322 [INFO][5678] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f8fc731459665a6388119fc4065ace137ed2b5327c008a3dd262efae245622c0"
Mar 12 01:41:32.371112 containerd[1452]: 2026-03-12 01:41:32.356 [INFO][5687] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f8fc731459665a6388119fc4065ace137ed2b5327c008a3dd262efae245622c0" HandleID="k8s-pod-network.f8fc731459665a6388119fc4065ace137ed2b5327c008a3dd262efae245622c0" Workload="localhost-k8s-coredns--674b8bbfcf--ldd4z-eth0"
Mar 12 01:41:32.371112 containerd[1452]: 2026-03-12 01:41:32.356 [INFO][5687] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 12 01:41:32.371112 containerd[1452]: 2026-03-12 01:41:32.356 [INFO][5687] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 12 01:41:32.371112 containerd[1452]: 2026-03-12 01:41:32.362 [WARNING][5687] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="f8fc731459665a6388119fc4065ace137ed2b5327c008a3dd262efae245622c0" HandleID="k8s-pod-network.f8fc731459665a6388119fc4065ace137ed2b5327c008a3dd262efae245622c0" Workload="localhost-k8s-coredns--674b8bbfcf--ldd4z-eth0"
Mar 12 01:41:32.371112 containerd[1452]: 2026-03-12 01:41:32.362 [INFO][5687] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f8fc731459665a6388119fc4065ace137ed2b5327c008a3dd262efae245622c0" HandleID="k8s-pod-network.f8fc731459665a6388119fc4065ace137ed2b5327c008a3dd262efae245622c0" Workload="localhost-k8s-coredns--674b8bbfcf--ldd4z-eth0"
Mar 12 01:41:32.371112 containerd[1452]: 2026-03-12 01:41:32.365 [INFO][5687] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 12 01:41:32.371112 containerd[1452]: 2026-03-12 01:41:32.368 [INFO][5678] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f8fc731459665a6388119fc4065ace137ed2b5327c008a3dd262efae245622c0"
Mar 12 01:41:32.371832 containerd[1452]: time="2026-03-12T01:41:32.371204750Z" level=info msg="TearDown network for sandbox \"f8fc731459665a6388119fc4065ace137ed2b5327c008a3dd262efae245622c0\" successfully"
Mar 12 01:41:32.371832 containerd[1452]: time="2026-03-12T01:41:32.371227502Z" level=info msg="StopPodSandbox for \"f8fc731459665a6388119fc4065ace137ed2b5327c008a3dd262efae245622c0\" returns successfully"
Mar 12 01:41:32.371971 containerd[1452]: time="2026-03-12T01:41:32.371937692Z" level=info msg="RemovePodSandbox for \"f8fc731459665a6388119fc4065ace137ed2b5327c008a3dd262efae245622c0\""
Mar 12 01:41:32.371971 containerd[1452]: time="2026-03-12T01:41:32.371963681Z" level=info msg="Forcibly stopping sandbox \"f8fc731459665a6388119fc4065ace137ed2b5327c008a3dd262efae245622c0\""
Mar 12 01:41:32.472662 containerd[1452]: 2026-03-12 01:41:32.425 [WARNING][5704] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f8fc731459665a6388119fc4065ace137ed2b5327c008a3dd262efae245622c0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--ldd4z-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c3461470-30f4-4645-8713-5c638c154c77", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 40, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cf70d6cde61e133c36c64e7f4bfa16980fbb35270cd18a35d47dd3bce8ebd92b", Pod:"coredns-674b8bbfcf-ldd4z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6877bd29110", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 12 01:41:32.472662 containerd[1452]: 2026-03-12 01:41:32.426 [INFO][5704] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f8fc731459665a6388119fc4065ace137ed2b5327c008a3dd262efae245622c0"
Mar 12 01:41:32.472662 containerd[1452]: 2026-03-12 01:41:32.426 [INFO][5704] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f8fc731459665a6388119fc4065ace137ed2b5327c008a3dd262efae245622c0" iface="eth0" netns=""
Mar 12 01:41:32.472662 containerd[1452]: 2026-03-12 01:41:32.426 [INFO][5704] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f8fc731459665a6388119fc4065ace137ed2b5327c008a3dd262efae245622c0"
Mar 12 01:41:32.472662 containerd[1452]: 2026-03-12 01:41:32.426 [INFO][5704] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f8fc731459665a6388119fc4065ace137ed2b5327c008a3dd262efae245622c0"
Mar 12 01:41:32.472662 containerd[1452]: 2026-03-12 01:41:32.456 [INFO][5712] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f8fc731459665a6388119fc4065ace137ed2b5327c008a3dd262efae245622c0" HandleID="k8s-pod-network.f8fc731459665a6388119fc4065ace137ed2b5327c008a3dd262efae245622c0" Workload="localhost-k8s-coredns--674b8bbfcf--ldd4z-eth0"
Mar 12 01:41:32.472662 containerd[1452]: 2026-03-12 01:41:32.456 [INFO][5712] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 12 01:41:32.472662 containerd[1452]: 2026-03-12 01:41:32.456 [INFO][5712] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 12 01:41:32.472662 containerd[1452]: 2026-03-12 01:41:32.464 [WARNING][5712] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="f8fc731459665a6388119fc4065ace137ed2b5327c008a3dd262efae245622c0" HandleID="k8s-pod-network.f8fc731459665a6388119fc4065ace137ed2b5327c008a3dd262efae245622c0" Workload="localhost-k8s-coredns--674b8bbfcf--ldd4z-eth0"
Mar 12 01:41:32.472662 containerd[1452]: 2026-03-12 01:41:32.464 [INFO][5712] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f8fc731459665a6388119fc4065ace137ed2b5327c008a3dd262efae245622c0" HandleID="k8s-pod-network.f8fc731459665a6388119fc4065ace137ed2b5327c008a3dd262efae245622c0" Workload="localhost-k8s-coredns--674b8bbfcf--ldd4z-eth0"
Mar 12 01:41:32.472662 containerd[1452]: 2026-03-12 01:41:32.466 [INFO][5712] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 12 01:41:32.472662 containerd[1452]: 2026-03-12 01:41:32.469 [INFO][5704] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f8fc731459665a6388119fc4065ace137ed2b5327c008a3dd262efae245622c0"
Mar 12 01:41:32.472662 containerd[1452]: time="2026-03-12T01:41:32.472536940Z" level=info msg="TearDown network for sandbox \"f8fc731459665a6388119fc4065ace137ed2b5327c008a3dd262efae245622c0\" successfully"
Mar 12 01:41:32.483020 containerd[1452]: time="2026-03-12T01:41:32.482983040Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f8fc731459665a6388119fc4065ace137ed2b5327c008a3dd262efae245622c0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 12 01:41:32.483066 containerd[1452]: time="2026-03-12T01:41:32.483049093Z" level=info msg="RemovePodSandbox \"f8fc731459665a6388119fc4065ace137ed2b5327c008a3dd262efae245622c0\" returns successfully" Mar 12 01:41:32.483574 containerd[1452]: time="2026-03-12T01:41:32.483540305Z" level=info msg="StopPodSandbox for \"311a472fcf1235aa3a42ce2f63599d1d26c057fa4a92ce3fea2f3215975eb130\"" Mar 12 01:41:32.573254 containerd[1452]: 2026-03-12 01:41:32.528 [WARNING][5730] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="311a472fcf1235aa3a42ce2f63599d1d26c057fa4a92ce3fea2f3215975eb130" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--srzp7-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"5f897c92-34a7-46c6-8c17-18c4147d9a5f", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 40, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4cd1979b9fbdede196775d933cc18bb89082aacb791232d44dc62790b5e9415a", Pod:"coredns-674b8bbfcf-srzp7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali41ef96b4cbd", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:41:32.573254 containerd[1452]: 2026-03-12 01:41:32.528 [INFO][5730] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="311a472fcf1235aa3a42ce2f63599d1d26c057fa4a92ce3fea2f3215975eb130" Mar 12 01:41:32.573254 containerd[1452]: 2026-03-12 01:41:32.528 [INFO][5730] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="311a472fcf1235aa3a42ce2f63599d1d26c057fa4a92ce3fea2f3215975eb130" iface="eth0" netns="" Mar 12 01:41:32.573254 containerd[1452]: 2026-03-12 01:41:32.528 [INFO][5730] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="311a472fcf1235aa3a42ce2f63599d1d26c057fa4a92ce3fea2f3215975eb130" Mar 12 01:41:32.573254 containerd[1452]: 2026-03-12 01:41:32.528 [INFO][5730] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="311a472fcf1235aa3a42ce2f63599d1d26c057fa4a92ce3fea2f3215975eb130" Mar 12 01:41:32.573254 containerd[1452]: 2026-03-12 01:41:32.558 [INFO][5739] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="311a472fcf1235aa3a42ce2f63599d1d26c057fa4a92ce3fea2f3215975eb130" HandleID="k8s-pod-network.311a472fcf1235aa3a42ce2f63599d1d26c057fa4a92ce3fea2f3215975eb130" Workload="localhost-k8s-coredns--674b8bbfcf--srzp7-eth0" Mar 12 01:41:32.573254 containerd[1452]: 2026-03-12 01:41:32.558 [INFO][5739] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 12 01:41:32.573254 containerd[1452]: 2026-03-12 01:41:32.558 [INFO][5739] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:41:32.573254 containerd[1452]: 2026-03-12 01:41:32.565 [WARNING][5739] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="311a472fcf1235aa3a42ce2f63599d1d26c057fa4a92ce3fea2f3215975eb130" HandleID="k8s-pod-network.311a472fcf1235aa3a42ce2f63599d1d26c057fa4a92ce3fea2f3215975eb130" Workload="localhost-k8s-coredns--674b8bbfcf--srzp7-eth0" Mar 12 01:41:32.573254 containerd[1452]: 2026-03-12 01:41:32.565 [INFO][5739] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="311a472fcf1235aa3a42ce2f63599d1d26c057fa4a92ce3fea2f3215975eb130" HandleID="k8s-pod-network.311a472fcf1235aa3a42ce2f63599d1d26c057fa4a92ce3fea2f3215975eb130" Workload="localhost-k8s-coredns--674b8bbfcf--srzp7-eth0" Mar 12 01:41:32.573254 containerd[1452]: 2026-03-12 01:41:32.567 [INFO][5739] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:41:32.573254 containerd[1452]: 2026-03-12 01:41:32.570 [INFO][5730] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="311a472fcf1235aa3a42ce2f63599d1d26c057fa4a92ce3fea2f3215975eb130" Mar 12 01:41:32.573254 containerd[1452]: time="2026-03-12T01:41:32.573203019Z" level=info msg="TearDown network for sandbox \"311a472fcf1235aa3a42ce2f63599d1d26c057fa4a92ce3fea2f3215975eb130\" successfully" Mar 12 01:41:32.573254 containerd[1452]: time="2026-03-12T01:41:32.573227605Z" level=info msg="StopPodSandbox for \"311a472fcf1235aa3a42ce2f63599d1d26c057fa4a92ce3fea2f3215975eb130\" returns successfully" Mar 12 01:41:32.573935 containerd[1452]: time="2026-03-12T01:41:32.573854898Z" level=info msg="RemovePodSandbox for \"311a472fcf1235aa3a42ce2f63599d1d26c057fa4a92ce3fea2f3215975eb130\"" Mar 12 01:41:32.573935 containerd[1452]: time="2026-03-12T01:41:32.573878482Z" level=info msg="Forcibly stopping sandbox \"311a472fcf1235aa3a42ce2f63599d1d26c057fa4a92ce3fea2f3215975eb130\"" Mar 12 01:41:32.680184 containerd[1452]: 2026-03-12 01:41:32.627 [WARNING][5756] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="311a472fcf1235aa3a42ce2f63599d1d26c057fa4a92ce3fea2f3215975eb130" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--srzp7-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"5f897c92-34a7-46c6-8c17-18c4147d9a5f", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 40, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4cd1979b9fbdede196775d933cc18bb89082aacb791232d44dc62790b5e9415a", Pod:"coredns-674b8bbfcf-srzp7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali41ef96b4cbd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:41:32.680184 containerd[1452]: 2026-03-12 01:41:32.627 [INFO][5756] 
cni-plugin/k8s.go 652: Cleaning up netns ContainerID="311a472fcf1235aa3a42ce2f63599d1d26c057fa4a92ce3fea2f3215975eb130" Mar 12 01:41:32.680184 containerd[1452]: 2026-03-12 01:41:32.627 [INFO][5756] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="311a472fcf1235aa3a42ce2f63599d1d26c057fa4a92ce3fea2f3215975eb130" iface="eth0" netns="" Mar 12 01:41:32.680184 containerd[1452]: 2026-03-12 01:41:32.627 [INFO][5756] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="311a472fcf1235aa3a42ce2f63599d1d26c057fa4a92ce3fea2f3215975eb130" Mar 12 01:41:32.680184 containerd[1452]: 2026-03-12 01:41:32.627 [INFO][5756] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="311a472fcf1235aa3a42ce2f63599d1d26c057fa4a92ce3fea2f3215975eb130" Mar 12 01:41:32.680184 containerd[1452]: 2026-03-12 01:41:32.662 [INFO][5765] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="311a472fcf1235aa3a42ce2f63599d1d26c057fa4a92ce3fea2f3215975eb130" HandleID="k8s-pod-network.311a472fcf1235aa3a42ce2f63599d1d26c057fa4a92ce3fea2f3215975eb130" Workload="localhost-k8s-coredns--674b8bbfcf--srzp7-eth0" Mar 12 01:41:32.680184 containerd[1452]: 2026-03-12 01:41:32.663 [INFO][5765] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:41:32.680184 containerd[1452]: 2026-03-12 01:41:32.663 [INFO][5765] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:41:32.680184 containerd[1452]: 2026-03-12 01:41:32.671 [WARNING][5765] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="311a472fcf1235aa3a42ce2f63599d1d26c057fa4a92ce3fea2f3215975eb130" HandleID="k8s-pod-network.311a472fcf1235aa3a42ce2f63599d1d26c057fa4a92ce3fea2f3215975eb130" Workload="localhost-k8s-coredns--674b8bbfcf--srzp7-eth0" Mar 12 01:41:32.680184 containerd[1452]: 2026-03-12 01:41:32.672 [INFO][5765] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="311a472fcf1235aa3a42ce2f63599d1d26c057fa4a92ce3fea2f3215975eb130" HandleID="k8s-pod-network.311a472fcf1235aa3a42ce2f63599d1d26c057fa4a92ce3fea2f3215975eb130" Workload="localhost-k8s-coredns--674b8bbfcf--srzp7-eth0" Mar 12 01:41:32.680184 containerd[1452]: 2026-03-12 01:41:32.674 [INFO][5765] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:41:32.680184 containerd[1452]: 2026-03-12 01:41:32.677 [INFO][5756] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="311a472fcf1235aa3a42ce2f63599d1d26c057fa4a92ce3fea2f3215975eb130" Mar 12 01:41:32.681060 containerd[1452]: time="2026-03-12T01:41:32.680185209Z" level=info msg="TearDown network for sandbox \"311a472fcf1235aa3a42ce2f63599d1d26c057fa4a92ce3fea2f3215975eb130\" successfully" Mar 12 01:41:32.686096 containerd[1452]: time="2026-03-12T01:41:32.685909524Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"311a472fcf1235aa3a42ce2f63599d1d26c057fa4a92ce3fea2f3215975eb130\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 12 01:41:32.686096 containerd[1452]: time="2026-03-12T01:41:32.686013457Z" level=info msg="RemovePodSandbox \"311a472fcf1235aa3a42ce2f63599d1d26c057fa4a92ce3fea2f3215975eb130\" returns successfully" Mar 12 01:41:32.692119 containerd[1452]: time="2026-03-12T01:41:32.692033907Z" level=info msg="StopPodSandbox for \"39e10cc15461df6ca4f9e7760fef493733eb36421e44e271084242ddb0fb3f32\"" Mar 12 01:41:32.800650 containerd[1452]: 2026-03-12 01:41:32.747 [WARNING][5783] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="39e10cc15461df6ca4f9e7760fef493733eb36421e44e271084242ddb0fb3f32" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--5mp72-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"fc846323-168d-4e27-8031-2f3b65d5b16e", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 40, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a00df68a6d75825656d78577a3cf484251215f84d96ab03cc927f724389f3f57", Pod:"goldmane-5b85766d88-5mp72", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic779f0ed5d9", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:41:32.800650 containerd[1452]: 2026-03-12 01:41:32.748 [INFO][5783] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="39e10cc15461df6ca4f9e7760fef493733eb36421e44e271084242ddb0fb3f32" Mar 12 01:41:32.800650 containerd[1452]: 2026-03-12 01:41:32.748 [INFO][5783] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="39e10cc15461df6ca4f9e7760fef493733eb36421e44e271084242ddb0fb3f32" iface="eth0" netns="" Mar 12 01:41:32.800650 containerd[1452]: 2026-03-12 01:41:32.748 [INFO][5783] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="39e10cc15461df6ca4f9e7760fef493733eb36421e44e271084242ddb0fb3f32" Mar 12 01:41:32.800650 containerd[1452]: 2026-03-12 01:41:32.748 [INFO][5783] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="39e10cc15461df6ca4f9e7760fef493733eb36421e44e271084242ddb0fb3f32" Mar 12 01:41:32.800650 containerd[1452]: 2026-03-12 01:41:32.786 [INFO][5792] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="39e10cc15461df6ca4f9e7760fef493733eb36421e44e271084242ddb0fb3f32" HandleID="k8s-pod-network.39e10cc15461df6ca4f9e7760fef493733eb36421e44e271084242ddb0fb3f32" Workload="localhost-k8s-goldmane--5b85766d88--5mp72-eth0" Mar 12 01:41:32.800650 containerd[1452]: 2026-03-12 01:41:32.786 [INFO][5792] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:41:32.800650 containerd[1452]: 2026-03-12 01:41:32.786 [INFO][5792] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:41:32.800650 containerd[1452]: 2026-03-12 01:41:32.792 [WARNING][5792] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="39e10cc15461df6ca4f9e7760fef493733eb36421e44e271084242ddb0fb3f32" HandleID="k8s-pod-network.39e10cc15461df6ca4f9e7760fef493733eb36421e44e271084242ddb0fb3f32" Workload="localhost-k8s-goldmane--5b85766d88--5mp72-eth0" Mar 12 01:41:32.800650 containerd[1452]: 2026-03-12 01:41:32.792 [INFO][5792] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="39e10cc15461df6ca4f9e7760fef493733eb36421e44e271084242ddb0fb3f32" HandleID="k8s-pod-network.39e10cc15461df6ca4f9e7760fef493733eb36421e44e271084242ddb0fb3f32" Workload="localhost-k8s-goldmane--5b85766d88--5mp72-eth0" Mar 12 01:41:32.800650 containerd[1452]: 2026-03-12 01:41:32.795 [INFO][5792] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:41:32.800650 containerd[1452]: 2026-03-12 01:41:32.798 [INFO][5783] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="39e10cc15461df6ca4f9e7760fef493733eb36421e44e271084242ddb0fb3f32" Mar 12 01:41:32.800650 containerd[1452]: time="2026-03-12T01:41:32.800538851Z" level=info msg="TearDown network for sandbox \"39e10cc15461df6ca4f9e7760fef493733eb36421e44e271084242ddb0fb3f32\" successfully" Mar 12 01:41:32.800650 containerd[1452]: time="2026-03-12T01:41:32.800567916Z" level=info msg="StopPodSandbox for \"39e10cc15461df6ca4f9e7760fef493733eb36421e44e271084242ddb0fb3f32\" returns successfully" Mar 12 01:41:32.801539 containerd[1452]: time="2026-03-12T01:41:32.801339629Z" level=info msg="RemovePodSandbox for \"39e10cc15461df6ca4f9e7760fef493733eb36421e44e271084242ddb0fb3f32\"" Mar 12 01:41:32.801539 containerd[1452]: time="2026-03-12T01:41:32.801372620Z" level=info msg="Forcibly stopping sandbox \"39e10cc15461df6ca4f9e7760fef493733eb36421e44e271084242ddb0fb3f32\"" Mar 12 01:41:32.892416 containerd[1452]: 2026-03-12 01:41:32.844 [WARNING][5810] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="39e10cc15461df6ca4f9e7760fef493733eb36421e44e271084242ddb0fb3f32" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--5mp72-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"fc846323-168d-4e27-8031-2f3b65d5b16e", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 40, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a00df68a6d75825656d78577a3cf484251215f84d96ab03cc927f724389f3f57", Pod:"goldmane-5b85766d88-5mp72", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic779f0ed5d9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:41:32.892416 containerd[1452]: 2026-03-12 01:41:32.844 [INFO][5810] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="39e10cc15461df6ca4f9e7760fef493733eb36421e44e271084242ddb0fb3f32" Mar 12 01:41:32.892416 containerd[1452]: 2026-03-12 01:41:32.845 [INFO][5810] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="39e10cc15461df6ca4f9e7760fef493733eb36421e44e271084242ddb0fb3f32" iface="eth0" netns="" Mar 12 01:41:32.892416 containerd[1452]: 2026-03-12 01:41:32.845 [INFO][5810] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="39e10cc15461df6ca4f9e7760fef493733eb36421e44e271084242ddb0fb3f32" Mar 12 01:41:32.892416 containerd[1452]: 2026-03-12 01:41:32.845 [INFO][5810] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="39e10cc15461df6ca4f9e7760fef493733eb36421e44e271084242ddb0fb3f32" Mar 12 01:41:32.892416 containerd[1452]: 2026-03-12 01:41:32.876 [INFO][5820] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="39e10cc15461df6ca4f9e7760fef493733eb36421e44e271084242ddb0fb3f32" HandleID="k8s-pod-network.39e10cc15461df6ca4f9e7760fef493733eb36421e44e271084242ddb0fb3f32" Workload="localhost-k8s-goldmane--5b85766d88--5mp72-eth0" Mar 12 01:41:32.892416 containerd[1452]: 2026-03-12 01:41:32.876 [INFO][5820] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:41:32.892416 containerd[1452]: 2026-03-12 01:41:32.876 [INFO][5820] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:41:32.892416 containerd[1452]: 2026-03-12 01:41:32.883 [WARNING][5820] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="39e10cc15461df6ca4f9e7760fef493733eb36421e44e271084242ddb0fb3f32" HandleID="k8s-pod-network.39e10cc15461df6ca4f9e7760fef493733eb36421e44e271084242ddb0fb3f32" Workload="localhost-k8s-goldmane--5b85766d88--5mp72-eth0" Mar 12 01:41:32.892416 containerd[1452]: 2026-03-12 01:41:32.883 [INFO][5820] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="39e10cc15461df6ca4f9e7760fef493733eb36421e44e271084242ddb0fb3f32" HandleID="k8s-pod-network.39e10cc15461df6ca4f9e7760fef493733eb36421e44e271084242ddb0fb3f32" Workload="localhost-k8s-goldmane--5b85766d88--5mp72-eth0" Mar 12 01:41:32.892416 containerd[1452]: 2026-03-12 01:41:32.886 [INFO][5820] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:41:32.892416 containerd[1452]: 2026-03-12 01:41:32.889 [INFO][5810] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="39e10cc15461df6ca4f9e7760fef493733eb36421e44e271084242ddb0fb3f32" Mar 12 01:41:32.892416 containerd[1452]: time="2026-03-12T01:41:32.892338528Z" level=info msg="TearDown network for sandbox \"39e10cc15461df6ca4f9e7760fef493733eb36421e44e271084242ddb0fb3f32\" successfully" Mar 12 01:41:32.904275 containerd[1452]: time="2026-03-12T01:41:32.904164290Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"39e10cc15461df6ca4f9e7760fef493733eb36421e44e271084242ddb0fb3f32\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 12 01:41:32.904275 containerd[1452]: time="2026-03-12T01:41:32.904241143Z" level=info msg="RemovePodSandbox \"39e10cc15461df6ca4f9e7760fef493733eb36421e44e271084242ddb0fb3f32\" returns successfully" Mar 12 01:41:32.904823 containerd[1452]: time="2026-03-12T01:41:32.904761837Z" level=info msg="StopPodSandbox for \"0444e8ceffa8e51c7adb9c1911f29707fcdd1e08eb7c2f0be0676a68f41e6ae0\"" Mar 12 01:41:32.987499 containerd[1452]: 2026-03-12 01:41:32.948 [WARNING][5838] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0444e8ceffa8e51c7adb9c1911f29707fcdd1e08eb7c2f0be0676a68f41e6ae0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59985c4b9f--jlw2l-eth0", GenerateName:"calico-apiserver-59985c4b9f-", Namespace:"calico-system", SelfLink:"", UID:"0c627ae3-0e93-40e3-80e0-9ad9ac70f824", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 40, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59985c4b9f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5d6576c587e835d8783d248635acc1c223e40b8c6ade01326a6cd522cfb8814f", Pod:"calico-apiserver-59985c4b9f-jlw2l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib6a115484ac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:41:32.987499 containerd[1452]: 2026-03-12 01:41:32.949 [INFO][5838] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0444e8ceffa8e51c7adb9c1911f29707fcdd1e08eb7c2f0be0676a68f41e6ae0" Mar 12 01:41:32.987499 containerd[1452]: 2026-03-12 01:41:32.949 [INFO][5838] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0444e8ceffa8e51c7adb9c1911f29707fcdd1e08eb7c2f0be0676a68f41e6ae0" iface="eth0" netns="" Mar 12 01:41:32.987499 containerd[1452]: 2026-03-12 01:41:32.949 [INFO][5838] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0444e8ceffa8e51c7adb9c1911f29707fcdd1e08eb7c2f0be0676a68f41e6ae0" Mar 12 01:41:32.987499 containerd[1452]: 2026-03-12 01:41:32.949 [INFO][5838] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0444e8ceffa8e51c7adb9c1911f29707fcdd1e08eb7c2f0be0676a68f41e6ae0" Mar 12 01:41:32.987499 containerd[1452]: 2026-03-12 01:41:32.973 [INFO][5848] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0444e8ceffa8e51c7adb9c1911f29707fcdd1e08eb7c2f0be0676a68f41e6ae0" HandleID="k8s-pod-network.0444e8ceffa8e51c7adb9c1911f29707fcdd1e08eb7c2f0be0676a68f41e6ae0" Workload="localhost-k8s-calico--apiserver--59985c4b9f--jlw2l-eth0" Mar 12 01:41:32.987499 containerd[1452]: 2026-03-12 01:41:32.973 [INFO][5848] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:41:32.987499 containerd[1452]: 2026-03-12 01:41:32.973 [INFO][5848] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:41:32.987499 containerd[1452]: 2026-03-12 01:41:32.980 [WARNING][5848] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0444e8ceffa8e51c7adb9c1911f29707fcdd1e08eb7c2f0be0676a68f41e6ae0" HandleID="k8s-pod-network.0444e8ceffa8e51c7adb9c1911f29707fcdd1e08eb7c2f0be0676a68f41e6ae0" Workload="localhost-k8s-calico--apiserver--59985c4b9f--jlw2l-eth0" Mar 12 01:41:32.987499 containerd[1452]: 2026-03-12 01:41:32.980 [INFO][5848] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0444e8ceffa8e51c7adb9c1911f29707fcdd1e08eb7c2f0be0676a68f41e6ae0" HandleID="k8s-pod-network.0444e8ceffa8e51c7adb9c1911f29707fcdd1e08eb7c2f0be0676a68f41e6ae0" Workload="localhost-k8s-calico--apiserver--59985c4b9f--jlw2l-eth0" Mar 12 01:41:32.987499 containerd[1452]: 2026-03-12 01:41:32.982 [INFO][5848] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:41:32.987499 containerd[1452]: 2026-03-12 01:41:32.984 [INFO][5838] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0444e8ceffa8e51c7adb9c1911f29707fcdd1e08eb7c2f0be0676a68f41e6ae0" Mar 12 01:41:32.987499 containerd[1452]: time="2026-03-12T01:41:32.987496773Z" level=info msg="TearDown network for sandbox \"0444e8ceffa8e51c7adb9c1911f29707fcdd1e08eb7c2f0be0676a68f41e6ae0\" successfully" Mar 12 01:41:32.988050 containerd[1452]: time="2026-03-12T01:41:32.987518884Z" level=info msg="StopPodSandbox for \"0444e8ceffa8e51c7adb9c1911f29707fcdd1e08eb7c2f0be0676a68f41e6ae0\" returns successfully" Mar 12 01:41:32.988645 containerd[1452]: time="2026-03-12T01:41:32.988458899Z" level=info msg="RemovePodSandbox for \"0444e8ceffa8e51c7adb9c1911f29707fcdd1e08eb7c2f0be0676a68f41e6ae0\"" Mar 12 01:41:32.988694 containerd[1452]: time="2026-03-12T01:41:32.988679340Z" level=info msg="Forcibly stopping sandbox \"0444e8ceffa8e51c7adb9c1911f29707fcdd1e08eb7c2f0be0676a68f41e6ae0\"" Mar 12 01:41:33.071372 containerd[1452]: 2026-03-12 01:41:33.031 [WARNING][5866] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0444e8ceffa8e51c7adb9c1911f29707fcdd1e08eb7c2f0be0676a68f41e6ae0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59985c4b9f--jlw2l-eth0", GenerateName:"calico-apiserver-59985c4b9f-", Namespace:"calico-system", SelfLink:"", UID:"0c627ae3-0e93-40e3-80e0-9ad9ac70f824", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 40, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59985c4b9f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5d6576c587e835d8783d248635acc1c223e40b8c6ade01326a6cd522cfb8814f", Pod:"calico-apiserver-59985c4b9f-jlw2l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib6a115484ac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:41:33.071372 containerd[1452]: 2026-03-12 01:41:33.031 [INFO][5866] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0444e8ceffa8e51c7adb9c1911f29707fcdd1e08eb7c2f0be0676a68f41e6ae0" Mar 12 01:41:33.071372 containerd[1452]: 2026-03-12 01:41:33.031 [INFO][5866] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="0444e8ceffa8e51c7adb9c1911f29707fcdd1e08eb7c2f0be0676a68f41e6ae0" iface="eth0" netns="" Mar 12 01:41:33.071372 containerd[1452]: 2026-03-12 01:41:33.031 [INFO][5866] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0444e8ceffa8e51c7adb9c1911f29707fcdd1e08eb7c2f0be0676a68f41e6ae0" Mar 12 01:41:33.071372 containerd[1452]: 2026-03-12 01:41:33.031 [INFO][5866] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0444e8ceffa8e51c7adb9c1911f29707fcdd1e08eb7c2f0be0676a68f41e6ae0" Mar 12 01:41:33.071372 containerd[1452]: 2026-03-12 01:41:33.058 [INFO][5875] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0444e8ceffa8e51c7adb9c1911f29707fcdd1e08eb7c2f0be0676a68f41e6ae0" HandleID="k8s-pod-network.0444e8ceffa8e51c7adb9c1911f29707fcdd1e08eb7c2f0be0676a68f41e6ae0" Workload="localhost-k8s-calico--apiserver--59985c4b9f--jlw2l-eth0" Mar 12 01:41:33.071372 containerd[1452]: 2026-03-12 01:41:33.059 [INFO][5875] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:41:33.071372 containerd[1452]: 2026-03-12 01:41:33.059 [INFO][5875] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:41:33.071372 containerd[1452]: 2026-03-12 01:41:33.064 [WARNING][5875] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0444e8ceffa8e51c7adb9c1911f29707fcdd1e08eb7c2f0be0676a68f41e6ae0" HandleID="k8s-pod-network.0444e8ceffa8e51c7adb9c1911f29707fcdd1e08eb7c2f0be0676a68f41e6ae0" Workload="localhost-k8s-calico--apiserver--59985c4b9f--jlw2l-eth0" Mar 12 01:41:33.071372 containerd[1452]: 2026-03-12 01:41:33.065 [INFO][5875] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0444e8ceffa8e51c7adb9c1911f29707fcdd1e08eb7c2f0be0676a68f41e6ae0" HandleID="k8s-pod-network.0444e8ceffa8e51c7adb9c1911f29707fcdd1e08eb7c2f0be0676a68f41e6ae0" Workload="localhost-k8s-calico--apiserver--59985c4b9f--jlw2l-eth0" Mar 12 01:41:33.071372 containerd[1452]: 2026-03-12 01:41:33.066 [INFO][5875] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:41:33.071372 containerd[1452]: 2026-03-12 01:41:33.068 [INFO][5866] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0444e8ceffa8e51c7adb9c1911f29707fcdd1e08eb7c2f0be0676a68f41e6ae0" Mar 12 01:41:33.072323 containerd[1452]: time="2026-03-12T01:41:33.071386282Z" level=info msg="TearDown network for sandbox \"0444e8ceffa8e51c7adb9c1911f29707fcdd1e08eb7c2f0be0676a68f41e6ae0\" successfully" Mar 12 01:41:33.078786 containerd[1452]: time="2026-03-12T01:41:33.078661674Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0444e8ceffa8e51c7adb9c1911f29707fcdd1e08eb7c2f0be0676a68f41e6ae0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 12 01:41:33.078830 containerd[1452]: time="2026-03-12T01:41:33.078815911Z" level=info msg="RemovePodSandbox \"0444e8ceffa8e51c7adb9c1911f29707fcdd1e08eb7c2f0be0676a68f41e6ae0\" returns successfully"
Mar 12 01:41:33.079483 containerd[1452]: time="2026-03-12T01:41:33.079376297Z" level=info msg="StopPodSandbox for \"c931bb5c9f1a5d6f2bfe6415538e0ef3c571a1c60a95b807a258f41163b29c47\""
Mar 12 01:41:33.175627 containerd[1452]: 2026-03-12 01:41:33.123 [WARNING][5892] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c931bb5c9f1a5d6f2bfe6415538e0ef3c571a1c60a95b807a258f41163b29c47" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--549d945fd5--vwvh9-eth0", GenerateName:"calico-kube-controllers-549d945fd5-", Namespace:"calico-system", SelfLink:"", UID:"e4a33aa0-00da-4a2a-8056-f0a9ced024aa", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 40, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"549d945fd5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"973b438acad4f120f57f1e375b80d0164712415e0bd873777bdf3f77fa27cbda", Pod:"calico-kube-controllers-549d945fd5-vwvh9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali19fc59a31f8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 12 01:41:33.175627 containerd[1452]: 2026-03-12 01:41:33.124 [INFO][5892] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c931bb5c9f1a5d6f2bfe6415538e0ef3c571a1c60a95b807a258f41163b29c47"
Mar 12 01:41:33.175627 containerd[1452]: 2026-03-12 01:41:33.124 [INFO][5892] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c931bb5c9f1a5d6f2bfe6415538e0ef3c571a1c60a95b807a258f41163b29c47" iface="eth0" netns=""
Mar 12 01:41:33.175627 containerd[1452]: 2026-03-12 01:41:33.124 [INFO][5892] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c931bb5c9f1a5d6f2bfe6415538e0ef3c571a1c60a95b807a258f41163b29c47"
Mar 12 01:41:33.175627 containerd[1452]: 2026-03-12 01:41:33.124 [INFO][5892] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c931bb5c9f1a5d6f2bfe6415538e0ef3c571a1c60a95b807a258f41163b29c47"
Mar 12 01:41:33.175627 containerd[1452]: 2026-03-12 01:41:33.160 [INFO][5901] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c931bb5c9f1a5d6f2bfe6415538e0ef3c571a1c60a95b807a258f41163b29c47" HandleID="k8s-pod-network.c931bb5c9f1a5d6f2bfe6415538e0ef3c571a1c60a95b807a258f41163b29c47" Workload="localhost-k8s-calico--kube--controllers--549d945fd5--vwvh9-eth0"
Mar 12 01:41:33.175627 containerd[1452]: 2026-03-12 01:41:33.160 [INFO][5901] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 12 01:41:33.175627 containerd[1452]: 2026-03-12 01:41:33.160 [INFO][5901] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 12 01:41:33.175627 containerd[1452]: 2026-03-12 01:41:33.167 [WARNING][5901] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="c931bb5c9f1a5d6f2bfe6415538e0ef3c571a1c60a95b807a258f41163b29c47" HandleID="k8s-pod-network.c931bb5c9f1a5d6f2bfe6415538e0ef3c571a1c60a95b807a258f41163b29c47" Workload="localhost-k8s-calico--kube--controllers--549d945fd5--vwvh9-eth0"
Mar 12 01:41:33.175627 containerd[1452]: 2026-03-12 01:41:33.167 [INFO][5901] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c931bb5c9f1a5d6f2bfe6415538e0ef3c571a1c60a95b807a258f41163b29c47" HandleID="k8s-pod-network.c931bb5c9f1a5d6f2bfe6415538e0ef3c571a1c60a95b807a258f41163b29c47" Workload="localhost-k8s-calico--kube--controllers--549d945fd5--vwvh9-eth0"
Mar 12 01:41:33.175627 containerd[1452]: 2026-03-12 01:41:33.169 [INFO][5901] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 12 01:41:33.175627 containerd[1452]: 2026-03-12 01:41:33.172 [INFO][5892] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c931bb5c9f1a5d6f2bfe6415538e0ef3c571a1c60a95b807a258f41163b29c47"
Mar 12 01:41:33.176201 containerd[1452]: time="2026-03-12T01:41:33.175628757Z" level=info msg="TearDown network for sandbox \"c931bb5c9f1a5d6f2bfe6415538e0ef3c571a1c60a95b807a258f41163b29c47\" successfully"
Mar 12 01:41:33.176201 containerd[1452]: time="2026-03-12T01:41:33.175653352Z" level=info msg="StopPodSandbox for \"c931bb5c9f1a5d6f2bfe6415538e0ef3c571a1c60a95b807a258f41163b29c47\" returns successfully"
Mar 12 01:41:33.176643 containerd[1452]: time="2026-03-12T01:41:33.176569865Z" level=info msg="RemovePodSandbox for \"c931bb5c9f1a5d6f2bfe6415538e0ef3c571a1c60a95b807a258f41163b29c47\""
Mar 12 01:41:33.176694 containerd[1452]: time="2026-03-12T01:41:33.176650215Z" level=info msg="Forcibly stopping sandbox \"c931bb5c9f1a5d6f2bfe6415538e0ef3c571a1c60a95b807a258f41163b29c47\""
Mar 12 01:41:33.281435 containerd[1452]: 2026-03-12 01:41:33.230 [WARNING][5919] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c931bb5c9f1a5d6f2bfe6415538e0ef3c571a1c60a95b807a258f41163b29c47" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--549d945fd5--vwvh9-eth0", GenerateName:"calico-kube-controllers-549d945fd5-", Namespace:"calico-system", SelfLink:"", UID:"e4a33aa0-00da-4a2a-8056-f0a9ced024aa", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 40, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"549d945fd5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"973b438acad4f120f57f1e375b80d0164712415e0bd873777bdf3f77fa27cbda", Pod:"calico-kube-controllers-549d945fd5-vwvh9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali19fc59a31f8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 12 01:41:33.281435 containerd[1452]: 2026-03-12 01:41:33.231 [INFO][5919] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c931bb5c9f1a5d6f2bfe6415538e0ef3c571a1c60a95b807a258f41163b29c47"
Mar 12 01:41:33.281435 containerd[1452]: 2026-03-12 01:41:33.231 [INFO][5919] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c931bb5c9f1a5d6f2bfe6415538e0ef3c571a1c60a95b807a258f41163b29c47" iface="eth0" netns=""
Mar 12 01:41:33.281435 containerd[1452]: 2026-03-12 01:41:33.231 [INFO][5919] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c931bb5c9f1a5d6f2bfe6415538e0ef3c571a1c60a95b807a258f41163b29c47"
Mar 12 01:41:33.281435 containerd[1452]: 2026-03-12 01:41:33.231 [INFO][5919] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c931bb5c9f1a5d6f2bfe6415538e0ef3c571a1c60a95b807a258f41163b29c47"
Mar 12 01:41:33.281435 containerd[1452]: 2026-03-12 01:41:33.264 [INFO][5927] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c931bb5c9f1a5d6f2bfe6415538e0ef3c571a1c60a95b807a258f41163b29c47" HandleID="k8s-pod-network.c931bb5c9f1a5d6f2bfe6415538e0ef3c571a1c60a95b807a258f41163b29c47" Workload="localhost-k8s-calico--kube--controllers--549d945fd5--vwvh9-eth0"
Mar 12 01:41:33.281435 containerd[1452]: 2026-03-12 01:41:33.264 [INFO][5927] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 12 01:41:33.281435 containerd[1452]: 2026-03-12 01:41:33.264 [INFO][5927] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 12 01:41:33.281435 containerd[1452]: 2026-03-12 01:41:33.272 [WARNING][5927] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="c931bb5c9f1a5d6f2bfe6415538e0ef3c571a1c60a95b807a258f41163b29c47" HandleID="k8s-pod-network.c931bb5c9f1a5d6f2bfe6415538e0ef3c571a1c60a95b807a258f41163b29c47" Workload="localhost-k8s-calico--kube--controllers--549d945fd5--vwvh9-eth0"
Mar 12 01:41:33.281435 containerd[1452]: 2026-03-12 01:41:33.272 [INFO][5927] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c931bb5c9f1a5d6f2bfe6415538e0ef3c571a1c60a95b807a258f41163b29c47" HandleID="k8s-pod-network.c931bb5c9f1a5d6f2bfe6415538e0ef3c571a1c60a95b807a258f41163b29c47" Workload="localhost-k8s-calico--kube--controllers--549d945fd5--vwvh9-eth0"
Mar 12 01:41:33.281435 containerd[1452]: 2026-03-12 01:41:33.275 [INFO][5927] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 12 01:41:33.281435 containerd[1452]: 2026-03-12 01:41:33.278 [INFO][5919] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c931bb5c9f1a5d6f2bfe6415538e0ef3c571a1c60a95b807a258f41163b29c47"
Mar 12 01:41:33.282073 containerd[1452]: time="2026-03-12T01:41:33.281431779Z" level=info msg="TearDown network for sandbox \"c931bb5c9f1a5d6f2bfe6415538e0ef3c571a1c60a95b807a258f41163b29c47\" successfully"
Mar 12 01:41:33.287247 containerd[1452]: time="2026-03-12T01:41:33.287128638Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c931bb5c9f1a5d6f2bfe6415538e0ef3c571a1c60a95b807a258f41163b29c47\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 12 01:41:33.287247 containerd[1452]: time="2026-03-12T01:41:33.287213587Z" level=info msg="RemovePodSandbox \"c931bb5c9f1a5d6f2bfe6415538e0ef3c571a1c60a95b807a258f41163b29c47\" returns successfully"
Mar 12 01:41:33.288056 containerd[1452]: time="2026-03-12T01:41:33.287976576Z" level=info msg="StopPodSandbox for \"c475f71ada6d675e09a2dbc995f30c2825660c3ef361b65494d3fdcbe4ab63ca\""
Mar 12 01:41:33.382399 containerd[1452]: 2026-03-12 01:41:33.337 [WARNING][5945] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="c475f71ada6d675e09a2dbc995f30c2825660c3ef361b65494d3fdcbe4ab63ca" WorkloadEndpoint="localhost-k8s-whisker--68964b7876--m247j-eth0"
Mar 12 01:41:33.382399 containerd[1452]: 2026-03-12 01:41:33.337 [INFO][5945] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c475f71ada6d675e09a2dbc995f30c2825660c3ef361b65494d3fdcbe4ab63ca"
Mar 12 01:41:33.382399 containerd[1452]: 2026-03-12 01:41:33.337 [INFO][5945] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c475f71ada6d675e09a2dbc995f30c2825660c3ef361b65494d3fdcbe4ab63ca" iface="eth0" netns=""
Mar 12 01:41:33.382399 containerd[1452]: 2026-03-12 01:41:33.337 [INFO][5945] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c475f71ada6d675e09a2dbc995f30c2825660c3ef361b65494d3fdcbe4ab63ca"
Mar 12 01:41:33.382399 containerd[1452]: 2026-03-12 01:41:33.337 [INFO][5945] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c475f71ada6d675e09a2dbc995f30c2825660c3ef361b65494d3fdcbe4ab63ca"
Mar 12 01:41:33.382399 containerd[1452]: 2026-03-12 01:41:33.364 [INFO][5953] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c475f71ada6d675e09a2dbc995f30c2825660c3ef361b65494d3fdcbe4ab63ca" HandleID="k8s-pod-network.c475f71ada6d675e09a2dbc995f30c2825660c3ef361b65494d3fdcbe4ab63ca" Workload="localhost-k8s-whisker--68964b7876--m247j-eth0"
Mar 12 01:41:33.382399 containerd[1452]: 2026-03-12 01:41:33.364 [INFO][5953] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 12 01:41:33.382399 containerd[1452]: 2026-03-12 01:41:33.364 [INFO][5953] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 12 01:41:33.382399 containerd[1452]: 2026-03-12 01:41:33.374 [WARNING][5953] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="c475f71ada6d675e09a2dbc995f30c2825660c3ef361b65494d3fdcbe4ab63ca" HandleID="k8s-pod-network.c475f71ada6d675e09a2dbc995f30c2825660c3ef361b65494d3fdcbe4ab63ca" Workload="localhost-k8s-whisker--68964b7876--m247j-eth0"
Mar 12 01:41:33.382399 containerd[1452]: 2026-03-12 01:41:33.374 [INFO][5953] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c475f71ada6d675e09a2dbc995f30c2825660c3ef361b65494d3fdcbe4ab63ca" HandleID="k8s-pod-network.c475f71ada6d675e09a2dbc995f30c2825660c3ef361b65494d3fdcbe4ab63ca" Workload="localhost-k8s-whisker--68964b7876--m247j-eth0"
Mar 12 01:41:33.382399 containerd[1452]: 2026-03-12 01:41:33.377 [INFO][5953] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 12 01:41:33.382399 containerd[1452]: 2026-03-12 01:41:33.379 [INFO][5945] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c475f71ada6d675e09a2dbc995f30c2825660c3ef361b65494d3fdcbe4ab63ca"
Mar 12 01:41:33.382848 containerd[1452]: time="2026-03-12T01:41:33.382439580Z" level=info msg="TearDown network for sandbox \"c475f71ada6d675e09a2dbc995f30c2825660c3ef361b65494d3fdcbe4ab63ca\" successfully"
Mar 12 01:41:33.382848 containerd[1452]: time="2026-03-12T01:41:33.382465950Z" level=info msg="StopPodSandbox for \"c475f71ada6d675e09a2dbc995f30c2825660c3ef361b65494d3fdcbe4ab63ca\" returns successfully"
Mar 12 01:41:33.383100 containerd[1452]: time="2026-03-12T01:41:33.383059243Z" level=info msg="RemovePodSandbox for \"c475f71ada6d675e09a2dbc995f30c2825660c3ef361b65494d3fdcbe4ab63ca\""
Mar 12 01:41:33.383159 containerd[1452]: time="2026-03-12T01:41:33.383107843Z" level=info msg="Forcibly stopping sandbox \"c475f71ada6d675e09a2dbc995f30c2825660c3ef361b65494d3fdcbe4ab63ca\""
Mar 12 01:41:33.476869 containerd[1452]: 2026-03-12 01:41:33.429 [WARNING][5971] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="c475f71ada6d675e09a2dbc995f30c2825660c3ef361b65494d3fdcbe4ab63ca" WorkloadEndpoint="localhost-k8s-whisker--68964b7876--m247j-eth0"
Mar 12 01:41:33.476869 containerd[1452]: 2026-03-12 01:41:33.429 [INFO][5971] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c475f71ada6d675e09a2dbc995f30c2825660c3ef361b65494d3fdcbe4ab63ca"
Mar 12 01:41:33.476869 containerd[1452]: 2026-03-12 01:41:33.429 [INFO][5971] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c475f71ada6d675e09a2dbc995f30c2825660c3ef361b65494d3fdcbe4ab63ca" iface="eth0" netns=""
Mar 12 01:41:33.476869 containerd[1452]: 2026-03-12 01:41:33.429 [INFO][5971] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c475f71ada6d675e09a2dbc995f30c2825660c3ef361b65494d3fdcbe4ab63ca"
Mar 12 01:41:33.476869 containerd[1452]: 2026-03-12 01:41:33.429 [INFO][5971] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c475f71ada6d675e09a2dbc995f30c2825660c3ef361b65494d3fdcbe4ab63ca"
Mar 12 01:41:33.476869 containerd[1452]: 2026-03-12 01:41:33.459 [INFO][5979] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c475f71ada6d675e09a2dbc995f30c2825660c3ef361b65494d3fdcbe4ab63ca" HandleID="k8s-pod-network.c475f71ada6d675e09a2dbc995f30c2825660c3ef361b65494d3fdcbe4ab63ca" Workload="localhost-k8s-whisker--68964b7876--m247j-eth0"
Mar 12 01:41:33.476869 containerd[1452]: 2026-03-12 01:41:33.460 [INFO][5979] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 12 01:41:33.476869 containerd[1452]: 2026-03-12 01:41:33.460 [INFO][5979] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 12 01:41:33.476869 containerd[1452]: 2026-03-12 01:41:33.467 [WARNING][5979] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="c475f71ada6d675e09a2dbc995f30c2825660c3ef361b65494d3fdcbe4ab63ca" HandleID="k8s-pod-network.c475f71ada6d675e09a2dbc995f30c2825660c3ef361b65494d3fdcbe4ab63ca" Workload="localhost-k8s-whisker--68964b7876--m247j-eth0"
Mar 12 01:41:33.476869 containerd[1452]: 2026-03-12 01:41:33.468 [INFO][5979] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c475f71ada6d675e09a2dbc995f30c2825660c3ef361b65494d3fdcbe4ab63ca" HandleID="k8s-pod-network.c475f71ada6d675e09a2dbc995f30c2825660c3ef361b65494d3fdcbe4ab63ca" Workload="localhost-k8s-whisker--68964b7876--m247j-eth0"
Mar 12 01:41:33.476869 containerd[1452]: 2026-03-12 01:41:33.470 [INFO][5979] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 12 01:41:33.476869 containerd[1452]: 2026-03-12 01:41:33.473 [INFO][5971] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c475f71ada6d675e09a2dbc995f30c2825660c3ef361b65494d3fdcbe4ab63ca"
Mar 12 01:41:33.476869 containerd[1452]: time="2026-03-12T01:41:33.476648629Z" level=info msg="TearDown network for sandbox \"c475f71ada6d675e09a2dbc995f30c2825660c3ef361b65494d3fdcbe4ab63ca\" successfully"
Mar 12 01:41:33.484637 containerd[1452]: time="2026-03-12T01:41:33.484546984Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c475f71ada6d675e09a2dbc995f30c2825660c3ef361b65494d3fdcbe4ab63ca\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 12 01:41:33.484798 containerd[1452]: time="2026-03-12T01:41:33.484668360Z" level=info msg="RemovePodSandbox \"c475f71ada6d675e09a2dbc995f30c2825660c3ef361b65494d3fdcbe4ab63ca\" returns successfully"
Mar 12 01:41:33.952204 systemd[1]: Started sshd@13-10.0.0.151:22-10.0.0.1:52164.service - OpenSSH per-connection server daemon (10.0.0.1:52164).
Mar 12 01:41:34.036472 sshd[6005]: Accepted publickey for core from 10.0.0.1 port 52164 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME
Mar 12 01:41:34.039914 sshd[6005]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:41:34.046299 systemd-logind[1437]: New session 14 of user core.
Mar 12 01:41:34.056030 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 12 01:41:34.226678 sshd[6005]: pam_unix(sshd:session): session closed for user core
Mar 12 01:41:34.240925 systemd[1]: sshd@13-10.0.0.151:22-10.0.0.1:52164.service: Deactivated successfully.
Mar 12 01:41:34.243011 systemd[1]: session-14.scope: Deactivated successfully.
Mar 12 01:41:34.245035 systemd-logind[1437]: Session 14 logged out. Waiting for processes to exit.
Mar 12 01:41:34.253238 systemd[1]: Started sshd@14-10.0.0.151:22-10.0.0.1:52178.service - OpenSSH per-connection server daemon (10.0.0.1:52178).
Mar 12 01:41:34.254925 systemd-logind[1437]: Removed session 14.
Mar 12 01:41:34.286152 sshd[6026]: Accepted publickey for core from 10.0.0.1 port 52178 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME
Mar 12 01:41:34.288273 sshd[6026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:41:34.294418 systemd-logind[1437]: New session 15 of user core.
Mar 12 01:41:34.303970 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 12 01:41:34.604455 sshd[6026]: pam_unix(sshd:session): session closed for user core
Mar 12 01:41:34.613267 systemd[1]: sshd@14-10.0.0.151:22-10.0.0.1:52178.service: Deactivated successfully.
Mar 12 01:41:34.615412 systemd[1]: session-15.scope: Deactivated successfully.
Mar 12 01:41:34.617952 systemd-logind[1437]: Session 15 logged out. Waiting for processes to exit.
Mar 12 01:41:34.624184 systemd[1]: Started sshd@15-10.0.0.151:22-10.0.0.1:52182.service - OpenSSH per-connection server daemon (10.0.0.1:52182).
Mar 12 01:41:34.625816 systemd-logind[1437]: Removed session 15.
Mar 12 01:41:34.672933 sshd[6038]: Accepted publickey for core from 10.0.0.1 port 52182 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME
Mar 12 01:41:34.675208 sshd[6038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:41:34.680788 systemd-logind[1437]: New session 16 of user core.
Mar 12 01:41:34.689953 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 12 01:41:35.331670 sshd[6038]: pam_unix(sshd:session): session closed for user core
Mar 12 01:41:35.344082 systemd[1]: sshd@15-10.0.0.151:22-10.0.0.1:52182.service: Deactivated successfully.
Mar 12 01:41:35.346140 systemd[1]: session-16.scope: Deactivated successfully.
Mar 12 01:41:35.349242 systemd-logind[1437]: Session 16 logged out. Waiting for processes to exit.
Mar 12 01:41:35.363876 systemd[1]: Started sshd@16-10.0.0.151:22-10.0.0.1:52186.service - OpenSSH per-connection server daemon (10.0.0.1:52186).
Mar 12 01:41:35.367425 systemd-logind[1437]: Removed session 16.
Mar 12 01:41:35.423223 sshd[6065]: Accepted publickey for core from 10.0.0.1 port 52186 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME
Mar 12 01:41:35.425444 sshd[6065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:41:35.432691 systemd-logind[1437]: New session 17 of user core.
Mar 12 01:41:35.439887 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 12 01:41:35.764490 sshd[6065]: pam_unix(sshd:session): session closed for user core
Mar 12 01:41:35.773560 systemd[1]: sshd@16-10.0.0.151:22-10.0.0.1:52186.service: Deactivated successfully.
Mar 12 01:41:35.775659 systemd[1]: session-17.scope: Deactivated successfully.
Mar 12 01:41:35.777016 systemd-logind[1437]: Session 17 logged out. Waiting for processes to exit.
Mar 12 01:41:35.788402 systemd[1]: Started sshd@17-10.0.0.151:22-10.0.0.1:52188.service - OpenSSH per-connection server daemon (10.0.0.1:52188).
Mar 12 01:41:35.792576 systemd-logind[1437]: Removed session 17.
Mar 12 01:41:35.836298 sshd[6078]: Accepted publickey for core from 10.0.0.1 port 52188 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME
Mar 12 01:41:35.840427 sshd[6078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:41:35.848276 systemd-logind[1437]: New session 18 of user core.
Mar 12 01:41:35.863147 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 12 01:41:36.004055 sshd[6078]: pam_unix(sshd:session): session closed for user core
Mar 12 01:41:36.008234 systemd[1]: sshd@17-10.0.0.151:22-10.0.0.1:52188.service: Deactivated successfully.
Mar 12 01:41:36.010573 systemd[1]: session-18.scope: Deactivated successfully.
Mar 12 01:41:36.011418 systemd-logind[1437]: Session 18 logged out. Waiting for processes to exit.
Mar 12 01:41:36.013197 systemd-logind[1437]: Removed session 18.
Mar 12 01:41:41.019774 systemd[1]: Started sshd@18-10.0.0.151:22-10.0.0.1:40394.service - OpenSSH per-connection server daemon (10.0.0.1:40394).
Mar 12 01:41:41.052258 sshd[6118]: Accepted publickey for core from 10.0.0.1 port 40394 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME
Mar 12 01:41:41.053981 sshd[6118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:41:41.058321 systemd-logind[1437]: New session 19 of user core.
Mar 12 01:41:41.071874 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 12 01:41:41.204360 sshd[6118]: pam_unix(sshd:session): session closed for user core
Mar 12 01:41:41.208303 systemd[1]: sshd@18-10.0.0.151:22-10.0.0.1:40394.service: Deactivated successfully.
Mar 12 01:41:41.210585 systemd[1]: session-19.scope: Deactivated successfully.
Mar 12 01:41:41.211516 systemd-logind[1437]: Session 19 logged out. Waiting for processes to exit.
Mar 12 01:41:41.212865 systemd-logind[1437]: Removed session 19.
Mar 12 01:41:41.601503 kubelet[2506]: I0312 01:41:41.601436 2506 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 12 01:41:46.218823 systemd[1]: Started sshd@19-10.0.0.151:22-10.0.0.1:40398.service - OpenSSH per-connection server daemon (10.0.0.1:40398).
Mar 12 01:41:46.277558 sshd[6178]: Accepted publickey for core from 10.0.0.1 port 40398 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME
Mar 12 01:41:46.279874 sshd[6178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:41:46.284186 systemd-logind[1437]: New session 20 of user core.
Mar 12 01:41:46.293978 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 12 01:41:46.467129 sshd[6178]: pam_unix(sshd:session): session closed for user core
Mar 12 01:41:46.470475 systemd[1]: sshd@19-10.0.0.151:22-10.0.0.1:40398.service: Deactivated successfully.
Mar 12 01:41:46.472513 systemd[1]: session-20.scope: Deactivated successfully.
Mar 12 01:41:46.474673 systemd-logind[1437]: Session 20 logged out. Waiting for processes to exit.
Mar 12 01:41:46.476427 systemd-logind[1437]: Removed session 20.
Mar 12 01:41:49.239304 kubelet[2506]: I0312 01:41:49.239205 2506 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 12 01:41:51.501342 systemd[1]: Started sshd@20-10.0.0.151:22-10.0.0.1:37832.service - OpenSSH per-connection server daemon (10.0.0.1:37832).
Mar 12 01:41:51.553633 sshd[6230]: Accepted publickey for core from 10.0.0.1 port 37832 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME
Mar 12 01:41:51.556385 sshd[6230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:41:51.567115 systemd-logind[1437]: New session 21 of user core.
Mar 12 01:41:51.576088 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 12 01:41:51.909750 sshd[6230]: pam_unix(sshd:session): session closed for user core
Mar 12 01:41:51.917488 systemd[1]: sshd@20-10.0.0.151:22-10.0.0.1:37832.service: Deactivated successfully.
Mar 12 01:41:51.920429 systemd[1]: session-21.scope: Deactivated successfully.
Mar 12 01:41:51.923817 systemd-logind[1437]: Session 21 logged out. Waiting for processes to exit.
Mar 12 01:41:51.925545 systemd-logind[1437]: Removed session 21.