Mar 12 01:36:36.120203 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Mar 11 23:23:33 -00 2026 Mar 12 01:36:36.120224 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=0e4243d51ac00bffbb09a606c7378a821ca08f30dbebc6b82c4452fcc120d7bc Mar 12 01:36:36.120236 kernel: BIOS-provided physical RAM map: Mar 12 01:36:36.120242 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Mar 12 01:36:36.120247 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Mar 12 01:36:36.120252 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Mar 12 01:36:36.120259 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Mar 12 01:36:36.120264 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Mar 12 01:36:36.120270 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Mar 12 01:36:36.120275 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Mar 12 01:36:36.120283 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Mar 12 01:36:36.120288 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Mar 12 01:36:36.120293 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Mar 12 01:36:36.120299 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Mar 12 01:36:36.120305 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Mar 12 01:36:36.120311 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Mar 12 01:36:36.120319 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Mar 12 01:36:36.120325 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Mar 12 01:36:36.120331 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Mar 12 01:36:36.120336 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Mar 12 01:36:36.120342 kernel: NX (Execute Disable) protection: active Mar 12 01:36:36.120347 kernel: APIC: Static calls initialized Mar 12 01:36:36.120353 kernel: efi: EFI v2.7 by EDK II Mar 12 01:36:36.120359 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198 Mar 12 01:36:36.120417 kernel: SMBIOS 2.8 present. 
Mar 12 01:36:36.120423 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Mar 12 01:36:36.120429 kernel: Hypervisor detected: KVM Mar 12 01:36:36.120440 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Mar 12 01:36:36.120446 kernel: kvm-clock: using sched offset of 6722723178 cycles Mar 12 01:36:36.120452 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Mar 12 01:36:36.120458 kernel: tsc: Detected 2445.424 MHz processor Mar 12 01:36:36.120464 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Mar 12 01:36:36.120470 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Mar 12 01:36:36.120476 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Mar 12 01:36:36.120485 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Mar 12 01:36:36.120496 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Mar 12 01:36:36.120513 kernel: Using GB pages for direct mapping Mar 12 01:36:36.120523 kernel: Secure boot disabled Mar 12 01:36:36.120533 kernel: ACPI: Early table checksum verification disabled Mar 12 01:36:36.120543 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Mar 12 01:36:36.120620 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Mar 12 01:36:36.120633 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Mar 12 01:36:36.120644 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 12 01:36:36.120659 kernel: ACPI: FACS 0x000000009CBDD000 000040 Mar 12 01:36:36.120670 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 12 01:36:36.120681 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 12 01:36:36.120692 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 12 01:36:36.120702 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 12 01:36:36.120713 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Mar 12 01:36:36.120724 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Mar 12 01:36:36.120740 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Mar 12 01:36:36.120751 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Mar 12 01:36:36.120762 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Mar 12 01:36:36.120773 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Mar 12 01:36:36.120783 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Mar 12 01:36:36.120794 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Mar 12 01:36:36.120804 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Mar 12 01:36:36.120813 kernel: No NUMA configuration found Mar 12 01:36:36.120823 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Mar 12 01:36:36.120837 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Mar 12 01:36:36.120847 kernel: Zone ranges: Mar 12 01:36:36.120857 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Mar 12 01:36:36.120867 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Mar 12 01:36:36.120877 kernel: Normal empty Mar 12 01:36:36.120887 kernel: Movable zone start for each node Mar 12 01:36:36.120896 kernel: Early memory node ranges Mar 12 01:36:36.120906 
kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Mar 12 01:36:36.120916 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Mar 12 01:36:36.120926 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Mar 12 01:36:36.120939 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Mar 12 01:36:36.120949 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Mar 12 01:36:36.120959 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Mar 12 01:36:36.120969 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Mar 12 01:36:36.120979 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 12 01:36:36.120989 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Mar 12 01:36:36.120999 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Mar 12 01:36:36.121009 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 12 01:36:36.121019 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Mar 12 01:36:36.121032 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Mar 12 01:36:36.121042 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Mar 12 01:36:36.121051 kernel: ACPI: PM-Timer IO Port: 0x608 Mar 12 01:36:36.121061 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Mar 12 01:36:36.121071 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Mar 12 01:36:36.121081 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Mar 12 01:36:36.121091 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Mar 12 01:36:36.121101 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Mar 12 01:36:36.121111 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Mar 12 01:36:36.121124 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Mar 12 01:36:36.121134 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Mar 12 01:36:36.121144 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Mar 12 01:36:36.121153 kernel: TSC deadline timer available Mar 12 01:36:36.121163 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Mar 12 01:36:36.121173 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Mar 12 01:36:36.121183 kernel: kvm-guest: KVM setup pv remote TLB flush Mar 12 01:36:36.121192 kernel: kvm-guest: setup PV sched yield Mar 12 01:36:36.121203 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Mar 12 01:36:36.121216 kernel: Booting paravirtualized kernel on KVM Mar 12 01:36:36.121226 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Mar 12 01:36:36.121236 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Mar 12 01:36:36.121246 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288 Mar 12 01:36:36.121256 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152 Mar 12 01:36:36.121266 kernel: pcpu-alloc: [0] 0 1 2 3 Mar 12 01:36:36.121276 kernel: kvm-guest: PV spinlocks enabled Mar 12 01:36:36.121286 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Mar 12 01:36:36.121297 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=0e4243d51ac00bffbb09a606c7378a821ca08f30dbebc6b82c4452fcc120d7bc Mar 12 
01:36:36.121311 kernel: random: crng init done Mar 12 01:36:36.121321 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 12 01:36:36.121331 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 12 01:36:36.121341 kernel: Fallback order for Node 0: 0 Mar 12 01:36:36.121351 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Mar 12 01:36:36.121405 kernel: Policy zone: DMA32 Mar 12 01:36:36.121417 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 12 01:36:36.121427 kernel: Memory: 2400616K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 166124K reserved, 0K cma-reserved) Mar 12 01:36:36.121441 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Mar 12 01:36:36.121451 kernel: ftrace: allocating 37996 entries in 149 pages Mar 12 01:36:36.121461 kernel: ftrace: allocated 149 pages with 4 groups Mar 12 01:36:36.121471 kernel: Dynamic Preempt: voluntary Mar 12 01:36:36.121481 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 12 01:36:36.121502 kernel: rcu: RCU event tracing is enabled. Mar 12 01:36:36.121516 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Mar 12 01:36:36.121527 kernel: Trampoline variant of Tasks RCU enabled. Mar 12 01:36:36.121537 kernel: Rude variant of Tasks RCU enabled. Mar 12 01:36:36.121548 kernel: Tracing variant of Tasks RCU enabled. Mar 12 01:36:36.121600 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Mar 12 01:36:36.121612 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Mar 12 01:36:36.121626 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Mar 12 01:36:36.121636 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Mar 12 01:36:36.121647 kernel: Console: colour dummy device 80x25 Mar 12 01:36:36.121657 kernel: printk: console [ttyS0] enabled Mar 12 01:36:36.121668 kernel: ACPI: Core revision 20230628 Mar 12 01:36:36.121682 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Mar 12 01:36:36.121692 kernel: APIC: Switch to symmetric I/O mode setup Mar 12 01:36:36.121703 kernel: x2apic enabled Mar 12 01:36:36.121713 kernel: APIC: Switched APIC routing to: physical x2apic Mar 12 01:36:36.121724 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Mar 12 01:36:36.121735 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Mar 12 01:36:36.121745 kernel: kvm-guest: setup PV IPIs Mar 12 01:36:36.121755 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Mar 12 01:36:36.121766 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Mar 12 01:36:36.121780 kernel: Calibrating delay loop (skipped) preset value.. 
4890.84 BogoMIPS (lpj=2445424) Mar 12 01:36:36.121790 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Mar 12 01:36:36.121802 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Mar 12 01:36:36.121815 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Mar 12 01:36:36.121826 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Mar 12 01:36:36.121980 kernel: Spectre V2 : Mitigation: Retpolines Mar 12 01:36:36.121993 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Mar 12 01:36:36.122004 kernel: Speculative Store Bypass: Vulnerable Mar 12 01:36:36.122014 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Mar 12 01:36:36.122030 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Mar 12 01:36:36.122041 kernel: active return thunk: srso_alias_return_thunk Mar 12 01:36:36.122051 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Mar 12 01:36:36.122062 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Mar 12 01:36:36.122072 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Mar 12 01:36:36.122083 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Mar 12 01:36:36.122093 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Mar 12 01:36:36.122104 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Mar 12 01:36:36.122117 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Mar 12 01:36:36.122128 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Mar 12 01:36:36.122138 kernel: Freeing SMP alternatives memory: 32K Mar 12 01:36:36.122149 kernel: pid_max: default: 32768 minimum: 301 Mar 12 01:36:36.122159 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 12 01:36:36.122169 kernel: landlock: Up and running. Mar 12 01:36:36.122180 kernel: SELinux: Initializing. Mar 12 01:36:36.122191 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 12 01:36:36.122201 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 12 01:36:36.122236 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Mar 12 01:36:36.122247 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 12 01:36:36.122258 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 12 01:36:36.122268 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 12 01:36:36.122279 kernel: Performance Events: PMU not available due to virtualization, using software events only. Mar 12 01:36:36.122289 kernel: signal: max sigframe size: 1776 Mar 12 01:36:36.122300 kernel: rcu: Hierarchical SRCU implementation. Mar 12 01:36:36.122310 kernel: rcu: Max phase no-delay instances is 400. Mar 12 01:36:36.122321 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Mar 12 01:36:36.122334 kernel: smp: Bringing up secondary CPUs ... Mar 12 01:36:36.122345 kernel: smpboot: x86: Booting SMP configuration: Mar 12 01:36:36.122355 kernel: .... 
node #0, CPUs: #1 #2 #3 Mar 12 01:36:36.122407 kernel: smp: Brought up 1 node, 4 CPUs Mar 12 01:36:36.122419 kernel: smpboot: Max logical packages: 1 Mar 12 01:36:36.122430 kernel: smpboot: Total of 4 processors activated (19563.39 BogoMIPS) Mar 12 01:36:36.122441 kernel: devtmpfs: initialized Mar 12 01:36:36.122451 kernel: x86/mm: Memory block size: 128MB Mar 12 01:36:36.122462 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Mar 12 01:36:36.122477 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Mar 12 01:36:36.122488 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Mar 12 01:36:36.122498 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Mar 12 01:36:36.122509 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Mar 12 01:36:36.122520 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 12 01:36:36.122530 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Mar 12 01:36:36.122541 kernel: pinctrl core: initialized pinctrl subsystem Mar 12 01:36:36.122551 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 12 01:36:36.122620 kernel: audit: initializing netlink subsys (disabled) Mar 12 01:36:36.122635 kernel: audit: type=2000 audit(1773279394.277:1): state=initialized audit_enabled=0 res=1 Mar 12 01:36:36.122646 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 12 01:36:36.122657 kernel: thermal_sys: Registered thermal governor 'user_space' Mar 12 01:36:36.122668 kernel: cpuidle: using governor menu Mar 12 01:36:36.122679 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 12 01:36:36.122689 kernel: dca service started, version 1.12.1 Mar 12 01:36:36.122700 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Mar 12 01:36:36.122711 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Mar 12 01:36:36.122721 kernel: PCI: Using configuration type 1 for base access Mar 12 01:36:36.122735 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Mar 12 01:36:36.122746 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 12 01:36:36.122757 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Mar 12 01:36:36.122767 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 12 01:36:36.122778 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Mar 12 01:36:36.122789 kernel: ACPI: Added _OSI(Module Device) Mar 12 01:36:36.122799 kernel: ACPI: Added _OSI(Processor Device) Mar 12 01:36:36.122809 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 12 01:36:36.122820 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 12 01:36:36.122833 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Mar 12 01:36:36.122844 kernel: ACPI: Interpreter enabled Mar 12 01:36:36.122854 kernel: ACPI: PM: (supports S0 S3 S5) Mar 12 01:36:36.122865 kernel: ACPI: Using IOAPIC for interrupt routing Mar 12 01:36:36.122875 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Mar 12 01:36:36.122886 kernel: PCI: Using E820 reservations for host bridge windows Mar 12 01:36:36.122897 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Mar 12 01:36:36.122907 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Mar 12 01:36:36.123140 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Mar 12 01:36:36.123312 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Mar 12 01:36:36.123523 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Mar 12 01:36:36.123539 kernel: PCI host bridge to bus 0000:00 Mar 12 01:36:36.123752 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Mar 12 01:36:36.123904 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Mar 12 01:36:36.124051 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Mar 12 01:36:36.124203 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Mar 12 01:36:36.124419 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Mar 12 01:36:36.124641 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Mar 12 01:36:36.124792 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Mar 12 01:36:36.124973 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Mar 12 01:36:36.125149 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Mar 12 01:36:36.125314 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Mar 12 01:36:36.125531 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Mar 12 01:36:36.125860 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Mar 12 01:36:36.126022 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Mar 12 01:36:36.126180 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Mar 12 01:36:36.126349 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Mar 12 01:36:36.126820 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Mar 12 01:36:36.127000 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Mar 12 01:36:36.127167 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Mar 12 01:36:36.127346 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Mar 12 01:36:36.127657 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Mar 12 01:36:36.127831 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] 
Mar 12 01:36:36.128030 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Mar 12 01:36:36.128221 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Mar 12 01:36:36.128495 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Mar 12 01:36:36.128716 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Mar 12 01:36:36.128890 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Mar 12 01:36:36.129050 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Mar 12 01:36:36.129215 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Mar 12 01:36:36.129424 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Mar 12 01:36:36.129697 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Mar 12 01:36:36.129864 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Mar 12 01:36:36.130017 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Mar 12 01:36:36.130186 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Mar 12 01:36:36.130340 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Mar 12 01:36:36.130355 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Mar 12 01:36:36.130409 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Mar 12 01:36:36.130421 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Mar 12 01:36:36.130437 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Mar 12 01:36:36.130447 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Mar 12 01:36:36.130458 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Mar 12 01:36:36.130469 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Mar 12 01:36:36.130479 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Mar 12 01:36:36.130490 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Mar 12 01:36:36.130501 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Mar 12 01:36:36.130511 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Mar 12 01:36:36.130522 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Mar 12 01:36:36.130536 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Mar 12 01:36:36.130547 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Mar 12 01:36:36.130601 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Mar 12 01:36:36.130613 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Mar 12 01:36:36.130625 kernel: iommu: Default domain type: Translated Mar 12 01:36:36.130635 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Mar 12 01:36:36.130646 kernel: efivars: Registered efivars operations Mar 12 01:36:36.130657 kernel: PCI: Using ACPI for IRQ routing Mar 12 01:36:36.130668 kernel: PCI: pci_cache_line_size set to 64 bytes Mar 12 01:36:36.130682 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Mar 12 01:36:36.130693 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Mar 12 01:36:36.130704 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Mar 12 01:36:36.130714 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Mar 12 01:36:36.130878 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Mar 12 01:36:36.131033 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Mar 12 01:36:36.131197 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Mar 12 01:36:36.131212 kernel: vgaarb: loaded Mar 12 01:36:36.131224 kernel: hpet0: at MMIO 
0xfed00000, IRQs 2, 8, 0 Mar 12 01:36:36.131239 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Mar 12 01:36:36.131250 kernel: clocksource: Switched to clocksource kvm-clock Mar 12 01:36:36.131261 kernel: VFS: Disk quotas dquot_6.6.0 Mar 12 01:36:36.131273 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 12 01:36:36.131284 kernel: pnp: PnP ACPI init Mar 12 01:36:36.131535 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Mar 12 01:36:36.131614 kernel: pnp: PnP ACPI: found 6 devices Mar 12 01:36:36.131632 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Mar 12 01:36:36.131650 kernel: NET: Registered PF_INET protocol family Mar 12 01:36:36.131661 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 12 01:36:36.131672 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Mar 12 01:36:36.131683 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 12 01:36:36.131694 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 12 01:36:36.131706 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Mar 12 01:36:36.131717 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 12 01:36:36.131728 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 12 01:36:36.131739 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 12 01:36:36.131754 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 12 01:36:36.131765 kernel: NET: Registered PF_XDP protocol family Mar 12 01:36:36.131934 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Mar 12 01:36:36.132097 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Mar 12 01:36:36.132251 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Mar 12 01:36:36.132453 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Mar 12 01:36:36.132668 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Mar 12 01:36:36.132823 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Mar 12 01:36:36.132990 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Mar 12 01:36:36.133138 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Mar 12 01:36:36.133154 kernel: PCI: CLS 0 bytes, default 64 Mar 12 01:36:36.133165 kernel: Initialise system trusted keyrings Mar 12 01:36:36.133176 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 12 01:36:36.133188 kernel: Key type asymmetric registered Mar 12 01:36:36.133199 kernel: Asymmetric key parser 'x509' registered Mar 12 01:36:36.133211 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Mar 12 01:36:36.133222 kernel: io scheduler mq-deadline registered Mar 12 01:36:36.133239 kernel: io scheduler kyber registered Mar 12 01:36:36.133250 kernel: io scheduler bfq registered Mar 12 01:36:36.133262 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 12 01:36:36.133274 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Mar 12 01:36:36.133285 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Mar 12 01:36:36.133296 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Mar 12 01:36:36.133307 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 12 01:36:36.133319 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 
115200) is a 16550A Mar 12 01:36:36.133330 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Mar 12 01:36:36.133347 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 12 01:36:36.133358 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 12 01:36:36.133721 kernel: rtc_cmos 00:04: RTC can wake from S4 Mar 12 01:36:36.133747 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Mar 12 01:36:36.133920 kernel: rtc_cmos 00:04: registered as rtc0 Mar 12 01:36:36.134107 kernel: rtc_cmos 00:04: setting system clock to 2026-03-12T01:36:35 UTC (1773279395) Mar 12 01:36:36.134306 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Mar 12 01:36:36.134335 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Mar 12 01:36:36.134357 kernel: efifb: probing for efifb Mar 12 01:36:36.134446 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Mar 12 01:36:36.134463 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Mar 12 01:36:36.134476 kernel: efifb: scrolling: redraw Mar 12 01:36:36.134487 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Mar 12 01:36:36.134499 kernel: Console: switching to colour frame buffer device 100x37 Mar 12 01:36:36.134511 kernel: fb0: EFI VGA frame buffer device Mar 12 01:36:36.134523 kernel: pstore: Using crash dump compression: deflate Mar 12 01:36:36.134534 kernel: pstore: Registered efi_pstore as persistent store backend Mar 12 01:36:36.134554 kernel: NET: Registered PF_INET6 protocol family Mar 12 01:36:36.134644 kernel: Segment Routing with IPv6 Mar 12 01:36:36.134658 kernel: In-situ OAM (IOAM) with IPv6 Mar 12 01:36:36.134670 kernel: NET: Registered PF_PACKET protocol family Mar 12 01:36:36.134682 kernel: Key type dns_resolver registered Mar 12 01:36:36.134694 kernel: IPI shorthand broadcast: enabled Mar 12 01:36:36.134736 kernel: sched_clock: Marking stable (1082017624, 389000157)->(1955777494, -484759713) Mar 12 01:36:36.134753 kernel: registered taskstats version 1 Mar 12 01:36:36.134765 kernel: Loading compiled-in X.509 certificates Mar 12 01:36:36.134781 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 67287262975845098ef9f337a0e8baa9afd38510' Mar 12 01:36:36.134793 kernel: Key type .fscrypt registered Mar 12 01:36:36.134805 kernel: Key type fscrypt-provisioning registered Mar 12 01:36:36.134817 kernel: ima: No TPM chip found, activating TPM-bypass! 
Mar 12 01:36:36.134829 kernel: ima: Allocated hash algorithm: sha1 Mar 12 01:36:36.134841 kernel: ima: No architecture policies found Mar 12 01:36:36.134852 kernel: clk: Disabling unused clocks Mar 12 01:36:36.134864 kernel: Freeing unused kernel image (initmem) memory: 42892K Mar 12 01:36:36.134882 kernel: Write protecting the kernel read-only data: 36864k Mar 12 01:36:36.134895 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Mar 12 01:36:36.134907 kernel: Run /init as init process Mar 12 01:36:36.134919 kernel: with arguments: Mar 12 01:36:36.134931 kernel: /init Mar 12 01:36:36.134943 kernel: with environment: Mar 12 01:36:36.134955 kernel: HOME=/ Mar 12 01:36:36.134966 kernel: TERM=linux Mar 12 01:36:36.134982 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 12 01:36:36.135006 systemd[1]: Detected virtualization kvm. Mar 12 01:36:36.135019 systemd[1]: Detected architecture x86-64. Mar 12 01:36:36.135031 systemd[1]: Running in initrd. Mar 12 01:36:36.135043 systemd[1]: No hostname configured, using default hostname. Mar 12 01:36:36.135056 systemd[1]: Hostname set to . Mar 12 01:36:36.135069 systemd[1]: Initializing machine ID from VM UUID. Mar 12 01:36:36.135081 systemd[1]: Queued start job for default target initrd.target. Mar 12 01:36:36.135100 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 12 01:36:36.135113 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 12 01:36:36.135127 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 12 01:36:36.135140 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 12 01:36:36.135153 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 12 01:36:36.135174 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 12 01:36:36.135190 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 12 01:36:36.135203 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 12 01:36:36.135216 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 12 01:36:36.135229 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 12 01:36:36.135242 systemd[1]: Reached target paths.target - Path Units. Mar 12 01:36:36.135254 systemd[1]: Reached target slices.target - Slice Units. Mar 12 01:36:36.135274 systemd[1]: Reached target swap.target - Swaps. Mar 12 01:36:36.135287 systemd[1]: Reached target timers.target - Timer Units. Mar 12 01:36:36.135300 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 12 01:36:36.135323 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 12 01:36:36.135336 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 12 01:36:36.135349 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Mar 12 01:36:36.135404 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 12 01:36:36.135423 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 12 01:36:36.135437 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 12 01:36:36.135456 systemd[1]: Reached target sockets.target - Socket Units. Mar 12 01:36:36.135469 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 12 01:36:36.135482 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 12 01:36:36.135494 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 12 01:36:36.135507 systemd[1]: Starting systemd-fsck-usr.service... Mar 12 01:36:36.135519 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 12 01:36:36.135531 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 12 01:36:36.135543 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 12 01:36:36.135701 systemd-journald[194]: Collecting audit messages is disabled. Mar 12 01:36:36.135739 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 12 01:36:36.135752 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 12 01:36:36.135765 systemd[1]: Finished systemd-fsck-usr.service. Mar 12 01:36:36.135783 systemd-journald[194]: Journal started Mar 12 01:36:36.135807 systemd-journald[194]: Runtime Journal (/run/log/journal/d0c4fb7dc9634a53903872631ef87366) is 6.0M, max 48.3M, 42.2M free. Mar 12 01:36:36.143264 systemd[1]: Started systemd-journald.service - Journal Service. Mar 12 01:36:36.152813 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 12 01:36:36.155306 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 12 01:36:36.168946 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 12 01:36:36.175943 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 12 01:36:36.184073 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 12 01:36:36.194734 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 12 01:36:36.202637 systemd-modules-load[195]: Inserted module 'overlay' Mar 12 01:36:36.205750 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 12 01:36:36.217541 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 12 01:36:36.226295 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 12 01:36:36.241817 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 12 01:36:36.254957 dracut-cmdline[224]: dracut-dracut-053 Mar 12 01:36:36.258791 dracut-cmdline[224]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=0e4243d51ac00bffbb09a606c7378a821ca08f30dbebc6b82c4452fcc120d7bc Mar 12 01:36:36.288672 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. 
Update your scripts to load br_netfilter if you need this. Mar 12 01:36:36.291647 kernel: Bridge firewalling registered Mar 12 01:36:36.291539 systemd-modules-load[195]: Inserted module 'br_netfilter' Mar 12 01:36:36.295622 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 12 01:36:36.310954 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 12 01:36:36.327273 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 12 01:36:36.338836 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 12 01:36:36.389193 systemd-resolved[283]: Positive Trust Anchors: Mar 12 01:36:36.389232 systemd-resolved[283]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 12 01:36:36.389280 systemd-resolved[283]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 12 01:36:36.392941 systemd-resolved[283]: Defaulting to hostname 'linux'. Mar 12 01:36:36.394509 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 12 01:36:36.399324 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 12 01:36:36.456632 kernel: SCSI subsystem initialized Mar 12 01:36:36.470763 kernel: Loading iSCSI transport class v2.0-870. Mar 12 01:36:36.495648 kernel: iscsi: registered transport (tcp) Mar 12 01:36:36.529467 kernel: iscsi: registered transport (qla4xxx) Mar 12 01:36:36.529542 kernel: QLogic iSCSI HBA Driver Mar 12 01:36:36.619292 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 12 01:36:36.643976 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 12 01:36:36.704686 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 12 01:36:36.704770 kernel: device-mapper: uevent: version 1.0.3 Mar 12 01:36:36.707741 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 12 01:36:36.759641 kernel: raid6: avx2x4 gen() 20538 MB/s Mar 12 01:36:36.777634 kernel: raid6: avx2x2 gen() 22584 MB/s Mar 12 01:36:36.796693 kernel: raid6: avx2x1 gen() 25412 MB/s Mar 12 01:36:36.796752 kernel: raid6: using algorithm avx2x1 gen() 25412 MB/s Mar 12 01:36:36.816784 kernel: raid6: .... xor() 23342 MB/s, rmw enabled Mar 12 01:36:36.816885 kernel: raid6: using avx2x2 recovery algorithm Mar 12 01:36:36.838654 kernel: xor: automatically using best checksumming function avx Mar 12 01:36:37.003661 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 12 01:36:37.020145 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 12 01:36:37.034783 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 12 01:36:37.049854 systemd-udevd[416]: Using default interface naming scheme 'v255'. Mar 12 01:36:37.054489 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 12 01:36:37.072766 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Mar 12 01:36:37.088084 dracut-pre-trigger[425]: rd.md=0: removing MD RAID activation Mar 12 01:36:37.129759 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 12 01:36:37.143980 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 12 01:36:37.217716 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 12 01:36:37.239755 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 12 01:36:37.258773 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 12 01:36:37.270312 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 12 01:36:37.278716 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 12 01:36:37.286790 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 12 01:36:37.303919 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Mar 12 01:36:37.303927 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 12 01:36:37.318118 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Mar 12 01:36:37.319923 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 12 01:36:37.324406 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 12 01:36:37.341214 kernel: cryptd: max_cpu_qlen set to 1000 Mar 12 01:36:37.341235 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 12 01:36:37.341247 kernel: GPT:9289727 != 19775487 Mar 12 01:36:37.341257 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 12 01:36:37.347058 kernel: GPT:9289727 != 19775487 Mar 12 01:36:37.347108 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 12 01:36:37.347135 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 12 01:36:37.349228 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 12 01:36:37.357319 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 12 01:36:37.357654 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 12 01:36:37.360291 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 12 01:36:37.371796 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 12 01:36:37.376217 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 12 01:36:37.389838 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 12 01:36:37.396749 kernel: libata version 3.00 loaded. Mar 12 01:36:37.389975 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 12 01:36:37.405619 kernel: AVX2 version of gcm_enc/dec engaged. Mar 12 01:36:37.405647 kernel: AES CTR mode by8 optimization enabled Mar 12 01:36:37.413239 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Mar 12 01:36:37.426828 kernel: ahci 0000:00:1f.2: version 3.0 Mar 12 01:36:37.427042 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 12 01:36:37.436612 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Mar 12 01:36:37.436861 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (464) Mar 12 01:36:37.436874 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 12 01:36:37.452733 kernel: scsi host0: ahci Mar 12 01:36:37.454725 kernel: scsi host1: ahci Mar 12 01:36:37.456519 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Mar 12 01:36:37.493443 kernel: BTRFS: device fsid 94537345-7f6b-4b2a-965f-248bd6f0b7eb devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (465) Mar 12 01:36:37.493467 kernel: scsi host2: ahci Mar 12 01:36:37.493706 kernel: scsi host3: ahci Mar 12 01:36:37.493857 kernel: scsi host4: ahci Mar 12 01:36:37.494003 kernel: scsi host5: ahci Mar 12 01:36:37.494154 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Mar 12 01:36:37.494165 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Mar 12 01:36:37.494175 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Mar 12 01:36:37.494184 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Mar 12 01:36:37.494194 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Mar 12 01:36:37.494203 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Mar 12 01:36:37.462812 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 12 01:36:37.506440 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 12 01:36:37.511348 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 12 01:36:37.517482 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 12 01:36:37.520258 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 12 01:36:37.547842 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 12 01:36:37.554867 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 12 01:36:37.564461 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 12 01:36:37.564486 disk-uuid[575]: Primary Header is updated. Mar 12 01:36:37.564486 disk-uuid[575]: Secondary Entries is updated. Mar 12 01:36:37.564486 disk-uuid[575]: Secondary Header is updated. Mar 12 01:36:37.574596 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 12 01:36:37.603836 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Mar 12 01:36:37.807355 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Mar 12 01:36:37.807460 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Mar 12 01:36:37.807476 kernel: ata3.00: applying bridge limits Mar 12 01:36:37.812617 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 12 01:36:37.812657 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 12 01:36:37.814612 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 12 01:36:37.817623 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 12 01:36:37.819665 kernel: ata3.00: configured for UDMA/100 Mar 12 01:36:37.821641 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 12 01:36:37.829660 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Mar 12 01:36:37.876793 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Mar 12 01:36:37.877058 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 12 01:36:37.890646 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Mar 12 01:36:38.582984 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 12 01:36:38.585071 disk-uuid[576]: The operation has completed successfully. Mar 12 01:36:38.675672 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 12 01:36:38.675862 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 12 01:36:38.731917 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 12 01:36:38.762008 sh[601]: Success Mar 12 01:36:38.824121 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Mar 12 01:36:38.907878 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 12 01:36:38.940088 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 12 01:36:38.952536 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 12 01:36:38.982338 kernel: BTRFS info (device dm-0): first mount of filesystem 94537345-7f6b-4b2a-965f-248bd6f0b7eb Mar 12 01:36:38.982455 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 12 01:36:38.982474 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 12 01:36:38.992071 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 12 01:36:38.992150 kernel: BTRFS info (device dm-0): using free space tree Mar 12 01:36:39.016459 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 12 01:36:39.023024 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 12 01:36:39.043894 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 12 01:36:39.052854 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 12 01:36:39.099497 kernel: BTRFS info (device vda6): first mount of filesystem 0ebf6eb2-dc55-4706-86d1-78d37843d203 Mar 12 01:36:39.099620 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 12 01:36:39.099642 kernel: BTRFS info (device vda6): using free space tree Mar 12 01:36:39.116439 kernel: BTRFS info (device vda6): auto enabling async discard Mar 12 01:36:39.141768 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 12 01:36:39.151619 kernel: BTRFS info (device vda6): last unmount of filesystem 0ebf6eb2-dc55-4706-86d1-78d37843d203 Mar 12 01:36:39.170890 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Mar 12 01:36:39.198330 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 12 01:36:39.364663 ignition[709]: Ignition 2.19.0 Mar 12 01:36:39.364687 ignition[709]: Stage: fetch-offline Mar 12 01:36:39.364752 ignition[709]: no configs at "/usr/lib/ignition/base.d" Mar 12 01:36:39.364769 ignition[709]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 12 01:36:39.364935 ignition[709]: parsed url from cmdline: "" Mar 12 01:36:39.364942 ignition[709]: no config URL provided Mar 12 01:36:39.364952 ignition[709]: reading system config file "/usr/lib/ignition/user.ign" Mar 12 01:36:39.364968 ignition[709]: no config at "/usr/lib/ignition/user.ign" Mar 12 01:36:39.365011 ignition[709]: op(1): [started] loading QEMU firmware config module Mar 12 01:36:39.365024 ignition[709]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 12 01:36:39.391721 ignition[709]: op(1): [finished] loading QEMU firmware config module Mar 12 01:36:39.433115 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 12 01:36:39.471517 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 12 01:36:39.526738 systemd-networkd[790]: lo: Link UP Mar 12 01:36:39.526775 systemd-networkd[790]: lo: Gained carrier Mar 12 01:36:39.534143 systemd-networkd[790]: Enumeration completed Mar 12 01:36:39.536046 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 12 01:36:39.536622 systemd-networkd[790]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 12 01:36:39.536628 systemd-networkd[790]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 12 01:36:39.539728 systemd-networkd[790]: eth0: Link UP Mar 12 01:36:39.539734 systemd-networkd[790]: eth0: Gained carrier Mar 12 01:36:39.539746 systemd-networkd[790]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 12 01:36:39.602183 systemd[1]: Reached target network.target - Network. Mar 12 01:36:39.635742 systemd-networkd[790]: eth0: DHCPv4 address 10.0.0.145/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 12 01:36:39.729009 ignition[709]: parsing config with SHA512: c19442e2e5192944033ef80665a97b93fb8429350bcfd2a6f95125fdbb79d63b36a1e879caf24f76e2582f216d46301a2b9b78ff5e8c63daa2c55cf33603bbf4 Mar 12 01:36:39.734732 unknown[709]: fetched base config from "system" Mar 12 01:36:39.734745 unknown[709]: fetched user config from "qemu" Mar 12 01:36:39.735105 ignition[709]: fetch-offline: fetch-offline passed Mar 12 01:36:39.744656 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 12 01:36:39.735181 ignition[709]: Ignition finished successfully Mar 12 01:36:39.754678 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 12 01:36:39.783071 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 12 01:36:39.868844 ignition[794]: Ignition 2.19.0 Mar 12 01:36:39.868888 ignition[794]: Stage: kargs Mar 12 01:36:39.869065 ignition[794]: no configs at "/usr/lib/ignition/base.d" Mar 12 01:36:39.869077 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 12 01:36:39.871269 ignition[794]: kargs: kargs passed Mar 12 01:36:39.881195 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Mar 12 01:36:39.871329 ignition[794]: Ignition finished successfully Mar 12 01:36:39.924945 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 12 01:36:39.974135 ignition[802]: Ignition 2.19.0 Mar 12 01:36:39.975326 ignition[802]: Stage: disks Mar 12 01:36:39.975727 ignition[802]: no configs at "/usr/lib/ignition/base.d" Mar 12 01:36:39.975749 ignition[802]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 12 01:36:39.978117 ignition[802]: disks: disks passed Mar 12 01:36:39.978180 ignition[802]: Ignition finished successfully Mar 12 01:36:39.999265 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 12 01:36:40.000930 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 12 01:36:40.012298 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 12 01:36:40.020300 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 12 01:36:40.042166 systemd[1]: Reached target sysinit.target - System Initialization. Mar 12 01:36:40.045971 systemd[1]: Reached target basic.target - Basic System. Mar 12 01:36:40.073112 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 12 01:36:40.105350 systemd-fsck[812]: ROOT: clean, 14/553520 files, 52654/553472 blocks Mar 12 01:36:40.114074 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 12 01:36:40.143881 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 12 01:36:40.437603 kernel: EXT4-fs (vda9): mounted filesystem f90926b1-4cc2-4a2d-8c45-4ec584c98779 r/w with ordered data mode. Quota mode: none. Mar 12 01:36:40.441859 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 12 01:36:40.451990 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 12 01:36:40.474310 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 12 01:36:40.491069 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 12 01:36:40.517477 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (820) Mar 12 01:36:40.517512 kernel: BTRFS info (device vda6): first mount of filesystem 0ebf6eb2-dc55-4706-86d1-78d37843d203 Mar 12 01:36:40.517530 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 12 01:36:40.517545 kernel: BTRFS info (device vda6): using free space tree Mar 12 01:36:40.515859 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 12 01:36:40.529955 kernel: BTRFS info (device vda6): auto enabling async discard Mar 12 01:36:40.515954 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 12 01:36:40.515996 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 12 01:36:40.540673 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 12 01:36:40.545511 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 12 01:36:40.561826 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Mar 12 01:36:40.609292 initrd-setup-root[844]: cut: /sysroot/etc/passwd: No such file or directory Mar 12 01:36:40.615732 initrd-setup-root[851]: cut: /sysroot/etc/group: No such file or directory Mar 12 01:36:40.622024 initrd-setup-root[858]: cut: /sysroot/etc/shadow: No such file or directory Mar 12 01:36:40.628294 initrd-setup-root[865]: cut: /sysroot/etc/gshadow: No such file or directory Mar 12 01:36:40.766820 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 12 01:36:40.782723 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 12 01:36:40.789188 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 12 01:36:40.800687 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 12 01:36:40.807948 kernel: BTRFS info (device vda6): last unmount of filesystem 0ebf6eb2-dc55-4706-86d1-78d37843d203 Mar 12 01:36:40.831861 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 12 01:36:40.846116 ignition[935]: INFO : Ignition 2.19.0 Mar 12 01:36:40.846116 ignition[935]: INFO : Stage: mount Mar 12 01:36:40.851275 ignition[935]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 12 01:36:40.851275 ignition[935]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 12 01:36:40.859282 ignition[935]: INFO : mount: mount passed Mar 12 01:36:40.861945 ignition[935]: INFO : Ignition finished successfully Mar 12 01:36:40.867294 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 12 01:36:40.885756 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 12 01:36:40.900275 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 12 01:36:40.924400 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (948) Mar 12 01:36:40.924458 kernel: BTRFS info (device vda6): first mount of filesystem 0ebf6eb2-dc55-4706-86d1-78d37843d203 Mar 12 01:36:40.924471 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 12 01:36:40.927022 kernel: BTRFS info (device vda6): using free space tree Mar 12 01:36:40.934644 kernel: BTRFS info (device vda6): auto enabling async discard Mar 12 01:36:40.937477 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
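The initrd-setup-root entries above run cut against /sysroot/etc/passwd, group, shadow and gshadow and report them missing, which is expected on a first boot before the root filesystem has been populated. The hypothetical sketch below captures the same idea in Python: look up one user's line in each account database under the new root and tolerate absent files exactly as the log shows. It is a reading of the log, not the actual initrd-setup-root script.

    # Hypothetical sketch of the account-seeding step logged above; the real
    # initrd-setup-root implementation is not shown in this log.
    from pathlib import Path

    def seed_entry(path: str, user: str = "core") -> None:
        # Pick out the named user's line, tolerating a missing file the same
        # way the "No such file or directory" entries above do on first boot.
        try:
            lines = Path(path).read_text().splitlines()
        except FileNotFoundError:
            print(f"cut: {path}: No such file or directory")
            return
        for line in lines:
            if line.split(":", 1)[0] == user:
                print(line)

    for name in ("passwd", "group", "shadow", "gshadow"):
        seed_entry(f"/sysroot/etc/{name}")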
Mar 12 01:36:40.961992 ignition[965]: INFO : Ignition 2.19.0 Mar 12 01:36:40.961992 ignition[965]: INFO : Stage: files Mar 12 01:36:40.967239 ignition[965]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 12 01:36:40.967239 ignition[965]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 12 01:36:40.967239 ignition[965]: DEBUG : files: compiled without relabeling support, skipping Mar 12 01:36:40.979771 ignition[965]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 12 01:36:40.979771 ignition[965]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 12 01:36:40.993525 ignition[965]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 12 01:36:40.998835 ignition[965]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 12 01:36:40.998835 ignition[965]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 12 01:36:40.998835 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Mar 12 01:36:40.998835 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Mar 12 01:36:40.998835 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 12 01:36:40.998835 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Mar 12 01:36:40.994808 unknown[965]: wrote ssh authorized keys file for user: core Mar 12 01:36:41.048904 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Mar 12 01:36:41.143528 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 12 01:36:41.143528 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Mar 12 01:36:41.153979 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Mar 12 01:36:41.153979 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 12 01:36:41.153979 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 12 01:36:41.153979 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 12 01:36:41.153979 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 12 01:36:41.153979 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 12 01:36:41.153979 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 12 01:36:41.153979 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 12 01:36:41.153979 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 12 01:36:41.153979 ignition[965]: INFO 
: files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 12 01:36:41.153979 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 12 01:36:41.153979 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 12 01:36:41.153979 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1 Mar 12 01:36:41.346932 systemd-networkd[790]: eth0: Gained IPv6LL Mar 12 01:36:41.602083 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Mar 12 01:36:42.075949 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 12 01:36:42.075949 ignition[965]: INFO : files: op(c): [started] processing unit "containerd.service" Mar 12 01:36:42.085235 ignition[965]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Mar 12 01:36:42.091927 ignition[965]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Mar 12 01:36:42.091927 ignition[965]: INFO : files: op(c): [finished] processing unit "containerd.service" Mar 12 01:36:42.091927 ignition[965]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Mar 12 01:36:42.104627 ignition[965]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 12 01:36:42.109745 ignition[965]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 12 01:36:42.109745 ignition[965]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Mar 12 01:36:42.109745 ignition[965]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Mar 12 01:36:42.121196 ignition[965]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 12 01:36:42.126617 ignition[965]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 12 01:36:42.126617 ignition[965]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Mar 12 01:36:42.135236 ignition[965]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service" Mar 12 01:36:42.170068 ignition[965]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service" Mar 12 01:36:42.175517 ignition[965]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service" Mar 12 01:36:42.180013 ignition[965]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service" Mar 12 01:36:42.180013 ignition[965]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service" Mar 12 01:36:42.187775 
ignition[965]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service" Mar 12 01:36:42.191744 ignition[965]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 12 01:36:42.196719 ignition[965]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 12 01:36:42.201313 ignition[965]: INFO : files: files passed Mar 12 01:36:42.203390 ignition[965]: INFO : Ignition finished successfully Mar 12 01:36:42.207963 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 12 01:36:42.222878 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 12 01:36:42.224967 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 12 01:36:42.236539 initrd-setup-root-after-ignition[992]: grep: /sysroot/oem/oem-release: No such file or directory Mar 12 01:36:42.240539 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 12 01:36:42.240539 initrd-setup-root-after-ignition[994]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 12 01:36:42.246327 initrd-setup-root-after-ignition[998]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 12 01:36:42.245906 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 12 01:36:42.253509 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 12 01:36:42.256240 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 12 01:36:42.279991 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 12 01:36:42.283623 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 12 01:36:42.312835 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 12 01:36:42.312994 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 12 01:36:42.318829 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 12 01:36:42.324439 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 12 01:36:42.329553 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 12 01:36:42.330491 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 12 01:36:42.355824 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 12 01:36:42.376812 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 12 01:36:42.395514 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 12 01:36:42.397180 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 12 01:36:42.403248 systemd[1]: Stopped target timers.target - Timer Units. Mar 12 01:36:42.409239 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 12 01:36:42.409429 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 12 01:36:42.417858 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 12 01:36:42.423612 systemd[1]: Stopped target basic.target - Basic System. Mar 12 01:36:42.428687 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. 
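The files stage logged above writes SSH keys for "core", fetches the helm tarball and the kubernetes sysext image, drops several files under /home/core and /etc, links /etc/extensions/kubernetes.raw to the downloaded image, adds the containerd 10-use-cgroupfs.conf drop-in, and enables prepare-helm.service while disabling coreos-metadata.service. The original Ignition config is not included in the log; the sketch below reconstructs a plausible skeleton of such a config as Python data, purely to illustrate the structure these operations imply. The spec version string, SSH key, and unit/drop-in contents are placeholders, while the paths and URLs are taken from the log entries.

    # Hypothetical reconstruction of the kind of Ignition config that would
    # produce the files-stage operations above. Placeholder values are marked;
    # only paths and URLs come from the log itself.
    import json

    config = {
        "ignition": {"version": "3.3.0"},  # assumed spec version
        "passwd": {"users": [{"name": "core",
                              "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder"]}]},
        "storage": {
            "files": [
                {"path": "/opt/helm-v3.17.3-linux-amd64.tar.gz",
                 "contents": {"source": "https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz"}},
                {"path": "/etc/flatcar/update.conf",
                 "contents": {"source": "data:,"}},  # placeholder contents
                {"path": "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw",
                 "contents": {"source": "https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw"}},
            ],
            "links": [
                {"path": "/etc/extensions/kubernetes.raw",
                 "target": "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"},
            ],
        },
        "systemd": {
            "units": [
                {"name": "containerd.service",
                 "dropins": [{"name": "10-use-cgroupfs.conf",
                              "contents": "[Service]\n# placeholder drop-in"}]},
                {"name": "prepare-helm.service", "enabled": True,
                 "contents": "[Unit]\n# placeholder unit"},
                {"name": "coreos-metadata.service", "enabled": False},
            ]
        },
    }

    print(json.dumps(config, indent=2))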
Mar 12 01:36:42.433618 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 12 01:36:42.439342 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 12 01:36:42.441237 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 12 01:36:42.449247 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 12 01:36:42.454496 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 12 01:36:42.464431 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 12 01:36:42.470457 systemd[1]: Stopped target swap.target - Swaps. Mar 12 01:36:42.477285 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 12 01:36:42.477481 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 12 01:36:42.487235 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 12 01:36:42.489428 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 12 01:36:42.497291 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 12 01:36:42.503679 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 12 01:36:42.510870 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 12 01:36:42.511087 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 12 01:36:42.518515 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 12 01:36:42.518759 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 12 01:36:42.524309 systemd[1]: Stopped target paths.target - Path Units. Mar 12 01:36:42.525660 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 12 01:36:42.531231 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 12 01:36:42.541218 systemd[1]: Stopped target slices.target - Slice Units. Mar 12 01:36:42.546170 systemd[1]: Stopped target sockets.target - Socket Units. Mar 12 01:36:42.551229 systemd[1]: iscsid.socket: Deactivated successfully. Mar 12 01:36:42.553616 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 12 01:36:42.559276 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 12 01:36:42.562429 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 12 01:36:42.569672 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 12 01:36:42.573677 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 12 01:36:42.580773 systemd[1]: ignition-files.service: Deactivated successfully. Mar 12 01:36:42.583325 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 12 01:36:42.601788 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 12 01:36:42.607131 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 12 01:36:42.607257 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. 
Mar 12 01:36:42.617405 ignition[1020]: INFO : Ignition 2.19.0 Mar 12 01:36:42.617405 ignition[1020]: INFO : Stage: umount Mar 12 01:36:42.617405 ignition[1020]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 12 01:36:42.617405 ignition[1020]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 12 01:36:42.631008 ignition[1020]: INFO : umount: umount passed Mar 12 01:36:42.631008 ignition[1020]: INFO : Ignition finished successfully Mar 12 01:36:42.645985 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 12 01:36:42.651976 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 12 01:36:42.655335 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 12 01:36:42.662919 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 12 01:36:42.666877 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 12 01:36:42.679094 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 12 01:36:42.682752 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 12 01:36:42.685319 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 12 01:36:42.693859 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 12 01:36:42.693998 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 12 01:36:42.706301 systemd[1]: Stopped target network.target - Network. Mar 12 01:36:42.713060 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 12 01:36:42.713172 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 12 01:36:42.721321 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 12 01:36:42.721447 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 12 01:36:42.730697 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 12 01:36:42.730785 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 12 01:36:42.738655 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 12 01:36:42.738736 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 12 01:36:42.746858 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 12 01:36:42.752729 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 12 01:36:42.753624 systemd-networkd[790]: eth0: DHCPv6 lease lost Mar 12 01:36:42.761998 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 12 01:36:42.765643 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 12 01:36:42.774075 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 12 01:36:42.777325 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 12 01:36:42.786192 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 12 01:36:42.789444 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 12 01:36:42.798148 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 12 01:36:42.798226 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 12 01:36:42.806870 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 12 01:36:42.806959 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 12 01:36:42.824818 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 12 01:36:42.826054 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Mar 12 01:36:42.826130 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 12 01:36:42.831639 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 12 01:36:42.831710 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 12 01:36:42.840239 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 12 01:36:42.840307 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 12 01:36:42.847467 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 12 01:36:42.847530 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 12 01:36:42.853458 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 12 01:36:42.881280 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 12 01:36:42.881613 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 12 01:36:42.887555 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 12 01:36:42.887789 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 12 01:36:42.893987 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 12 01:36:42.894048 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 12 01:36:42.898781 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 12 01:36:42.898822 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 12 01:36:42.904398 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 12 01:36:42.904461 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 12 01:36:42.910467 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 12 01:36:42.910540 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 12 01:36:42.915451 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 12 01:36:42.915502 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 12 01:36:42.934791 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 12 01:36:42.940705 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 12 01:36:42.940767 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 12 01:36:42.944767 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 12 01:36:42.944818 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 12 01:36:42.951971 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 12 01:36:42.952099 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 12 01:36:42.960060 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 12 01:36:42.964541 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 12 01:36:42.979964 systemd[1]: Switching root. Mar 12 01:36:43.016679 systemd-journald[194]: Journal stopped Mar 12 01:36:44.275966 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). 
Mar 12 01:36:44.276035 kernel: SELinux: policy capability network_peer_controls=1 Mar 12 01:36:44.276053 kernel: SELinux: policy capability open_perms=1 Mar 12 01:36:44.276068 kernel: SELinux: policy capability extended_socket_class=1 Mar 12 01:36:44.276078 kernel: SELinux: policy capability always_check_network=0 Mar 12 01:36:44.276088 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 12 01:36:44.276098 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 12 01:36:44.276109 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 12 01:36:44.276119 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 12 01:36:44.276129 kernel: audit: type=1403 audit(1773279403.239:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 12 01:36:44.276147 systemd[1]: Successfully loaded SELinux policy in 46.923ms. Mar 12 01:36:44.276167 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.490ms. Mar 12 01:36:44.276179 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 12 01:36:44.276190 systemd[1]: Detected virtualization kvm. Mar 12 01:36:44.276201 systemd[1]: Detected architecture x86-64. Mar 12 01:36:44.276212 systemd[1]: Detected first boot. Mar 12 01:36:44.276223 systemd[1]: Initializing machine ID from VM UUID. Mar 12 01:36:44.276234 zram_generator::config[1082]: No configuration found. Mar 12 01:36:44.276256 systemd[1]: Populated /etc with preset unit settings. Mar 12 01:36:44.276273 systemd[1]: Queued start job for default target multi-user.target. Mar 12 01:36:44.276284 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Mar 12 01:36:44.276295 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 12 01:36:44.276307 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 12 01:36:44.276317 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 12 01:36:44.276328 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 12 01:36:44.276339 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 12 01:36:44.276350 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 12 01:36:44.276403 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 12 01:36:44.276415 systemd[1]: Created slice user.slice - User and Session Slice. Mar 12 01:36:44.276426 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 12 01:36:44.276437 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 12 01:36:44.276448 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 12 01:36:44.276460 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 12 01:36:44.276471 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 12 01:36:44.276482 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 12 01:36:44.276493 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... 
Mar 12 01:36:44.276511 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 12 01:36:44.276522 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 12 01:36:44.276533 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 12 01:36:44.276543 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 12 01:36:44.276555 systemd[1]: Reached target slices.target - Slice Units. Mar 12 01:36:44.276800 systemd[1]: Reached target swap.target - Swaps. Mar 12 01:36:44.276813 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 12 01:36:44.276825 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 12 01:36:44.276840 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 12 01:36:44.276851 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Mar 12 01:36:44.276861 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 12 01:36:44.276872 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 12 01:36:44.276883 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 12 01:36:44.276894 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 12 01:36:44.276905 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 12 01:36:44.276915 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 12 01:36:44.276926 systemd[1]: Mounting media.mount - External Media Directory... Mar 12 01:36:44.276936 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 12 01:36:44.276950 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 12 01:36:44.276960 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 12 01:36:44.276971 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 12 01:36:44.276983 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 12 01:36:44.276994 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 12 01:36:44.277004 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 12 01:36:44.277015 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 12 01:36:44.277026 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 12 01:36:44.277039 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 12 01:36:44.277050 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 12 01:36:44.277060 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 12 01:36:44.277071 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 12 01:36:44.277082 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 12 01:36:44.277092 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Mar 12 01:36:44.277104 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) 
Mar 12 01:36:44.277115 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 12 01:36:44.277128 kernel: fuse: init (API version 7.39) Mar 12 01:36:44.277138 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 12 01:36:44.277149 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 12 01:36:44.277160 kernel: loop: module loaded Mar 12 01:36:44.277170 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 12 01:36:44.277203 systemd-journald[1177]: Collecting audit messages is disabled. Mar 12 01:36:44.277231 kernel: ACPI: bus type drm_connector registered Mar 12 01:36:44.277242 systemd-journald[1177]: Journal started Mar 12 01:36:44.277266 systemd-journald[1177]: Runtime Journal (/run/log/journal/d0c4fb7dc9634a53903872631ef87366) is 6.0M, max 48.3M, 42.2M free. Mar 12 01:36:44.283617 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 12 01:36:44.292684 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 12 01:36:44.311660 systemd[1]: Started systemd-journald.service - Journal Service. Mar 12 01:36:44.315297 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 12 01:36:44.318400 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 12 01:36:44.321673 systemd[1]: Mounted media.mount - External Media Directory. Mar 12 01:36:44.324623 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 12 01:36:44.327827 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 12 01:36:44.331018 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 12 01:36:44.334159 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 12 01:36:44.337897 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 12 01:36:44.341751 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 12 01:36:44.341976 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 12 01:36:44.346125 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 12 01:36:44.346394 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 12 01:36:44.350061 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 12 01:36:44.350289 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 12 01:36:44.353773 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 12 01:36:44.353989 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 12 01:36:44.357933 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 12 01:36:44.358213 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 12 01:36:44.362083 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 12 01:36:44.362398 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 12 01:36:44.366454 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 12 01:36:44.370167 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 12 01:36:44.374249 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
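The modprobe@ instances above cover configfs, dm_mod, drm, efi_pstore, fuse and loop, and the kernel lines confirm fuse and loop initialising. A small sketch for checking which of these are visible after boot; note that drivers built into the kernel never appear in /proc/modules, so a miss is inconclusive rather than a failure.

    # Sketch: report which of the modules named in the modprobe@ units above
    # are listed in /proc/modules. Built-in drivers do not show up there.
    MODULES = ("configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop")

    with open("/proc/modules") as f:
        loaded = {line.split()[0] for line in f}

    for name in MODULES:
        state = "loaded" if name in loaded else "not listed (possibly built-in)"
        print(f"{name}: {state}")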
Mar 12 01:36:44.388790 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 12 01:36:44.401749 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 12 01:36:44.406067 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 12 01:36:44.409021 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 12 01:36:44.412281 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 12 01:36:44.417416 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 12 01:36:44.420763 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 12 01:36:44.422104 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 12 01:36:44.425258 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 12 01:36:44.426971 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 12 01:36:44.430877 systemd-journald[1177]: Time spent on flushing to /var/log/journal/d0c4fb7dc9634a53903872631ef87366 is 19.069ms for 971 entries. Mar 12 01:36:44.430877 systemd-journald[1177]: System Journal (/var/log/journal/d0c4fb7dc9634a53903872631ef87366) is 8.0M, max 195.6M, 187.6M free. Mar 12 01:36:44.462776 systemd-journald[1177]: Received client request to flush runtime journal. Mar 12 01:36:44.431526 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 12 01:36:44.439992 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 12 01:36:44.443905 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 12 01:36:44.448463 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 12 01:36:44.454070 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 12 01:36:44.463016 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 12 01:36:44.477861 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 12 01:36:44.482758 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 12 01:36:44.490801 systemd-tmpfiles[1219]: ACLs are not supported, ignoring. Mar 12 01:36:44.490833 systemd-tmpfiles[1219]: ACLs are not supported, ignoring. Mar 12 01:36:44.491645 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 12 01:36:44.499022 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 12 01:36:44.512987 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 12 01:36:44.518179 udevadm[1229]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Mar 12 01:36:44.545547 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 12 01:36:44.555701 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 12 01:36:44.581046 systemd-tmpfiles[1241]: ACLs are not supported, ignoring. Mar 12 01:36:44.581092 systemd-tmpfiles[1241]: ACLs are not supported, ignoring. 
Mar 12 01:36:44.589030 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 12 01:36:44.864392 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 12 01:36:44.877859 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 12 01:36:44.902774 systemd-udevd[1247]: Using default interface naming scheme 'v255'. Mar 12 01:36:44.926640 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 12 01:36:44.937765 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 12 01:36:44.946823 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 12 01:36:44.972642 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Mar 12 01:36:44.986605 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1266) Mar 12 01:36:45.013214 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 12 01:36:45.044696 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Mar 12 01:36:45.053693 kernel: ACPI: button: Power Button [PWRF] Mar 12 01:36:45.066488 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 12 01:36:45.085226 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Mar 12 01:36:45.085533 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 12 01:36:45.085832 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Mar 12 01:36:45.086006 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 12 01:36:45.088355 systemd-networkd[1251]: lo: Link UP Mar 12 01:36:45.088403 systemd-networkd[1251]: lo: Gained carrier Mar 12 01:36:45.091219 systemd-networkd[1251]: Enumeration completed Mar 12 01:36:45.091336 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 12 01:36:45.095535 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Mar 12 01:36:45.096276 systemd-networkd[1251]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 12 01:36:45.096773 systemd-networkd[1251]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 12 01:36:45.099143 systemd-networkd[1251]: eth0: Link UP Mar 12 01:36:45.100243 systemd-networkd[1251]: eth0: Gained carrier Mar 12 01:36:45.100431 systemd-networkd[1251]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 12 01:36:45.105747 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 12 01:36:45.119629 systemd-networkd[1251]: eth0: DHCPv4 address 10.0.0.145/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 12 01:36:45.202617 kernel: mousedev: PS/2 mouse device common for all mice Mar 12 01:36:45.202891 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 12 01:36:45.216617 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 12 01:36:45.219035 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
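Both the initrd networkd and the re-enumerated networkd above acquire the same DHCPv4 lease, 10.0.0.145/16 with gateway 10.0.0.1. For reference, a /16 prefix places the address in a 65,534-host network with netmask 255.255.0.0, and the gateway is on-link; the snippet below just works that out with the standard library.

    # Worked example for the DHCPv4 lease reported above.
    import ipaddress

    iface = ipaddress.ip_interface("10.0.0.145/16")
    print(iface.network)                      # 10.0.0.0/16
    print(iface.network.netmask)              # 255.255.0.0
    print(iface.network.num_addresses - 2)    # 65534 usable hosts
    print(ipaddress.ip_address("10.0.0.1") in iface.network)  # gateway on-link: True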
Mar 12 01:36:45.230680 kernel: kvm_amd: TSC scaling supported Mar 12 01:36:45.230733 kernel: kvm_amd: Nested Virtualization enabled Mar 12 01:36:45.230779 kernel: kvm_amd: Nested Paging enabled Mar 12 01:36:45.231021 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Mar 12 01:36:45.232485 kernel: kvm_amd: PMU virtualization is disabled Mar 12 01:36:45.274335 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 12 01:36:45.288831 kernel: EDAC MC: Ver: 3.0.0 Mar 12 01:36:45.329296 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 12 01:36:45.341916 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 12 01:36:45.347326 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 12 01:36:45.358684 lvm[1295]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 12 01:36:45.411437 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 12 01:36:45.417453 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 12 01:36:45.430868 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 12 01:36:45.437804 lvm[1300]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 12 01:36:45.473115 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 12 01:36:45.476929 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 12 01:36:45.480533 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 12 01:36:45.480639 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 12 01:36:45.483551 systemd[1]: Reached target machines.target - Containers. Mar 12 01:36:45.487287 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Mar 12 01:36:45.505839 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 12 01:36:45.511254 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 12 01:36:45.514834 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 12 01:36:45.515942 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 12 01:36:45.521293 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Mar 12 01:36:45.529720 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 12 01:36:45.534619 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 12 01:36:45.544851 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 12 01:36:45.546044 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Mar 12 01:36:45.554074 kernel: loop0: detected capacity change from 0 to 142488 Mar 12 01:36:45.558195 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Mar 12 01:36:45.582612 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 12 01:36:45.616686 kernel: loop1: detected capacity change from 0 to 140768 Mar 12 01:36:45.663632 kernel: loop2: detected capacity change from 0 to 228704 Mar 12 01:36:45.703730 kernel: loop3: detected capacity change from 0 to 142488 Mar 12 01:36:45.721618 kernel: loop4: detected capacity change from 0 to 140768 Mar 12 01:36:45.738709 kernel: loop5: detected capacity change from 0 to 228704 Mar 12 01:36:45.749938 (sd-merge)[1320]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Mar 12 01:36:45.750718 (sd-merge)[1320]: Merged extensions into '/usr'. Mar 12 01:36:45.755467 systemd[1]: Reloading requested from client PID 1308 ('systemd-sysext') (unit systemd-sysext.service)... Mar 12 01:36:45.755485 systemd[1]: Reloading... Mar 12 01:36:45.806643 zram_generator::config[1345]: No configuration found. Mar 12 01:36:45.815480 ldconfig[1304]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 12 01:36:45.954118 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 12 01:36:46.012347 systemd[1]: Reloading finished in 256 ms. Mar 12 01:36:46.030902 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 12 01:36:46.034620 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 12 01:36:46.055792 systemd[1]: Starting ensure-sysext.service... Mar 12 01:36:46.061418 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 12 01:36:46.070552 systemd[1]: Reloading requested from client PID 1392 ('systemctl') (unit ensure-sysext.service)... Mar 12 01:36:46.070704 systemd[1]: Reloading... Mar 12 01:36:46.098082 systemd-tmpfiles[1393]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 12 01:36:46.098898 systemd-tmpfiles[1393]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 12 01:36:46.101810 systemd-tmpfiles[1393]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 12 01:36:46.102196 systemd-tmpfiles[1393]: ACLs are not supported, ignoring. Mar 12 01:36:46.102301 systemd-tmpfiles[1393]: ACLs are not supported, ignoring. Mar 12 01:36:46.112453 systemd-tmpfiles[1393]: Detected autofs mount point /boot during canonicalization of boot. Mar 12 01:36:46.112483 systemd-tmpfiles[1393]: Skipping /boot Mar 12 01:36:46.131147 systemd-tmpfiles[1393]: Detected autofs mount point /boot during canonicalization of boot. Mar 12 01:36:46.131305 systemd-tmpfiles[1393]: Skipping /boot Mar 12 01:36:46.151630 zram_generator::config[1421]: No configuration found. Mar 12 01:36:46.314988 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 12 01:36:46.385417 systemd[1]: Reloading finished in 314 ms. Mar 12 01:36:46.402899 systemd-networkd[1251]: eth0: Gained IPv6LL Mar 12 01:36:46.412101 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 12 01:36:46.425713 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
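The sd-merge entries above pick up the 'containerd-flatcar', 'docker-flatcar' and 'kubernetes' extensions and merge them into /usr; the kubernetes image is the one the files stage linked at /etc/extensions/kubernetes.raw. The sketch below simply lists candidate sysext images in the directories systemd-sysext commonly scans. The directory set is a general convention assumed for this illustration; the log itself only names the merged extensions.

    # Sketch: enumerate sysext image candidates. The search directories are an
    # assumption (typical systemd-sysext locations), not read from this log.
    from pathlib import Path

    SEARCH_DIRS = ("/etc/extensions", "/run/extensions", "/var/lib/extensions")

    for d in map(Path, SEARCH_DIRS):
        if not d.is_dir():
            continue
        for img in sorted(d.iterdir()):
            if img.suffix == ".raw" or img.is_dir():
                print(f"{d}: {img.name}")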
Mar 12 01:36:46.442261 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 12 01:36:46.455872 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 12 01:36:46.461801 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 12 01:36:46.465727 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 12 01:36:46.467903 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 12 01:36:46.472968 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 12 01:36:46.483846 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 12 01:36:46.487494 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 12 01:36:46.495055 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 12 01:36:46.503776 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 12 01:36:46.520806 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 12 01:36:46.525452 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 12 01:36:46.529537 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 12 01:36:46.529954 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 12 01:36:46.531946 augenrules[1495]: No rules Mar 12 01:36:46.534997 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 12 01:36:46.539301 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 12 01:36:46.539612 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 12 01:36:46.543901 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 12 01:36:46.544174 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 12 01:36:46.548422 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 12 01:36:46.553121 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 12 01:36:46.568023 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 12 01:36:46.568315 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 12 01:36:46.570107 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 12 01:36:46.575873 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 12 01:36:46.580414 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 12 01:36:46.583737 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 12 01:36:46.586852 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 12 01:36:46.593154 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Mar 12 01:36:46.593322 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 12 01:36:46.595521 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 12 01:36:46.599871 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 12 01:36:46.600124 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 12 01:36:46.604069 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 12 01:36:46.604334 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 12 01:36:46.608197 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 12 01:36:46.608548 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 12 01:36:46.618766 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 12 01:36:46.619055 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 12 01:36:46.626780 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 12 01:36:46.631082 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 12 01:36:46.637753 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 12 01:36:46.643734 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 12 01:36:46.645678 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 12 01:36:46.645741 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 12 01:36:46.645764 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 12 01:36:46.647866 systemd[1]: Finished ensure-sysext.service. Mar 12 01:36:46.651534 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 12 01:36:46.652025 systemd-resolved[1485]: Positive Trust Anchors: Mar 12 01:36:46.652041 systemd-resolved[1485]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 12 01:36:46.652067 systemd-resolved[1485]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 12 01:36:46.655754 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 12 01:36:46.655989 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 12 01:36:46.657246 systemd-resolved[1485]: Defaulting to hostname 'linux'. Mar 12 01:36:46.659773 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 12 01:36:46.663173 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Mar 12 01:36:46.663453 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 12 01:36:46.667003 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 12 01:36:46.667238 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 12 01:36:46.671068 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 12 01:36:46.671330 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 12 01:36:46.680716 systemd[1]: Reached target network.target - Network. Mar 12 01:36:46.683211 systemd[1]: Reached target network-online.target - Network is Online. Mar 12 01:36:46.686190 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 12 01:36:46.689461 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 12 01:36:46.689596 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 12 01:36:46.700877 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 12 01:36:46.768095 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 12 01:36:46.771512 systemd[1]: Reached target sysinit.target - System Initialization. Mar 12 01:36:46.774423 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 12 01:36:46.777877 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 12 01:36:46.781259 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 12 01:36:47.307184 systemd-resolved[1485]: Clock change detected. Flushing caches. Mar 12 01:36:47.307216 systemd-timesyncd[1541]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 12 01:36:47.307306 systemd-timesyncd[1541]: Initial clock synchronization to Thu 2026-03-12 01:36:47.307060 UTC. Mar 12 01:36:47.309696 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 12 01:36:47.309744 systemd[1]: Reached target paths.target - Path Units. Mar 12 01:36:47.312054 systemd[1]: Reached target time-set.target - System Time Set. Mar 12 01:36:47.314891 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 12 01:36:47.317896 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 12 01:36:47.321157 systemd[1]: Reached target timers.target - Timer Units. Mar 12 01:36:47.324286 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 12 01:36:47.329300 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 12 01:36:47.333262 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 12 01:36:47.338989 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 12 01:36:47.341858 systemd[1]: Reached target sockets.target - Socket Units. Mar 12 01:36:47.344367 systemd[1]: Reached target basic.target - Basic System. Mar 12 01:36:47.347276 systemd[1]: System is tainted: cgroupsv1 Mar 12 01:36:47.347336 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 12 01:36:47.347360 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
Mar 12 01:36:47.349115 systemd[1]: Starting containerd.service - containerd container runtime... Mar 12 01:36:47.353275 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 12 01:36:47.355710 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 12 01:36:47.361295 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 12 01:36:47.365972 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 12 01:36:47.368698 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 12 01:36:47.372754 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 01:36:47.377165 jq[1549]: false Mar 12 01:36:47.380149 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 12 01:36:47.387509 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 12 01:36:47.390332 dbus-daemon[1547]: [system] SELinux support is enabled Mar 12 01:36:47.393075 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 12 01:36:47.401082 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 12 01:36:47.407794 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 12 01:36:47.411837 extend-filesystems[1551]: Found loop3 Mar 12 01:36:47.414415 extend-filesystems[1551]: Found loop4 Mar 12 01:36:47.414415 extend-filesystems[1551]: Found loop5 Mar 12 01:36:47.414415 extend-filesystems[1551]: Found sr0 Mar 12 01:36:47.414415 extend-filesystems[1551]: Found vda Mar 12 01:36:47.414415 extend-filesystems[1551]: Found vda1 Mar 12 01:36:47.414415 extend-filesystems[1551]: Found vda2 Mar 12 01:36:47.414415 extend-filesystems[1551]: Found vda3 Mar 12 01:36:47.414415 extend-filesystems[1551]: Found usr Mar 12 01:36:47.414415 extend-filesystems[1551]: Found vda4 Mar 12 01:36:47.414415 extend-filesystems[1551]: Found vda6 Mar 12 01:36:47.414415 extend-filesystems[1551]: Found vda7 Mar 12 01:36:47.414415 extend-filesystems[1551]: Found vda9 Mar 12 01:36:47.414415 extend-filesystems[1551]: Checking size of /dev/vda9 Mar 12 01:36:47.530464 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 12 01:36:47.530493 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1263) Mar 12 01:36:47.530507 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 12 01:36:47.425173 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 12 01:36:47.530716 extend-filesystems[1551]: Resized partition /dev/vda9 Mar 12 01:36:47.429559 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 12 01:36:47.531040 extend-filesystems[1585]: resize2fs 1.47.1 (20-May-2024) Mar 12 01:36:47.531040 extend-filesystems[1585]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 12 01:36:47.531040 extend-filesystems[1585]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 12 01:36:47.531040 extend-filesystems[1585]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 12 01:36:47.433815 systemd[1]: Starting update-engine.service - Update Engine... 
Mar 12 01:36:47.552761 extend-filesystems[1551]: Resized filesystem in /dev/vda9 Mar 12 01:36:47.552907 update_engine[1579]: I20260312 01:36:47.469466 1579 main.cc:92] Flatcar Update Engine starting Mar 12 01:36:47.552907 update_engine[1579]: I20260312 01:36:47.473785 1579 update_check_scheduler.cc:74] Next update check in 3m50s Mar 12 01:36:47.441400 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 12 01:36:47.553273 jq[1584]: true Mar 12 01:36:47.449749 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 12 01:36:47.472062 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 12 01:36:47.472372 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 12 01:36:47.473316 systemd[1]: motdgen.service: Deactivated successfully. Mar 12 01:36:47.473705 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 12 01:36:47.496061 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 12 01:36:47.555945 jq[1594]: true Mar 12 01:36:47.502755 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 12 01:36:47.503124 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 12 01:36:47.522572 (ntainerd)[1595]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 12 01:36:47.524997 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 12 01:36:47.525378 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 12 01:36:47.548605 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 12 01:36:47.548970 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 12 01:36:47.555879 systemd-logind[1574]: Watching system buttons on /dev/input/event1 (Power Button) Mar 12 01:36:47.555909 systemd-logind[1574]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 12 01:36:47.558755 systemd-logind[1574]: New seat seat0. Mar 12 01:36:47.571029 systemd[1]: Started systemd-logind.service - User Login Management. Mar 12 01:36:47.597584 dbus-daemon[1547]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 12 01:36:47.602927 tar[1592]: linux-amd64/LICENSE Mar 12 01:36:47.603369 tar[1592]: linux-amd64/helm Mar 12 01:36:47.612314 systemd[1]: Started update-engine.service - Update Engine. Mar 12 01:36:47.618251 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 12 01:36:47.618589 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 12 01:36:47.618858 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 12 01:36:47.623575 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 12 01:36:47.623787 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 12 01:36:47.626229 bash[1630]: Updated "/home/core/.ssh/authorized_keys" Mar 12 01:36:47.629011 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. 
See cgroup-compat debug messages for details. Mar 12 01:36:47.637812 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 12 01:36:47.648571 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 12 01:36:47.654888 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 12 01:36:47.706079 locksmithd[1632]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 12 01:36:47.730924 sshd_keygen[1583]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 12 01:36:47.769301 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 12 01:36:47.784713 containerd[1595]: time="2026-03-12T01:36:47.782837766Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 12 01:36:47.782074 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 12 01:36:47.791064 systemd[1]: issuegen.service: Deactivated successfully. Mar 12 01:36:47.791557 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 12 01:36:47.810013 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 12 01:36:47.821831 containerd[1595]: time="2026-03-12T01:36:47.821759979Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 12 01:36:47.824151 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 12 01:36:47.828974 containerd[1595]: time="2026-03-12T01:36:47.824931530Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 12 01:36:47.828974 containerd[1595]: time="2026-03-12T01:36:47.824966315Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 12 01:36:47.828974 containerd[1595]: time="2026-03-12T01:36:47.824983135Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 12 01:36:47.828974 containerd[1595]: time="2026-03-12T01:36:47.825148564Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 12 01:36:47.828974 containerd[1595]: time="2026-03-12T01:36:47.825163592Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 12 01:36:47.828974 containerd[1595]: time="2026-03-12T01:36:47.825227391Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 12 01:36:47.828974 containerd[1595]: time="2026-03-12T01:36:47.825240496Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 12 01:36:47.828974 containerd[1595]: time="2026-03-12T01:36:47.825522353Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 12 01:36:47.828974 containerd[1595]: time="2026-03-12T01:36:47.825538012Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Mar 12 01:36:47.828974 containerd[1595]: time="2026-03-12T01:36:47.825549924Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 12 01:36:47.828974 containerd[1595]: time="2026-03-12T01:36:47.825559452Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 12 01:36:47.829208 containerd[1595]: time="2026-03-12T01:36:47.825703601Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 12 01:36:47.829208 containerd[1595]: time="2026-03-12T01:36:47.825932439Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 12 01:36:47.829208 containerd[1595]: time="2026-03-12T01:36:47.826093719Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 12 01:36:47.829208 containerd[1595]: time="2026-03-12T01:36:47.826106684Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 12 01:36:47.829208 containerd[1595]: time="2026-03-12T01:36:47.826238099Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 12 01:36:47.829208 containerd[1595]: time="2026-03-12T01:36:47.826307429Z" level=info msg="metadata content store policy set" policy=shared Mar 12 01:36:47.838148 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 12 01:36:47.842071 containerd[1595]: time="2026-03-12T01:36:47.839005170Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 12 01:36:47.842071 containerd[1595]: time="2026-03-12T01:36:47.839142827Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 12 01:36:47.842071 containerd[1595]: time="2026-03-12T01:36:47.839239858Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 12 01:36:47.842071 containerd[1595]: time="2026-03-12T01:36:47.839271056Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 12 01:36:47.842071 containerd[1595]: time="2026-03-12T01:36:47.839284351Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 12 01:36:47.842071 containerd[1595]: time="2026-03-12T01:36:47.839476029Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 12 01:36:47.842071 containerd[1595]: time="2026-03-12T01:36:47.841560592Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 12 01:36:47.842071 containerd[1595]: time="2026-03-12T01:36:47.841833972Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 12 01:36:47.842071 containerd[1595]: time="2026-03-12T01:36:47.841855653Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 12 01:36:47.842071 containerd[1595]: time="2026-03-12T01:36:47.841870852Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Mar 12 01:36:47.842071 containerd[1595]: time="2026-03-12T01:36:47.841887392Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 12 01:36:47.842071 containerd[1595]: time="2026-03-12T01:36:47.841953206Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 12 01:36:47.842071 containerd[1595]: time="2026-03-12T01:36:47.841970488Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 12 01:36:47.842071 containerd[1595]: time="2026-03-12T01:36:47.841987920Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 12 01:36:47.842476 containerd[1595]: time="2026-03-12T01:36:47.842005754Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 12 01:36:47.842476 containerd[1595]: time="2026-03-12T01:36:47.842020160Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 12 01:36:47.842476 containerd[1595]: time="2026-03-12T01:36:47.842032263Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 12 01:36:47.842476 containerd[1595]: time="2026-03-12T01:36:47.842046289Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 12 01:36:47.842476 containerd[1595]: time="2026-03-12T01:36:47.842077587Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 12 01:36:47.842476 containerd[1595]: time="2026-03-12T01:36:47.842099569Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 12 01:36:47.842476 containerd[1595]: time="2026-03-12T01:36:47.842129034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 12 01:36:47.842476 containerd[1595]: time="2026-03-12T01:36:47.842144733Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 12 01:36:47.842476 containerd[1595]: time="2026-03-12T01:36:47.842158749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 12 01:36:47.842476 containerd[1595]: time="2026-03-12T01:36:47.842173377Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 12 01:36:47.842476 containerd[1595]: time="2026-03-12T01:36:47.842184217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 12 01:36:47.842476 containerd[1595]: time="2026-03-12T01:36:47.842198163Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 12 01:36:47.842476 containerd[1595]: time="2026-03-12T01:36:47.842212379Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 12 01:36:47.842476 containerd[1595]: time="2026-03-12T01:36:47.842228379Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 12 01:36:47.842896 containerd[1595]: time="2026-03-12T01:36:47.842241724Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Mar 12 01:36:47.842896 containerd[1595]: time="2026-03-12T01:36:47.842255971Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 12 01:36:47.842896 containerd[1595]: time="2026-03-12T01:36:47.842270448Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 12 01:36:47.842896 containerd[1595]: time="2026-03-12T01:36:47.842289674Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 12 01:36:47.842896 containerd[1595]: time="2026-03-12T01:36:47.842312546Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 12 01:36:47.842896 containerd[1595]: time="2026-03-12T01:36:47.842327545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 12 01:36:47.842896 containerd[1595]: time="2026-03-12T01:36:47.842353282Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 12 01:36:47.842896 containerd[1595]: time="2026-03-12T01:36:47.842408836Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 12 01:36:47.842896 containerd[1595]: time="2026-03-12T01:36:47.842472705Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 12 01:36:47.842896 containerd[1595]: time="2026-03-12T01:36:47.842488485Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 12 01:36:47.842896 containerd[1595]: time="2026-03-12T01:36:47.842503653Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 12 01:36:47.842896 containerd[1595]: time="2026-03-12T01:36:47.842513412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 12 01:36:47.842896 containerd[1595]: time="2026-03-12T01:36:47.842528209Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 12 01:36:47.842896 containerd[1595]: time="2026-03-12T01:36:47.842540682Z" level=info msg="NRI interface is disabled by configuration." Mar 12 01:36:47.843250 containerd[1595]: time="2026-03-12T01:36:47.842550290Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 12 01:36:47.846294 containerd[1595]: time="2026-03-12T01:36:47.843859004Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 12 01:36:47.846294 containerd[1595]: time="2026-03-12T01:36:47.843918736Z" level=info msg="Connect containerd service" Mar 12 01:36:47.846294 containerd[1595]: time="2026-03-12T01:36:47.843954773Z" level=info msg="using legacy CRI server" Mar 12 01:36:47.846294 containerd[1595]: time="2026-03-12T01:36:47.843962117Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 12 01:36:47.846294 containerd[1595]: time="2026-03-12T01:36:47.844082862Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 12 01:36:47.846294 containerd[1595]: time="2026-03-12T01:36:47.844734419Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 12 
01:36:47.846294 containerd[1595]: time="2026-03-12T01:36:47.845260702Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 12 01:36:47.846294 containerd[1595]: time="2026-03-12T01:36:47.845316746Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 12 01:36:47.846294 containerd[1595]: time="2026-03-12T01:36:47.845354466Z" level=info msg="Start subscribing containerd event" Mar 12 01:36:47.846294 containerd[1595]: time="2026-03-12T01:36:47.845386377Z" level=info msg="Start recovering state" Mar 12 01:36:47.846294 containerd[1595]: time="2026-03-12T01:36:47.845504357Z" level=info msg="Start event monitor" Mar 12 01:36:47.846294 containerd[1595]: time="2026-03-12T01:36:47.845523963Z" level=info msg="Start snapshots syncer" Mar 12 01:36:47.846294 containerd[1595]: time="2026-03-12T01:36:47.845536757Z" level=info msg="Start cni network conf syncer for default" Mar 12 01:36:47.846294 containerd[1595]: time="2026-03-12T01:36:47.845544181Z" level=info msg="Start streaming server" Mar 12 01:36:47.846294 containerd[1595]: time="2026-03-12T01:36:47.845595848Z" level=info msg="containerd successfully booted in 0.064573s" Mar 12 01:36:47.856035 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 12 01:36:47.859854 systemd[1]: Reached target getty.target - Login Prompts. Mar 12 01:36:47.863056 systemd[1]: Started containerd.service - containerd container runtime. Mar 12 01:36:48.079088 tar[1592]: linux-amd64/README.md Mar 12 01:36:48.092288 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 12 01:36:48.474009 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 01:36:48.477981 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 12 01:36:48.480828 systemd[1]: Startup finished in 8.696s (kernel) + 4.761s (userspace) = 13.458s. Mar 12 01:36:48.482351 (kubelet)[1682]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 12 01:36:48.939286 kubelet[1682]: E0312 01:36:48.939150 1682 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 12 01:36:48.942780 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 12 01:36:48.943061 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 12 01:36:50.693940 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 12 01:36:50.705040 systemd[1]: Started sshd@0-10.0.0.145:22-10.0.0.1:53784.service - OpenSSH per-connection server daemon (10.0.0.1:53784). Mar 12 01:36:50.752584 sshd[1695]: Accepted publickey for core from 10.0.0.1 port 53784 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:36:50.755318 sshd[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:36:50.767675 systemd-logind[1574]: New session 1 of user core. Mar 12 01:36:50.769320 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 12 01:36:50.782043 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 12 01:36:50.801364 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Mar 12 01:36:50.808092 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 12 01:36:50.816269 (systemd)[1701]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 12 01:36:50.943054 systemd[1701]: Queued start job for default target default.target. Mar 12 01:36:50.943526 systemd[1701]: Created slice app.slice - User Application Slice. Mar 12 01:36:50.943547 systemd[1701]: Reached target paths.target - Paths. Mar 12 01:36:50.943559 systemd[1701]: Reached target timers.target - Timers. Mar 12 01:36:50.953770 systemd[1701]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 12 01:36:50.961151 systemd[1701]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 12 01:36:50.961261 systemd[1701]: Reached target sockets.target - Sockets. Mar 12 01:36:50.961275 systemd[1701]: Reached target basic.target - Basic System. Mar 12 01:36:50.961315 systemd[1701]: Reached target default.target - Main User Target. Mar 12 01:36:50.961350 systemd[1701]: Startup finished in 135ms. Mar 12 01:36:50.961999 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 12 01:36:50.963914 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 12 01:36:51.021993 systemd[1]: Started sshd@1-10.0.0.145:22-10.0.0.1:53790.service - OpenSSH per-connection server daemon (10.0.0.1:53790). Mar 12 01:36:51.062949 sshd[1713]: Accepted publickey for core from 10.0.0.1 port 53790 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:36:51.065141 sshd[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:36:51.070700 systemd-logind[1574]: New session 2 of user core. Mar 12 01:36:51.083027 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 12 01:36:51.149358 sshd[1713]: pam_unix(sshd:session): session closed for user core Mar 12 01:36:51.163952 systemd[1]: Started sshd@2-10.0.0.145:22-10.0.0.1:53800.service - OpenSSH per-connection server daemon (10.0.0.1:53800). Mar 12 01:36:51.164878 systemd[1]: sshd@1-10.0.0.145:22-10.0.0.1:53790.service: Deactivated successfully. Mar 12 01:36:51.168776 systemd-logind[1574]: Session 2 logged out. Waiting for processes to exit. Mar 12 01:36:51.169552 systemd[1]: session-2.scope: Deactivated successfully. Mar 12 01:36:51.171810 systemd-logind[1574]: Removed session 2. Mar 12 01:36:51.196593 sshd[1718]: Accepted publickey for core from 10.0.0.1 port 53800 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:36:51.198406 sshd[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:36:51.203612 systemd-logind[1574]: New session 3 of user core. Mar 12 01:36:51.214252 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 12 01:36:51.269232 sshd[1718]: pam_unix(sshd:session): session closed for user core Mar 12 01:36:51.282520 systemd[1]: Started sshd@3-10.0.0.145:22-10.0.0.1:53816.service - OpenSSH per-connection server daemon (10.0.0.1:53816). Mar 12 01:36:51.283465 systemd[1]: sshd@2-10.0.0.145:22-10.0.0.1:53800.service: Deactivated successfully. Mar 12 01:36:51.288699 systemd[1]: session-3.scope: Deactivated successfully. Mar 12 01:36:51.292693 systemd-logind[1574]: Session 3 logged out. Waiting for processes to exit. Mar 12 01:36:51.298609 systemd-logind[1574]: Removed session 3. 
Mar 12 01:36:51.325053 sshd[1726]: Accepted publickey for core from 10.0.0.1 port 53816 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:36:51.327247 sshd[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:36:51.334322 systemd-logind[1574]: New session 4 of user core. Mar 12 01:36:51.344039 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 12 01:36:51.408777 sshd[1726]: pam_unix(sshd:session): session closed for user core Mar 12 01:36:51.422089 systemd[1]: Started sshd@4-10.0.0.145:22-10.0.0.1:53824.service - OpenSSH per-connection server daemon (10.0.0.1:53824). Mar 12 01:36:51.422843 systemd[1]: sshd@3-10.0.0.145:22-10.0.0.1:53816.service: Deactivated successfully. Mar 12 01:36:51.427017 systemd-logind[1574]: Session 4 logged out. Waiting for processes to exit. Mar 12 01:36:51.429496 systemd[1]: session-4.scope: Deactivated successfully. Mar 12 01:36:51.431235 systemd-logind[1574]: Removed session 4. Mar 12 01:36:51.475816 sshd[1734]: Accepted publickey for core from 10.0.0.1 port 53824 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:36:51.477993 sshd[1734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:36:51.490703 systemd-logind[1574]: New session 5 of user core. Mar 12 01:36:51.501320 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 12 01:36:51.569978 sudo[1741]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 12 01:36:51.570333 sudo[1741]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 12 01:36:51.592513 sudo[1741]: pam_unix(sudo:session): session closed for user root Mar 12 01:36:51.594804 sshd[1734]: pam_unix(sshd:session): session closed for user core Mar 12 01:36:51.603991 systemd[1]: Started sshd@5-10.0.0.145:22-10.0.0.1:53836.service - OpenSSH per-connection server daemon (10.0.0.1:53836). Mar 12 01:36:51.605595 systemd[1]: sshd@4-10.0.0.145:22-10.0.0.1:53824.service: Deactivated successfully. Mar 12 01:36:51.608233 systemd[1]: session-5.scope: Deactivated successfully. Mar 12 01:36:51.610876 systemd-logind[1574]: Session 5 logged out. Waiting for processes to exit. Mar 12 01:36:51.614902 systemd-logind[1574]: Removed session 5. Mar 12 01:36:51.652892 sshd[1744]: Accepted publickey for core from 10.0.0.1 port 53836 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:36:51.654953 sshd[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:36:51.660547 systemd-logind[1574]: New session 6 of user core. Mar 12 01:36:51.681067 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 12 01:36:51.743462 sudo[1751]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 12 01:36:51.743870 sudo[1751]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 12 01:36:51.749598 sudo[1751]: pam_unix(sudo:session): session closed for user root Mar 12 01:36:51.759303 sudo[1750]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 12 01:36:51.760003 sudo[1750]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 12 01:36:51.781969 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 12 01:36:51.785466 auditctl[1754]: No rules Mar 12 01:36:51.786198 systemd[1]: audit-rules.service: Deactivated successfully. 
Mar 12 01:36:51.786795 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 12 01:36:51.791182 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 12 01:36:51.830387 augenrules[1773]: No rules Mar 12 01:36:51.831534 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 12 01:36:51.832962 sudo[1750]: pam_unix(sudo:session): session closed for user root Mar 12 01:36:51.835297 sshd[1744]: pam_unix(sshd:session): session closed for user core Mar 12 01:36:51.844972 systemd[1]: Started sshd@6-10.0.0.145:22-10.0.0.1:53846.service - OpenSSH per-connection server daemon (10.0.0.1:53846). Mar 12 01:36:51.845804 systemd[1]: sshd@5-10.0.0.145:22-10.0.0.1:53836.service: Deactivated successfully. Mar 12 01:36:51.847494 systemd[1]: session-6.scope: Deactivated successfully. Mar 12 01:36:51.848518 systemd-logind[1574]: Session 6 logged out. Waiting for processes to exit. Mar 12 01:36:51.850750 systemd-logind[1574]: Removed session 6. Mar 12 01:36:51.883611 sshd[1780]: Accepted publickey for core from 10.0.0.1 port 53846 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:36:51.885761 sshd[1780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:36:51.891748 systemd-logind[1574]: New session 7 of user core. Mar 12 01:36:51.901918 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 12 01:36:51.960398 sudo[1786]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 12 01:36:51.960886 sudo[1786]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 12 01:36:52.244930 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 12 01:36:52.245190 (dockerd)[1805]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 12 01:36:52.520777 dockerd[1805]: time="2026-03-12T01:36:52.520569308Z" level=info msg="Starting up" Mar 12 01:36:52.789253 dockerd[1805]: time="2026-03-12T01:36:52.789045104Z" level=info msg="Loading containers: start." Mar 12 01:36:52.931714 kernel: Initializing XFRM netlink socket Mar 12 01:36:53.027523 systemd-networkd[1251]: docker0: Link UP Mar 12 01:36:53.055765 dockerd[1805]: time="2026-03-12T01:36:53.055607208Z" level=info msg="Loading containers: done." Mar 12 01:36:53.072039 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3994883639-merged.mount: Deactivated successfully. Mar 12 01:36:53.072967 dockerd[1805]: time="2026-03-12T01:36:53.072904688Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 12 01:36:53.073033 dockerd[1805]: time="2026-03-12T01:36:53.073018591Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 12 01:36:53.073163 dockerd[1805]: time="2026-03-12T01:36:53.073115952Z" level=info msg="Daemon has completed initialization" Mar 12 01:36:53.119593 dockerd[1805]: time="2026-03-12T01:36:53.119508951Z" level=info msg="API listen on /run/docker.sock" Mar 12 01:36:53.119709 systemd[1]: Started docker.service - Docker Application Container Engine. 
Mar 12 01:36:53.600715 containerd[1595]: time="2026-03-12T01:36:53.600666427Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\"" Mar 12 01:36:54.104717 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3699398770.mount: Deactivated successfully. Mar 12 01:36:55.423325 containerd[1595]: time="2026-03-12T01:36:55.423248563Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:36:55.425387 containerd[1595]: time="2026-03-12T01:36:55.425299159Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=30116186" Mar 12 01:36:55.427090 containerd[1595]: time="2026-03-12T01:36:55.427003481Z" level=info msg="ImageCreate event name:\"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:36:55.433560 containerd[1595]: time="2026-03-12T01:36:55.433462102Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:36:55.435289 containerd[1595]: time="2026-03-12T01:36:55.434932398Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"30112785\" in 1.83422826s" Mar 12 01:36:55.435289 containerd[1595]: time="2026-03-12T01:36:55.434997910Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\"" Mar 12 01:36:55.435904 containerd[1595]: time="2026-03-12T01:36:55.435820607Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\"" Mar 12 01:36:56.777737 containerd[1595]: time="2026-03-12T01:36:56.777502053Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:36:56.779289 containerd[1595]: time="2026-03-12T01:36:56.779213835Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=26021810" Mar 12 01:36:56.780814 containerd[1595]: time="2026-03-12T01:36:56.780741340Z" level=info msg="ImageCreate event name:\"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:36:56.786751 containerd[1595]: time="2026-03-12T01:36:56.786677910Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:36:56.787922 containerd[1595]: time="2026-03-12T01:36:56.787881006Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"27678758\" in 1.352020676s" Mar 12 
01:36:56.787975 containerd[1595]: time="2026-03-12T01:36:56.787926702Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\"" Mar 12 01:36:56.788714 containerd[1595]: time="2026-03-12T01:36:56.788395815Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\"" Mar 12 01:36:57.997925 containerd[1595]: time="2026-03-12T01:36:57.997030798Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:36:57.998591 containerd[1595]: time="2026-03-12T01:36:57.998506619Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=20162746" Mar 12 01:36:58.001844 containerd[1595]: time="2026-03-12T01:36:58.001776608Z" level=info msg="ImageCreate event name:\"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:36:58.008911 containerd[1595]: time="2026-03-12T01:36:58.008721124Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:36:58.018955 containerd[1595]: time="2026-03-12T01:36:58.017093550Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"21819712\" in 1.228605863s" Mar 12 01:36:58.018955 containerd[1595]: time="2026-03-12T01:36:58.017171095Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\"" Mar 12 01:36:58.020083 containerd[1595]: time="2026-03-12T01:36:58.020010958Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\"" Mar 12 01:36:59.193318 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 12 01:36:59.210009 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 01:36:59.464847 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 01:36:59.472200 (kubelet)[2033]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 12 01:36:59.562466 kubelet[2033]: E0312 01:36:59.562362 2033 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 12 01:36:59.568864 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 12 01:36:59.569162 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 12 01:36:59.812248 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1085442996.mount: Deactivated successfully. 
Mar 12 01:37:00.813106 containerd[1595]: time="2026-03-12T01:37:00.812799283Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:37:00.814465 containerd[1595]: time="2026-03-12T01:37:00.814369485Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=31828647" Mar 12 01:37:00.816021 containerd[1595]: time="2026-03-12T01:37:00.815946598Z" level=info msg="ImageCreate event name:\"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:37:00.821588 containerd[1595]: time="2026-03-12T01:37:00.819555236Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:37:00.821588 containerd[1595]: time="2026-03-12T01:37:00.820460284Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"31827666\" in 2.800343329s" Mar 12 01:37:00.821588 containerd[1595]: time="2026-03-12T01:37:00.820494779Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\"" Mar 12 01:37:00.821588 containerd[1595]: time="2026-03-12T01:37:00.821114366Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Mar 12 01:37:01.616197 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3100347041.mount: Deactivated successfully. 
Mar 12 01:37:04.058886 containerd[1595]: time="2026-03-12T01:37:04.058757422Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:37:04.059983 containerd[1595]: time="2026-03-12T01:37:04.059896216Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Mar 12 01:37:04.062347 containerd[1595]: time="2026-03-12T01:37:04.062239857Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:37:04.069127 containerd[1595]: time="2026-03-12T01:37:04.069005121Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:37:04.070958 containerd[1595]: time="2026-03-12T01:37:04.070850403Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 3.249704449s" Mar 12 01:37:04.070958 containerd[1595]: time="2026-03-12T01:37:04.070911758Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Mar 12 01:37:04.072024 containerd[1595]: time="2026-03-12T01:37:04.071938776Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 12 01:37:04.615016 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2672784037.mount: Deactivated successfully. 
Mar 12 01:37:04.634578 containerd[1595]: time="2026-03-12T01:37:04.633125595Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:37:04.638580 containerd[1595]: time="2026-03-12T01:37:04.636296726Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Mar 12 01:37:04.640562 containerd[1595]: time="2026-03-12T01:37:04.640460684Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:37:04.648795 containerd[1595]: time="2026-03-12T01:37:04.648744232Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:37:04.650390 containerd[1595]: time="2026-03-12T01:37:04.650295957Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 578.284225ms" Mar 12 01:37:04.650390 containerd[1595]: time="2026-03-12T01:37:04.650367681Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Mar 12 01:37:04.651917 containerd[1595]: time="2026-03-12T01:37:04.651833368Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Mar 12 01:37:05.157057 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3658241396.mount: Deactivated successfully. 
Mar 12 01:37:07.873931 kernel: hrtimer: interrupt took 3657169 ns Mar 12 01:37:08.536237 containerd[1595]: time="2026-03-12T01:37:08.534480625Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:37:08.540472 containerd[1595]: time="2026-03-12T01:37:08.539807561Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718840" Mar 12 01:37:08.542284 containerd[1595]: time="2026-03-12T01:37:08.542117084Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:37:08.546399 containerd[1595]: time="2026-03-12T01:37:08.546187114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:37:08.547879 containerd[1595]: time="2026-03-12T01:37:08.547726679Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 3.895846073s" Mar 12 01:37:08.547879 containerd[1595]: time="2026-03-12T01:37:08.547785789Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Mar 12 01:37:09.636913 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 12 01:37:09.652977 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 01:37:09.885291 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 01:37:09.906724 (kubelet)[2204]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 12 01:37:10.165218 kubelet[2204]: E0312 01:37:10.164990 2204 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 12 01:37:10.170052 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 12 01:37:10.171563 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 12 01:37:12.554205 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 01:37:12.575098 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 01:37:12.646256 systemd[1]: Reloading requested from client PID 2222 ('systemctl') (unit session-7.scope)... Mar 12 01:37:12.646313 systemd[1]: Reloading... Mar 12 01:37:12.820772 zram_generator::config[2264]: No configuration found. Mar 12 01:37:13.076900 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 12 01:37:13.181024 systemd[1]: Reloading finished in 533 ms. 
Mar 12 01:37:13.287822 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 12 01:37:13.287988 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 12 01:37:13.288589 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 01:37:13.302011 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 01:37:13.604072 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 01:37:13.624394 (kubelet)[2320]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 12 01:37:13.856388 kubelet[2320]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 12 01:37:13.856388 kubelet[2320]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 12 01:37:13.856388 kubelet[2320]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 12 01:37:13.857196 kubelet[2320]: I0312 01:37:13.856483 2320 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 12 01:37:14.510455 kubelet[2320]: I0312 01:37:14.510292 2320 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 12 01:37:14.510455 kubelet[2320]: I0312 01:37:14.510356 2320 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 12 01:37:14.511003 kubelet[2320]: I0312 01:37:14.510838 2320 server.go:956] "Client rotation is on, will bootstrap in background" Mar 12 01:37:14.545551 kubelet[2320]: E0312 01:37:14.545405 2320 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.145:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.145:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 12 01:37:14.547044 kubelet[2320]: I0312 01:37:14.546946 2320 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 12 01:37:14.558174 kubelet[2320]: E0312 01:37:14.558083 2320 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 12 01:37:14.558174 kubelet[2320]: I0312 01:37:14.558152 2320 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 12 01:37:14.565468 kubelet[2320]: I0312 01:37:14.565373 2320 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 12 01:37:14.567046 kubelet[2320]: I0312 01:37:14.566944 2320 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 12 01:37:14.567182 kubelet[2320]: I0312 01:37:14.567007 2320 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Mar 12 01:37:14.567182 kubelet[2320]: I0312 01:37:14.567166 2320 topology_manager.go:138] "Creating topology manager with none policy" Mar 12 01:37:14.567182 kubelet[2320]: I0312 01:37:14.567176 2320 container_manager_linux.go:303] "Creating device plugin manager" Mar 12 01:37:14.567366 kubelet[2320]: I0312 01:37:14.567330 2320 state_mem.go:36] "Initialized new in-memory state store" Mar 12 01:37:14.575495 kubelet[2320]: I0312 01:37:14.575372 2320 kubelet.go:480] "Attempting to sync node with API server" Mar 12 01:37:14.575495 kubelet[2320]: I0312 01:37:14.575485 2320 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 12 01:37:14.575602 kubelet[2320]: I0312 01:37:14.575544 2320 kubelet.go:386] "Adding apiserver pod source" Mar 12 01:37:14.577999 kubelet[2320]: I0312 01:37:14.577940 2320 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 12 01:37:14.587715 kubelet[2320]: I0312 01:37:14.587381 2320 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 12 01:37:14.588682 kubelet[2320]: I0312 01:37:14.588499 2320 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 12 01:37:14.591869 kubelet[2320]: E0312 01:37:14.591795 2320 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.145:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.145:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 12 
01:37:14.592956 kubelet[2320]: E0312 01:37:14.592890 2320 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.145:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.145:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 12 01:37:14.594194 kubelet[2320]: W0312 01:37:14.594137 2320 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 12 01:37:14.601451 kubelet[2320]: I0312 01:37:14.601378 2320 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 12 01:37:14.601551 kubelet[2320]: I0312 01:37:14.601509 2320 server.go:1289] "Started kubelet" Mar 12 01:37:14.602011 kubelet[2320]: I0312 01:37:14.601734 2320 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 12 01:37:14.602837 kubelet[2320]: I0312 01:37:14.602758 2320 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 12 01:37:14.602888 kubelet[2320]: I0312 01:37:14.602848 2320 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 12 01:37:14.611661 kubelet[2320]: I0312 01:37:14.611215 2320 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 12 01:37:14.616012 kubelet[2320]: I0312 01:37:14.613316 2320 server.go:317] "Adding debug handlers to kubelet server" Mar 12 01:37:14.617776 kubelet[2320]: E0312 01:37:14.616521 2320 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.145:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.145:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189bf44022ecdcef default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-12 01:37:14.601454831 +0000 UTC m=+0.957409741,LastTimestamp:2026-03-12 01:37:14.601454831 +0000 UTC m=+0.957409741,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 12 01:37:14.618392 kubelet[2320]: I0312 01:37:14.618277 2320 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 12 01:37:14.620457 kubelet[2320]: E0312 01:37:14.620441 2320 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 12 01:37:14.624585 kubelet[2320]: I0312 01:37:14.624554 2320 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 12 01:37:14.624935 kubelet[2320]: I0312 01:37:14.624922 2320 reconciler.go:26] "Reconciler: start to sync state" Mar 12 01:37:14.625062 kubelet[2320]: I0312 01:37:14.625021 2320 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 12 01:37:14.626331 kubelet[2320]: E0312 01:37:14.626306 2320 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 12 01:37:14.626485 kubelet[2320]: E0312 01:37:14.626303 2320 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.145:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.145:6443: connect: connection refused" interval="200ms" Mar 12 01:37:14.627077 kubelet[2320]: I0312 01:37:14.627051 2320 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 12 01:37:14.628110 kubelet[2320]: E0312 01:37:14.627174 2320 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.145:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.145:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 12 01:37:14.631731 kubelet[2320]: I0312 01:37:14.628968 2320 factory.go:223] Registration of the containerd container factory successfully Mar 12 01:37:14.631731 kubelet[2320]: I0312 01:37:14.628988 2320 factory.go:223] Registration of the systemd container factory successfully Mar 12 01:37:14.670252 kubelet[2320]: I0312 01:37:14.670210 2320 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 12 01:37:14.674064 kubelet[2320]: I0312 01:37:14.674025 2320 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 12 01:37:14.674064 kubelet[2320]: I0312 01:37:14.674068 2320 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 12 01:37:14.674148 kubelet[2320]: I0312 01:37:14.674088 2320 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
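The lease controller above reports it will retry at a 200ms interval; later entries show the interval doubling (400ms, 800ms, then 1.6s) for as long as the API server at 10.0.0.145:6443 refuses connections. A minimal sketch of that wait-for-apiserver pattern, assuming a plain TCP probe with a doubling delay rather than the kubelet's actual client code:

```python
import socket
import time

def wait_for_apiserver(host: str, port: int, initial: float = 0.2,
                       cap: float = 7.0, timeout: float = 60.0) -> bool:
    """Probe host:port with a doubling retry interval until a TCP connection succeeds.

    Illustrative only: it mirrors the interval escalation visible in the log
    (200ms -> 400ms -> 800ms -> 1.6s), not the kubelet's lease controller.
    """
    deadline = time.monotonic() + timeout
    interval = initial
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=2):
                return True
        except OSError as err:  # e.g. "connection refused" while the apiserver is still starting
            print(f"connect {host}:{port} failed: {err}; retrying in {interval:.1f}s")
        time.sleep(interval)
        interval = min(interval * 2, cap)
    return False

if __name__ == "__main__":
    # Endpoint taken from the log above; adjust for the cluster at hand.
    print("apiserver reachable:", wait_for_apiserver("10.0.0.145", 6443))
```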
Mar 12 01:37:14.674148 kubelet[2320]: I0312 01:37:14.674095 2320 kubelet.go:2436] "Starting kubelet main sync loop" Mar 12 01:37:14.674181 kubelet[2320]: E0312 01:37:14.674140 2320 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 12 01:37:14.676680 kubelet[2320]: E0312 01:37:14.676343 2320 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.145:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.145:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 12 01:37:14.680526 kubelet[2320]: I0312 01:37:14.680499 2320 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 12 01:37:14.680851 kubelet[2320]: I0312 01:37:14.680762 2320 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 12 01:37:14.680851 kubelet[2320]: I0312 01:37:14.680849 2320 state_mem.go:36] "Initialized new in-memory state store" Mar 12 01:37:14.683536 kubelet[2320]: I0312 01:37:14.683463 2320 policy_none.go:49] "None policy: Start" Mar 12 01:37:14.683536 kubelet[2320]: I0312 01:37:14.683515 2320 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 12 01:37:14.683536 kubelet[2320]: I0312 01:37:14.683537 2320 state_mem.go:35] "Initializing new in-memory state store" Mar 12 01:37:14.691779 kubelet[2320]: E0312 01:37:14.691708 2320 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 12 01:37:14.692045 kubelet[2320]: I0312 01:37:14.691987 2320 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 12 01:37:14.692101 kubelet[2320]: I0312 01:37:14.692035 2320 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 12 01:37:14.694783 kubelet[2320]: I0312 01:37:14.694736 2320 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 12 01:37:14.695815 kubelet[2320]: E0312 01:37:14.695772 2320 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 12 01:37:14.695854 kubelet[2320]: E0312 01:37:14.695841 2320 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 12 01:37:14.802207 kubelet[2320]: E0312 01:37:14.795127 2320 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 01:37:14.802207 kubelet[2320]: I0312 01:37:14.795795 2320 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 12 01:37:14.802207 kubelet[2320]: E0312 01:37:14.800915 2320 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.145:6443/api/v1/nodes\": dial tcp 10.0.0.145:6443: connect: connection refused" node="localhost" Mar 12 01:37:14.805564 kubelet[2320]: E0312 01:37:14.805465 2320 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 01:37:14.807735 kubelet[2320]: E0312 01:37:14.807603 2320 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 01:37:14.827552 kubelet[2320]: E0312 01:37:14.827377 2320 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.145:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.145:6443: connect: connection refused" interval="400ms" Mar 12 01:37:14.926686 kubelet[2320]: I0312 01:37:14.926531 2320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e944e4cb17af904786c3a2e01e298498-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e944e4cb17af904786c3a2e01e298498\") " pod="kube-system/kube-scheduler-localhost" Mar 12 01:37:14.926686 kubelet[2320]: I0312 01:37:14.926615 2320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/70b5038d1ef90b7036c1477c5d697d9a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"70b5038d1ef90b7036c1477c5d697d9a\") " pod="kube-system/kube-apiserver-localhost" Mar 12 01:37:14.927556 kubelet[2320]: I0312 01:37:14.926739 2320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/70b5038d1ef90b7036c1477c5d697d9a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"70b5038d1ef90b7036c1477c5d697d9a\") " pod="kube-system/kube-apiserver-localhost" Mar 12 01:37:14.927556 kubelet[2320]: I0312 01:37:14.926771 2320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/70b5038d1ef90b7036c1477c5d697d9a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"70b5038d1ef90b7036c1477c5d697d9a\") " pod="kube-system/kube-apiserver-localhost" Mar 12 01:37:14.927556 kubelet[2320]: I0312 01:37:14.926803 2320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " 
pod="kube-system/kube-controller-manager-localhost" Mar 12 01:37:14.927556 kubelet[2320]: I0312 01:37:14.926830 2320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:37:14.927556 kubelet[2320]: I0312 01:37:14.926863 2320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:37:14.927814 kubelet[2320]: I0312 01:37:14.927023 2320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:37:14.927814 kubelet[2320]: I0312 01:37:14.927121 2320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:37:15.008471 kubelet[2320]: I0312 01:37:15.008336 2320 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 12 01:37:15.009987 kubelet[2320]: E0312 01:37:15.009722 2320 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.145:6443/api/v1/nodes\": dial tcp 10.0.0.145:6443: connect: connection refused" node="localhost" Mar 12 01:37:15.101021 kubelet[2320]: E0312 01:37:15.100373 2320 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:15.102730 containerd[1595]: time="2026-03-12T01:37:15.102558720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:70b5038d1ef90b7036c1477c5d697d9a,Namespace:kube-system,Attempt:0,}" Mar 12 01:37:15.106990 kubelet[2320]: E0312 01:37:15.106910 2320 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:15.107705 containerd[1595]: time="2026-03-12T01:37:15.107592561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8747e1f8a49a618fbc1324a8fe2d3754,Namespace:kube-system,Attempt:0,}" Mar 12 01:37:15.109191 kubelet[2320]: E0312 01:37:15.109092 2320 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:15.109782 containerd[1595]: time="2026-03-12T01:37:15.109469455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e944e4cb17af904786c3a2e01e298498,Namespace:kube-system,Attempt:0,}" Mar 12 01:37:15.229505 kubelet[2320]: E0312 
01:37:15.229257 2320 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.145:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.145:6443: connect: connection refused" interval="800ms" Mar 12 01:37:15.412327 kubelet[2320]: I0312 01:37:15.412102 2320 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 12 01:37:15.412683 kubelet[2320]: E0312 01:37:15.412554 2320 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.145:6443/api/v1/nodes\": dial tcp 10.0.0.145:6443: connect: connection refused" node="localhost" Mar 12 01:37:15.604921 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount842701717.mount: Deactivated successfully. Mar 12 01:37:15.616109 containerd[1595]: time="2026-03-12T01:37:15.616024920Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 01:37:15.618378 containerd[1595]: time="2026-03-12T01:37:15.618287764Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 12 01:37:15.619972 containerd[1595]: time="2026-03-12T01:37:15.619896795Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 01:37:15.621106 containerd[1595]: time="2026-03-12T01:37:15.621077270Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 01:37:15.622289 containerd[1595]: time="2026-03-12T01:37:15.622194679Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 01:37:15.623371 containerd[1595]: time="2026-03-12T01:37:15.623334298Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 12 01:37:15.624708 containerd[1595]: time="2026-03-12T01:37:15.624661106Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Mar 12 01:37:15.630549 containerd[1595]: time="2026-03-12T01:37:15.628510939Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 01:37:15.630549 containerd[1595]: time="2026-03-12T01:37:15.630285432Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 522.525548ms" Mar 12 01:37:15.633613 containerd[1595]: time="2026-03-12T01:37:15.633540473Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 524.014342ms" Mar 12 01:37:15.635093 containerd[1595]: time="2026-03-12T01:37:15.635026421Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 532.251888ms" Mar 12 01:37:15.878251 kubelet[2320]: E0312 01:37:15.878144 2320 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.145:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.145:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 12 01:37:15.968063 containerd[1595]: time="2026-03-12T01:37:15.967591495Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:37:15.968063 containerd[1595]: time="2026-03-12T01:37:15.967706550Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:37:15.968063 containerd[1595]: time="2026-03-12T01:37:15.967721258Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:37:15.968063 containerd[1595]: time="2026-03-12T01:37:15.967859306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:37:15.971922 containerd[1595]: time="2026-03-12T01:37:15.969085435Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:37:15.971922 containerd[1595]: time="2026-03-12T01:37:15.969142532Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:37:15.971922 containerd[1595]: time="2026-03-12T01:37:15.969165725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:37:15.971922 containerd[1595]: time="2026-03-12T01:37:15.969317849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:37:15.998679 containerd[1595]: time="2026-03-12T01:37:15.996488861Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:37:15.998679 containerd[1595]: time="2026-03-12T01:37:15.996606821Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:37:15.998679 containerd[1595]: time="2026-03-12T01:37:15.996735742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:37:15.998679 containerd[1595]: time="2026-03-12T01:37:15.997089893Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:37:16.030600 kubelet[2320]: E0312 01:37:16.030492 2320 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.145:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.145:6443: connect: connection refused" interval="1.6s" Mar 12 01:37:16.098353 kubelet[2320]: E0312 01:37:16.098276 2320 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.145:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.145:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 12 01:37:16.106822 kubelet[2320]: E0312 01:37:16.106767 2320 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.145:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.145:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 12 01:37:16.157497 kubelet[2320]: E0312 01:37:16.157318 2320 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.145:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.145:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 12 01:37:16.161356 containerd[1595]: time="2026-03-12T01:37:16.161215296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:70b5038d1ef90b7036c1477c5d697d9a,Namespace:kube-system,Attempt:0,} returns sandbox id \"772b88a51ebdcf8cb5555d64410af3f743e3733ed6aae9be59990b05b770a3dd\"" Mar 12 01:37:16.163497 kubelet[2320]: E0312 01:37:16.163477 2320 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:16.176028 containerd[1595]: time="2026-03-12T01:37:16.175964207Z" level=info msg="CreateContainer within sandbox \"772b88a51ebdcf8cb5555d64410af3f743e3733ed6aae9be59990b05b770a3dd\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 12 01:37:16.178690 containerd[1595]: time="2026-03-12T01:37:16.178667848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8747e1f8a49a618fbc1324a8fe2d3754,Namespace:kube-system,Attempt:0,} returns sandbox id \"e75fbdce6ca5718239aee63ebb66cd5e6c5e8326d1f8fbfcdaf8405eb58a16c5\"" Mar 12 01:37:16.180060 kubelet[2320]: E0312 01:37:16.179957 2320 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:16.184877 containerd[1595]: time="2026-03-12T01:37:16.184825301Z" level=info msg="CreateContainer within sandbox \"e75fbdce6ca5718239aee63ebb66cd5e6c5e8326d1f8fbfcdaf8405eb58a16c5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 12 01:37:16.198814 containerd[1595]: time="2026-03-12T01:37:16.198772533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e944e4cb17af904786c3a2e01e298498,Namespace:kube-system,Attempt:0,} returns sandbox id \"6675e2a23684f538a1df0bd15673ec892ec546a60ddad639fc26bb58ca8b7c7a\"" Mar 
12 01:37:16.200173 kubelet[2320]: E0312 01:37:16.200052 2320 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:16.204792 containerd[1595]: time="2026-03-12T01:37:16.204744555Z" level=info msg="CreateContainer within sandbox \"772b88a51ebdcf8cb5555d64410af3f743e3733ed6aae9be59990b05b770a3dd\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"998267453b1624edebc6768c1fd29becde3fb7ac6e2c458f31cc3c16e2acb6d6\"" Mar 12 01:37:16.205542 containerd[1595]: time="2026-03-12T01:37:16.205516290Z" level=info msg="StartContainer for \"998267453b1624edebc6768c1fd29becde3fb7ac6e2c458f31cc3c16e2acb6d6\"" Mar 12 01:37:16.207016 containerd[1595]: time="2026-03-12T01:37:16.206996485Z" level=info msg="CreateContainer within sandbox \"6675e2a23684f538a1df0bd15673ec892ec546a60ddad639fc26bb58ca8b7c7a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 12 01:37:16.214281 kubelet[2320]: I0312 01:37:16.214260 2320 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 12 01:37:16.214857 kubelet[2320]: E0312 01:37:16.214740 2320 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.145:6443/api/v1/nodes\": dial tcp 10.0.0.145:6443: connect: connection refused" node="localhost" Mar 12 01:37:16.218133 containerd[1595]: time="2026-03-12T01:37:16.218104667Z" level=info msg="CreateContainer within sandbox \"e75fbdce6ca5718239aee63ebb66cd5e6c5e8326d1f8fbfcdaf8405eb58a16c5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ddae0f3f442492a8d87a66cfcd58b84a8781ce15921b92d359a3539c8a33f025\"" Mar 12 01:37:16.218579 containerd[1595]: time="2026-03-12T01:37:16.218561511Z" level=info msg="StartContainer for \"ddae0f3f442492a8d87a66cfcd58b84a8781ce15921b92d359a3539c8a33f025\"" Mar 12 01:37:16.230989 containerd[1595]: time="2026-03-12T01:37:16.230902505Z" level=info msg="CreateContainer within sandbox \"6675e2a23684f538a1df0bd15673ec892ec546a60ddad639fc26bb58ca8b7c7a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ac53b5556970bb1a92dd812f03da4ca8d03a17797a1fafdac8e185ecfed308f2\"" Mar 12 01:37:16.232725 containerd[1595]: time="2026-03-12T01:37:16.231772667Z" level=info msg="StartContainer for \"ac53b5556970bb1a92dd812f03da4ca8d03a17797a1fafdac8e185ecfed308f2\"" Mar 12 01:37:16.380177 containerd[1595]: time="2026-03-12T01:37:16.380081793Z" level=info msg="StartContainer for \"998267453b1624edebc6768c1fd29becde3fb7ac6e2c458f31cc3c16e2acb6d6\" returns successfully" Mar 12 01:37:16.420074 containerd[1595]: time="2026-03-12T01:37:16.419522593Z" level=info msg="StartContainer for \"ddae0f3f442492a8d87a66cfcd58b84a8781ce15921b92d359a3539c8a33f025\" returns successfully" Mar 12 01:37:16.447144 containerd[1595]: time="2026-03-12T01:37:16.447058547Z" level=info msg="StartContainer for \"ac53b5556970bb1a92dd812f03da4ca8d03a17797a1fafdac8e185ecfed308f2\" returns successfully" Mar 12 01:37:16.689092 kubelet[2320]: E0312 01:37:16.688787 2320 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 01:37:16.689092 kubelet[2320]: E0312 01:37:16.688967 2320 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 
01:37:16.695827 kubelet[2320]: E0312 01:37:16.695737 2320 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 01:37:16.695940 kubelet[2320]: E0312 01:37:16.695896 2320 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:16.698257 kubelet[2320]: E0312 01:37:16.698177 2320 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 01:37:16.699664 kubelet[2320]: E0312 01:37:16.698367 2320 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:17.703887 kubelet[2320]: E0312 01:37:17.703778 2320 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 01:37:17.704385 kubelet[2320]: E0312 01:37:17.704045 2320 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:17.704385 kubelet[2320]: E0312 01:37:17.704328 2320 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 01:37:17.704504 kubelet[2320]: E0312 01:37:17.704482 2320 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:17.817560 kubelet[2320]: I0312 01:37:17.816945 2320 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 12 01:37:18.706442 kubelet[2320]: E0312 01:37:18.706330 2320 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 01:37:18.706965 kubelet[2320]: E0312 01:37:18.706585 2320 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:19.183688 kubelet[2320]: E0312 01:37:19.183533 2320 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 01:37:19.184227 kubelet[2320]: E0312 01:37:19.184102 2320 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:19.337772 kubelet[2320]: E0312 01:37:19.337705 2320 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 12 01:37:19.545692 kubelet[2320]: I0312 01:37:19.543926 2320 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 12 01:37:19.545692 kubelet[2320]: E0312 01:37:19.543970 2320 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Mar 12 01:37:19.621087 kubelet[2320]: I0312 01:37:19.621026 2320 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" 
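The repeated "Nameserver limits exceeded" errors mean the host's resolv.conf lists more nameservers than the kubelet will pass through: only the first three are applied (here 1.1.1.1, 1.0.0.1 and 8.8.8.8) and the rest are dropped. A small check for the same condition, assuming a glibc-style /etc/resolv.conf on the node:

```python
from pathlib import Path

MAX_NAMESERVERS = 3  # the resolver limit the kubelet warning refers to

def check_resolv_conf(path: str = "/etc/resolv.conf") -> None:
    """Print the nameservers that would be applied and any that would be omitted."""
    nameservers = [
        parts[1]
        for line in Path(path).read_text().splitlines()
        if (parts := line.split()) and len(parts) >= 2 and parts[0] == "nameserver"
    ]
    applied, omitted = nameservers[:MAX_NAMESERVERS], nameservers[MAX_NAMESERVERS:]
    print("applied nameserver line:", " ".join(applied))
    if omitted:
        print("omitted (over the limit):", " ".join(omitted))

if __name__ == "__main__":
    check_resolv_conf()
```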
Mar 12 01:37:19.646709 kubelet[2320]: E0312 01:37:19.643711 2320 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Mar 12 01:37:19.646709 kubelet[2320]: I0312 01:37:19.643742 2320 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 12 01:37:19.648046 kubelet[2320]: E0312 01:37:19.647937 2320 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Mar 12 01:37:19.648046 kubelet[2320]: I0312 01:37:19.647987 2320 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 12 01:37:19.662696 kubelet[2320]: E0312 01:37:19.662575 2320 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Mar 12 01:37:19.681954 kubelet[2320]: I0312 01:37:19.681897 2320 apiserver.go:52] "Watching apiserver" Mar 12 01:37:19.725555 kubelet[2320]: I0312 01:37:19.725465 2320 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 12 01:37:21.673767 systemd[1]: Reloading requested from client PID 2603 ('systemctl') (unit session-7.scope)... Mar 12 01:37:21.673870 systemd[1]: Reloading... Mar 12 01:37:21.752699 zram_generator::config[2645]: No configuration found. Mar 12 01:37:21.867436 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 12 01:37:21.899036 kubelet[2320]: I0312 01:37:21.898945 2320 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 12 01:37:21.906589 kubelet[2320]: E0312 01:37:21.906507 2320 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:21.949271 systemd[1]: Reloading finished in 274 ms. Mar 12 01:37:21.993451 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 01:37:22.015573 systemd[1]: kubelet.service: Deactivated successfully. Mar 12 01:37:22.016183 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 01:37:22.029972 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 01:37:22.217292 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 01:37:22.222445 (kubelet)[2697]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 12 01:37:22.274938 kubelet[2697]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 12 01:37:22.274938 kubelet[2697]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
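At this point the first kubelet (PID 2320) is stopped during the systemd reload and a new instance (PID 2697) starts, printing the same deprecation warnings again: --container-runtime-endpoint, --pod-infra-container-image and --volume-plugin-dir are still passed as flags, even though the first and last messages say those settings should live in the file given to --config. A small, hypothetical helper that extracts such flags from a journal dump like this one so they can be reviewed and migrated:

```python
import re
import sys

# Matches the kubelet's own warning format seen in this journal, e.g.
# 'Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file ...'
DEPRECATED = re.compile(r"Flag (--[\w-]+) has been deprecated")

def deprecated_flags(lines):
    """Collect the unique deprecated kubelet flags mentioned in a journal dump."""
    found = []
    for line in lines:
        m = DEPRECATED.search(line)
        if m and m.group(1) not in found:
            found.append(m.group(1))
    return found

if __name__ == "__main__":
    # Usage: journalctl -u kubelet | python find_deprecated_flags.py
    for flag in deprecated_flags(sys.stdin):
        print(f"{flag}: check whether this setting belongs in the file passed to --config")
```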
Mar 12 01:37:22.274938 kubelet[2697]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 12 01:37:22.275534 kubelet[2697]: I0312 01:37:22.274997 2697 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 12 01:37:22.281802 kubelet[2697]: I0312 01:37:22.281767 2697 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 12 01:37:22.281802 kubelet[2697]: I0312 01:37:22.281797 2697 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 12 01:37:22.281997 kubelet[2697]: I0312 01:37:22.281963 2697 server.go:956] "Client rotation is on, will bootstrap in background" Mar 12 01:37:22.283455 kubelet[2697]: I0312 01:37:22.283326 2697 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 12 01:37:22.286109 kubelet[2697]: I0312 01:37:22.286037 2697 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 12 01:37:22.297504 kubelet[2697]: E0312 01:37:22.297400 2697 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 12 01:37:22.297504 kubelet[2697]: I0312 01:37:22.297488 2697 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 12 01:37:22.307132 kubelet[2697]: I0312 01:37:22.307046 2697 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 12 01:37:22.308211 kubelet[2697]: I0312 01:37:22.308080 2697 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 12 01:37:22.308428 kubelet[2697]: I0312 01:37:22.308143 2697 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Mar 12 01:37:22.308564 kubelet[2697]: I0312 01:37:22.308430 2697 topology_manager.go:138] "Creating topology manager with none policy" Mar 12 01:37:22.308564 kubelet[2697]: I0312 01:37:22.308446 2697 container_manager_linux.go:303] "Creating device plugin manager" Mar 12 01:37:22.308564 kubelet[2697]: I0312 01:37:22.308515 2697 state_mem.go:36] "Initialized new in-memory state store" Mar 12 01:37:22.308941 kubelet[2697]: I0312 01:37:22.308880 2697 kubelet.go:480] "Attempting to sync node with API server" Mar 12 01:37:22.308941 kubelet[2697]: I0312 01:37:22.308924 2697 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 12 01:37:22.309020 kubelet[2697]: I0312 01:37:22.308963 2697 kubelet.go:386] "Adding apiserver pod source" Mar 12 01:37:22.309020 kubelet[2697]: I0312 01:37:22.308979 2697 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 12 01:37:22.314849 kubelet[2697]: I0312 01:37:22.314798 2697 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 12 01:37:22.316718 kubelet[2697]: I0312 01:37:22.316137 2697 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 12 01:37:22.322673 kubelet[2697]: I0312 01:37:22.322574 2697 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 12 01:37:22.322831 kubelet[2697]: I0312 01:37:22.322784 2697 server.go:1289] "Started kubelet" Mar 12 01:37:22.322859 kubelet[2697]: I0312 01:37:22.322835 2697 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 12 01:37:22.323026 kubelet[2697]: I0312 
01:37:22.322887 2697 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 12 01:37:22.324717 kubelet[2697]: I0312 01:37:22.324154 2697 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 12 01:37:22.326258 kubelet[2697]: I0312 01:37:22.325399 2697 server.go:317] "Adding debug handlers to kubelet server" Mar 12 01:37:22.328976 kubelet[2697]: I0312 01:37:22.328834 2697 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 12 01:37:22.329376 kubelet[2697]: I0312 01:37:22.329331 2697 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 12 01:37:22.330100 kubelet[2697]: I0312 01:37:22.329884 2697 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 12 01:37:22.330930 kubelet[2697]: I0312 01:37:22.330841 2697 factory.go:223] Registration of the systemd container factory successfully Mar 12 01:37:22.331085 kubelet[2697]: I0312 01:37:22.331026 2697 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 12 01:37:22.331148 kubelet[2697]: I0312 01:37:22.331064 2697 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 12 01:37:22.334675 kubelet[2697]: I0312 01:37:22.333937 2697 reconciler.go:26] "Reconciler: start to sync state" Mar 12 01:37:22.337132 kubelet[2697]: I0312 01:37:22.337071 2697 factory.go:223] Registration of the containerd container factory successfully Mar 12 01:37:22.339899 kubelet[2697]: E0312 01:37:22.339806 2697 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 12 01:37:22.362713 kubelet[2697]: I0312 01:37:22.362481 2697 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 12 01:37:22.364250 kubelet[2697]: I0312 01:37:22.364230 2697 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 12 01:37:22.364352 kubelet[2697]: I0312 01:37:22.364340 2697 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 12 01:37:22.364514 kubelet[2697]: I0312 01:37:22.364424 2697 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 12 01:37:22.364599 kubelet[2697]: I0312 01:37:22.364587 2697 kubelet.go:2436] "Starting kubelet main sync loop" Mar 12 01:37:22.364822 kubelet[2697]: E0312 01:37:22.364756 2697 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 12 01:37:22.413881 kubelet[2697]: I0312 01:37:22.413853 2697 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 12 01:37:22.413881 kubelet[2697]: I0312 01:37:22.413873 2697 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 12 01:37:22.413990 kubelet[2697]: I0312 01:37:22.413894 2697 state_mem.go:36] "Initialized new in-memory state store" Mar 12 01:37:22.414103 kubelet[2697]: I0312 01:37:22.414070 2697 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 12 01:37:22.414152 kubelet[2697]: I0312 01:37:22.414107 2697 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 12 01:37:22.414152 kubelet[2697]: I0312 01:37:22.414150 2697 policy_none.go:49] "None policy: Start" Mar 12 01:37:22.414245 kubelet[2697]: I0312 01:37:22.414160 2697 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 12 01:37:22.414245 kubelet[2697]: I0312 01:37:22.414204 2697 state_mem.go:35] "Initializing new in-memory state store" Mar 12 01:37:22.414384 kubelet[2697]: I0312 01:37:22.414285 2697 state_mem.go:75] "Updated machine memory state" Mar 12 01:37:22.416502 kubelet[2697]: E0312 01:37:22.416438 2697 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 12 01:37:22.416979 kubelet[2697]: I0312 01:37:22.416918 2697 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 12 01:37:22.417204 kubelet[2697]: I0312 01:37:22.417133 2697 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 12 01:37:22.418113 kubelet[2697]: I0312 01:37:22.418055 2697 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 12 01:37:22.420424 kubelet[2697]: E0312 01:37:22.420090 2697 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 12 01:37:22.466145 kubelet[2697]: I0312 01:37:22.466056 2697 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 12 01:37:22.466278 kubelet[2697]: I0312 01:37:22.466150 2697 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 12 01:37:22.466417 kubelet[2697]: I0312 01:37:22.466323 2697 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 12 01:37:22.473943 kubelet[2697]: E0312 01:37:22.473802 2697 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 12 01:37:22.524725 kubelet[2697]: I0312 01:37:22.524706 2697 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 12 01:37:22.532829 kubelet[2697]: I0312 01:37:22.532794 2697 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Mar 12 01:37:22.533116 kubelet[2697]: I0312 01:37:22.532918 2697 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 12 01:37:22.535724 kubelet[2697]: I0312 01:37:22.534781 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:37:22.535724 kubelet[2697]: I0312 01:37:22.534823 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:37:22.535724 kubelet[2697]: I0312 01:37:22.534859 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:37:22.535724 kubelet[2697]: I0312 01:37:22.534882 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:37:22.535724 kubelet[2697]: I0312 01:37:22.534898 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/70b5038d1ef90b7036c1477c5d697d9a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"70b5038d1ef90b7036c1477c5d697d9a\") " pod="kube-system/kube-apiserver-localhost" Mar 12 01:37:22.535854 kubelet[2697]: I0312 01:37:22.534914 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: 
\"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:37:22.535854 kubelet[2697]: I0312 01:37:22.534929 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e944e4cb17af904786c3a2e01e298498-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e944e4cb17af904786c3a2e01e298498\") " pod="kube-system/kube-scheduler-localhost" Mar 12 01:37:22.535854 kubelet[2697]: I0312 01:37:22.534941 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/70b5038d1ef90b7036c1477c5d697d9a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"70b5038d1ef90b7036c1477c5d697d9a\") " pod="kube-system/kube-apiserver-localhost" Mar 12 01:37:22.535854 kubelet[2697]: I0312 01:37:22.534953 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/70b5038d1ef90b7036c1477c5d697d9a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"70b5038d1ef90b7036c1477c5d697d9a\") " pod="kube-system/kube-apiserver-localhost" Mar 12 01:37:22.773086 kubelet[2697]: E0312 01:37:22.772976 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:22.773086 kubelet[2697]: E0312 01:37:22.773038 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:22.774910 kubelet[2697]: E0312 01:37:22.774858 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:23.310351 kubelet[2697]: I0312 01:37:23.310275 2697 apiserver.go:52] "Watching apiserver" Mar 12 01:37:23.332073 kubelet[2697]: I0312 01:37:23.331949 2697 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 12 01:37:23.382408 kubelet[2697]: E0312 01:37:23.382323 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:23.383353 kubelet[2697]: E0312 01:37:23.383289 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:23.383353 kubelet[2697]: I0312 01:37:23.383337 2697 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 12 01:37:23.390353 kubelet[2697]: E0312 01:37:23.390266 2697 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 12 01:37:23.390666 kubelet[2697]: E0312 01:37:23.390543 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:23.423617 kubelet[2697]: I0312 01:37:23.422862 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.422843664 podStartE2EDuration="1.422843664s" 
podCreationTimestamp="2026-03-12 01:37:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 01:37:23.412818192 +0000 UTC m=+1.185282832" watchObservedRunningTime="2026-03-12 01:37:23.422843664 +0000 UTC m=+1.195308324" Mar 12 01:37:23.437671 kubelet[2697]: I0312 01:37:23.434845 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.434831809 podStartE2EDuration="1.434831809s" podCreationTimestamp="2026-03-12 01:37:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 01:37:23.423037476 +0000 UTC m=+1.195502116" watchObservedRunningTime="2026-03-12 01:37:23.434831809 +0000 UTC m=+1.207296449" Mar 12 01:37:23.446725 kubelet[2697]: I0312 01:37:23.446686 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.446673065 podStartE2EDuration="2.446673065s" podCreationTimestamp="2026-03-12 01:37:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 01:37:23.435145598 +0000 UTC m=+1.207610239" watchObservedRunningTime="2026-03-12 01:37:23.446673065 +0000 UTC m=+1.219137705" Mar 12 01:37:24.383841 kubelet[2697]: E0312 01:37:24.383714 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:24.383841 kubelet[2697]: E0312 01:37:24.383830 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:25.386197 kubelet[2697]: E0312 01:37:25.386159 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:26.805617 kubelet[2697]: E0312 01:37:26.802751 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:27.389499 kubelet[2697]: E0312 01:37:27.389391 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:27.559495 kubelet[2697]: E0312 01:37:27.559096 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:28.390729 kubelet[2697]: E0312 01:37:28.390544 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:28.523847 kubelet[2697]: I0312 01:37:28.523791 2697 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 12 01:37:28.524260 containerd[1595]: time="2026-03-12T01:37:28.524181203Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
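In the pod_startup_latency_tracker entries just above, the logged podStartSLOduration for kube-scheduler-localhost equals observedRunningTime minus podCreationTimestamp (01:37:23.422843664 minus 01:37:22), since both image-pull timestamps are the zero value. A quick check of that arithmetic using the timestamps copied from the log:

```python
from datetime import datetime, timezone

def startup_duration(created: str, observed_running: str) -> float:
    """Seconds between pod creation and the first observed running time.

    The kubelet logs nanosecond precision; Python's %f accepts at most six
    fractional digits, so the fraction is trimmed to microseconds first.
    """
    def parse(ts: str) -> datetime:
        base, _, frac = ts.partition(".")
        frac = (frac + "000000")[:6]
        return datetime.strptime(f"{base}.{frac}", "%Y-%m-%d %H:%M:%S.%f").replace(tzinfo=timezone.utc)

    return (parse(observed_running) - parse(created)).total_seconds()

if __name__ == "__main__":
    # Timestamps copied from the kube-scheduler-localhost entry above (UTC).
    print(startup_duration("2026-03-12 01:37:22", "2026-03-12 01:37:23.422843664"))
    # -> 1.422843, matching the logged podStartSLOduration of ~1.4228s
```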
Mar 12 01:37:28.524581 kubelet[2697]: I0312 01:37:28.524324 2697 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 12 01:37:29.683701 kubelet[2697]: I0312 01:37:29.683595 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4211388c-c9c7-439c-b9c4-3d5821cb9b62-kube-proxy\") pod \"kube-proxy-rxpzm\" (UID: \"4211388c-c9c7-439c-b9c4-3d5821cb9b62\") " pod="kube-system/kube-proxy-rxpzm" Mar 12 01:37:29.683701 kubelet[2697]: I0312 01:37:29.683701 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4211388c-c9c7-439c-b9c4-3d5821cb9b62-xtables-lock\") pod \"kube-proxy-rxpzm\" (UID: \"4211388c-c9c7-439c-b9c4-3d5821cb9b62\") " pod="kube-system/kube-proxy-rxpzm" Mar 12 01:37:29.684280 kubelet[2697]: I0312 01:37:29.683718 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4211388c-c9c7-439c-b9c4-3d5821cb9b62-lib-modules\") pod \"kube-proxy-rxpzm\" (UID: \"4211388c-c9c7-439c-b9c4-3d5821cb9b62\") " pod="kube-system/kube-proxy-rxpzm" Mar 12 01:37:29.684280 kubelet[2697]: I0312 01:37:29.683732 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd8dx\" (UniqueName: \"kubernetes.io/projected/4211388c-c9c7-439c-b9c4-3d5821cb9b62-kube-api-access-xd8dx\") pod \"kube-proxy-rxpzm\" (UID: \"4211388c-c9c7-439c-b9c4-3d5821cb9b62\") " pod="kube-system/kube-proxy-rxpzm" Mar 12 01:37:29.784036 kubelet[2697]: I0312 01:37:29.783910 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpvlf\" (UniqueName: \"kubernetes.io/projected/e9c3af44-cb2a-4936-87e7-8dfa07eedcca-kube-api-access-kpvlf\") pod \"tigera-operator-6bf85f8dd-q8q8k\" (UID: \"e9c3af44-cb2a-4936-87e7-8dfa07eedcca\") " pod="tigera-operator/tigera-operator-6bf85f8dd-q8q8k" Mar 12 01:37:29.784036 kubelet[2697]: I0312 01:37:29.783983 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e9c3af44-cb2a-4936-87e7-8dfa07eedcca-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-q8q8k\" (UID: \"e9c3af44-cb2a-4936-87e7-8dfa07eedcca\") " pod="tigera-operator/tigera-operator-6bf85f8dd-q8q8k" Mar 12 01:37:29.922192 kubelet[2697]: E0312 01:37:29.922096 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:29.923176 containerd[1595]: time="2026-03-12T01:37:29.922856959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rxpzm,Uid:4211388c-c9c7-439c-b9c4-3d5821cb9b62,Namespace:kube-system,Attempt:0,}" Mar 12 01:37:29.956208 containerd[1595]: time="2026-03-12T01:37:29.955972800Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:37:29.956208 containerd[1595]: time="2026-03-12T01:37:29.956038027Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:37:29.956208 containerd[1595]: time="2026-03-12T01:37:29.956062502Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:37:29.956499 containerd[1595]: time="2026-03-12T01:37:29.956354290Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:37:29.992383 containerd[1595]: time="2026-03-12T01:37:29.992304933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-q8q8k,Uid:e9c3af44-cb2a-4936-87e7-8dfa07eedcca,Namespace:tigera-operator,Attempt:0,}" Mar 12 01:37:30.016212 containerd[1595]: time="2026-03-12T01:37:30.016139953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rxpzm,Uid:4211388c-c9c7-439c-b9c4-3d5821cb9b62,Namespace:kube-system,Attempt:0,} returns sandbox id \"e4835daee88720416f3160db5c035fce94198dcab05e524ef41de25962471657\"" Mar 12 01:37:30.017196 kubelet[2697]: E0312 01:37:30.017171 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:30.023716 containerd[1595]: time="2026-03-12T01:37:30.023484419Z" level=info msg="CreateContainer within sandbox \"e4835daee88720416f3160db5c035fce94198dcab05e524ef41de25962471657\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 12 01:37:30.033523 containerd[1595]: time="2026-03-12T01:37:30.033324161Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:37:30.033523 containerd[1595]: time="2026-03-12T01:37:30.033391814Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:37:30.033523 containerd[1595]: time="2026-03-12T01:37:30.033411330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:37:30.033845 containerd[1595]: time="2026-03-12T01:37:30.033596324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:37:30.050435 containerd[1595]: time="2026-03-12T01:37:30.050372072Z" level=info msg="CreateContainer within sandbox \"e4835daee88720416f3160db5c035fce94198dcab05e524ef41de25962471657\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"78dfe6ee27fa3cc122d15558fd04bd712b0f51d1887c3af0c22309578f8fd834\"" Mar 12 01:37:30.052412 containerd[1595]: time="2026-03-12T01:37:30.051765053Z" level=info msg="StartContainer for \"78dfe6ee27fa3cc122d15558fd04bd712b0f51d1887c3af0c22309578f8fd834\"" Mar 12 01:37:30.119671 containerd[1595]: time="2026-03-12T01:37:30.119497221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-q8q8k,Uid:e9c3af44-cb2a-4936-87e7-8dfa07eedcca,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"2cd36599541ec3c8be7293daf5780cf8adbef903b6a8d4a6901cc71cbaa1719d\"" Mar 12 01:37:30.126174 containerd[1595]: time="2026-03-12T01:37:30.126009845Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 12 01:37:30.162476 containerd[1595]: time="2026-03-12T01:37:30.162295521Z" level=info msg="StartContainer for \"78dfe6ee27fa3cc122d15558fd04bd712b0f51d1887c3af0c22309578f8fd834\" returns successfully" Mar 12 01:37:30.396857 kubelet[2697]: E0312 01:37:30.396727 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:30.406412 kubelet[2697]: I0312 01:37:30.406329 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rxpzm" podStartSLOduration=1.406316251 podStartE2EDuration="1.406316251s" podCreationTimestamp="2026-03-12 01:37:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 01:37:30.406220579 +0000 UTC m=+8.178685218" watchObservedRunningTime="2026-03-12 01:37:30.406316251 +0000 UTC m=+8.178780901" Mar 12 01:37:30.884547 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1692074190.mount: Deactivated successfully. 
Mar 12 01:37:32.274173 containerd[1595]: time="2026-03-12T01:37:32.274046064Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:37:32.275398 containerd[1595]: time="2026-03-12T01:37:32.275358039Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Mar 12 01:37:32.276828 containerd[1595]: time="2026-03-12T01:37:32.276755138Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:37:32.279604 containerd[1595]: time="2026-03-12T01:37:32.279555699Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:37:32.280663 containerd[1595]: time="2026-03-12T01:37:32.280566730Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 2.154466s" Mar 12 01:37:32.280663 containerd[1595]: time="2026-03-12T01:37:32.280609738Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Mar 12 01:37:32.285554 containerd[1595]: time="2026-03-12T01:37:32.285399294Z" level=info msg="CreateContainer within sandbox \"2cd36599541ec3c8be7293daf5780cf8adbef903b6a8d4a6901cc71cbaa1719d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 12 01:37:32.299903 containerd[1595]: time="2026-03-12T01:37:32.299827416Z" level=info msg="CreateContainer within sandbox \"2cd36599541ec3c8be7293daf5780cf8adbef903b6a8d4a6901cc71cbaa1719d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f0f271fc5737d34c31a9afa4259ca851c5a0f69fd44ae175e288958f2ca4bf68\"" Mar 12 01:37:32.300553 containerd[1595]: time="2026-03-12T01:37:32.300470266Z" level=info msg="StartContainer for \"f0f271fc5737d34c31a9afa4259ca851c5a0f69fd44ae175e288958f2ca4bf68\"" Mar 12 01:37:32.372284 containerd[1595]: time="2026-03-12T01:37:32.372231913Z" level=info msg="StartContainer for \"f0f271fc5737d34c31a9afa4259ca851c5a0f69fd44ae175e288958f2ca4bf68\" returns successfully" Mar 12 01:37:33.046940 update_engine[1579]: I20260312 01:37:33.046793 1579 update_attempter.cc:509] Updating boot flags... Mar 12 01:37:33.083770 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (3059) Mar 12 01:37:33.135865 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (3059) Mar 12 01:37:34.819186 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f0f271fc5737d34c31a9afa4259ca851c5a0f69fd44ae175e288958f2ca4bf68-rootfs.mount: Deactivated successfully. 
Mar 12 01:37:35.025069 kubelet[2697]: E0312 01:37:35.024943 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:35.066936 containerd[1595]: time="2026-03-12T01:37:35.064521181Z" level=info msg="shim disconnected" id=f0f271fc5737d34c31a9afa4259ca851c5a0f69fd44ae175e288958f2ca4bf68 namespace=k8s.io Mar 12 01:37:35.067824 containerd[1595]: time="2026-03-12T01:37:35.066978852Z" level=warning msg="cleaning up after shim disconnected" id=f0f271fc5737d34c31a9afa4259ca851c5a0f69fd44ae175e288958f2ca4bf68 namespace=k8s.io Mar 12 01:37:35.067824 containerd[1595]: time="2026-03-12T01:37:35.066997816Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 12 01:37:35.072325 kubelet[2697]: I0312 01:37:35.072155 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-q8q8k" podStartSLOduration=3.912553091 podStartE2EDuration="6.072138168s" podCreationTimestamp="2026-03-12 01:37:29 +0000 UTC" firstStartedPulling="2026-03-12 01:37:30.121806951 +0000 UTC m=+7.894271591" lastFinishedPulling="2026-03-12 01:37:32.281392028 +0000 UTC m=+10.053856668" observedRunningTime="2026-03-12 01:37:32.421902242 +0000 UTC m=+10.194366883" watchObservedRunningTime="2026-03-12 01:37:35.072138168 +0000 UTC m=+12.844603350" Mar 12 01:37:35.413382 kubelet[2697]: E0312 01:37:35.413222 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:35.413382 kubelet[2697]: I0312 01:37:35.413261 2697 scope.go:117] "RemoveContainer" containerID="f0f271fc5737d34c31a9afa4259ca851c5a0f69fd44ae175e288958f2ca4bf68" Mar 12 01:37:35.416037 containerd[1595]: time="2026-03-12T01:37:35.415615800Z" level=info msg="CreateContainer within sandbox \"2cd36599541ec3c8be7293daf5780cf8adbef903b6a8d4a6901cc71cbaa1719d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Mar 12 01:37:35.432975 containerd[1595]: time="2026-03-12T01:37:35.432894236Z" level=info msg="CreateContainer within sandbox \"2cd36599541ec3c8be7293daf5780cf8adbef903b6a8d4a6901cc71cbaa1719d\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"f59235c0df26651525f3ac19d9aaeed6ac34af030974a47fd0cdaf8b4c8bf3de\"" Mar 12 01:37:35.436084 containerd[1595]: time="2026-03-12T01:37:35.435010537Z" level=info msg="StartContainer for \"f59235c0df26651525f3ac19d9aaeed6ac34af030974a47fd0cdaf8b4c8bf3de\"" Mar 12 01:37:35.523048 containerd[1595]: time="2026-03-12T01:37:35.522958265Z" level=info msg="StartContainer for \"f59235c0df26651525f3ac19d9aaeed6ac34af030974a47fd0cdaf8b4c8bf3de\" returns successfully" Mar 12 01:37:38.939877 sudo[1786]: pam_unix(sudo:session): session closed for user root Mar 12 01:37:38.943291 sshd[1780]: pam_unix(sshd:session): session closed for user core Mar 12 01:37:38.949599 systemd[1]: sshd@6-10.0.0.145:22-10.0.0.1:53846.service: Deactivated successfully. Mar 12 01:37:38.956043 systemd[1]: session-7.scope: Deactivated successfully. Mar 12 01:37:38.957124 systemd-logind[1574]: Session 7 logged out. Waiting for processes to exit. Mar 12 01:37:38.959758 systemd-logind[1574]: Removed session 7. 
Mar 12 01:37:43.262815 kubelet[2697]: I0312 01:37:43.262741 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/87b36192-ef80-4e21-9c2c-5e1470f8bbdd-typha-certs\") pod \"calico-typha-8685b4bf8d-fs6lm\" (UID: \"87b36192-ef80-4e21-9c2c-5e1470f8bbdd\") " pod="calico-system/calico-typha-8685b4bf8d-fs6lm" Mar 12 01:37:43.263417 kubelet[2697]: I0312 01:37:43.262817 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zmd7\" (UniqueName: \"kubernetes.io/projected/87b36192-ef80-4e21-9c2c-5e1470f8bbdd-kube-api-access-7zmd7\") pod \"calico-typha-8685b4bf8d-fs6lm\" (UID: \"87b36192-ef80-4e21-9c2c-5e1470f8bbdd\") " pod="calico-system/calico-typha-8685b4bf8d-fs6lm" Mar 12 01:37:43.263417 kubelet[2697]: I0312 01:37:43.262884 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87b36192-ef80-4e21-9c2c-5e1470f8bbdd-tigera-ca-bundle\") pod \"calico-typha-8685b4bf8d-fs6lm\" (UID: \"87b36192-ef80-4e21-9c2c-5e1470f8bbdd\") " pod="calico-system/calico-typha-8685b4bf8d-fs6lm" Mar 12 01:37:43.365398 kubelet[2697]: I0312 01:37:43.363233 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/592a07b2-5548-4ea1-b3bd-a5838552b7b6-cni-log-dir\") pod \"calico-node-cpt86\" (UID: \"592a07b2-5548-4ea1-b3bd-a5838552b7b6\") " pod="calico-system/calico-node-cpt86" Mar 12 01:37:43.365398 kubelet[2697]: I0312 01:37:43.363317 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/592a07b2-5548-4ea1-b3bd-a5838552b7b6-flexvol-driver-host\") pod \"calico-node-cpt86\" (UID: \"592a07b2-5548-4ea1-b3bd-a5838552b7b6\") " pod="calico-system/calico-node-cpt86" Mar 12 01:37:43.365398 kubelet[2697]: I0312 01:37:43.363337 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/592a07b2-5548-4ea1-b3bd-a5838552b7b6-var-lib-calico\") pod \"calico-node-cpt86\" (UID: \"592a07b2-5548-4ea1-b3bd-a5838552b7b6\") " pod="calico-system/calico-node-cpt86" Mar 12 01:37:43.365398 kubelet[2697]: I0312 01:37:43.363352 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbb4c\" (UniqueName: \"kubernetes.io/projected/592a07b2-5548-4ea1-b3bd-a5838552b7b6-kube-api-access-zbb4c\") pod \"calico-node-cpt86\" (UID: \"592a07b2-5548-4ea1-b3bd-a5838552b7b6\") " pod="calico-system/calico-node-cpt86" Mar 12 01:37:43.365398 kubelet[2697]: I0312 01:37:43.363367 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/592a07b2-5548-4ea1-b3bd-a5838552b7b6-cni-bin-dir\") pod \"calico-node-cpt86\" (UID: \"592a07b2-5548-4ea1-b3bd-a5838552b7b6\") " pod="calico-system/calico-node-cpt86" Mar 12 01:37:43.365970 kubelet[2697]: I0312 01:37:43.363381 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/592a07b2-5548-4ea1-b3bd-a5838552b7b6-sys-fs\") pod \"calico-node-cpt86\" (UID: \"592a07b2-5548-4ea1-b3bd-a5838552b7b6\") " 
pod="calico-system/calico-node-cpt86" Mar 12 01:37:43.365970 kubelet[2697]: I0312 01:37:43.363405 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/592a07b2-5548-4ea1-b3bd-a5838552b7b6-cni-net-dir\") pod \"calico-node-cpt86\" (UID: \"592a07b2-5548-4ea1-b3bd-a5838552b7b6\") " pod="calico-system/calico-node-cpt86" Mar 12 01:37:43.365970 kubelet[2697]: I0312 01:37:43.363418 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/592a07b2-5548-4ea1-b3bd-a5838552b7b6-node-certs\") pod \"calico-node-cpt86\" (UID: \"592a07b2-5548-4ea1-b3bd-a5838552b7b6\") " pod="calico-system/calico-node-cpt86" Mar 12 01:37:43.365970 kubelet[2697]: I0312 01:37:43.363431 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/592a07b2-5548-4ea1-b3bd-a5838552b7b6-bpffs\") pod \"calico-node-cpt86\" (UID: \"592a07b2-5548-4ea1-b3bd-a5838552b7b6\") " pod="calico-system/calico-node-cpt86" Mar 12 01:37:43.365970 kubelet[2697]: I0312 01:37:43.363446 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/592a07b2-5548-4ea1-b3bd-a5838552b7b6-lib-modules\") pod \"calico-node-cpt86\" (UID: \"592a07b2-5548-4ea1-b3bd-a5838552b7b6\") " pod="calico-system/calico-node-cpt86" Mar 12 01:37:43.365970 kubelet[2697]: I0312 01:37:43.363458 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/592a07b2-5548-4ea1-b3bd-a5838552b7b6-policysync\") pod \"calico-node-cpt86\" (UID: \"592a07b2-5548-4ea1-b3bd-a5838552b7b6\") " pod="calico-system/calico-node-cpt86" Mar 12 01:37:43.366728 kubelet[2697]: I0312 01:37:43.363470 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/592a07b2-5548-4ea1-b3bd-a5838552b7b6-xtables-lock\") pod \"calico-node-cpt86\" (UID: \"592a07b2-5548-4ea1-b3bd-a5838552b7b6\") " pod="calico-system/calico-node-cpt86" Mar 12 01:37:43.366728 kubelet[2697]: I0312 01:37:43.363489 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/592a07b2-5548-4ea1-b3bd-a5838552b7b6-var-run-calico\") pod \"calico-node-cpt86\" (UID: \"592a07b2-5548-4ea1-b3bd-a5838552b7b6\") " pod="calico-system/calico-node-cpt86" Mar 12 01:37:43.366728 kubelet[2697]: I0312 01:37:43.363511 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/592a07b2-5548-4ea1-b3bd-a5838552b7b6-nodeproc\") pod \"calico-node-cpt86\" (UID: \"592a07b2-5548-4ea1-b3bd-a5838552b7b6\") " pod="calico-system/calico-node-cpt86" Mar 12 01:37:43.366728 kubelet[2697]: I0312 01:37:43.363524 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/592a07b2-5548-4ea1-b3bd-a5838552b7b6-tigera-ca-bundle\") pod \"calico-node-cpt86\" (UID: \"592a07b2-5548-4ea1-b3bd-a5838552b7b6\") " pod="calico-system/calico-node-cpt86" Mar 12 01:37:43.376726 kubelet[2697]: E0312 01:37:43.376686 2697 pod_workers.go:1301] "Error 
syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-84r2m" podUID="beb8f589-ac04-487b-b459-4f523ba3d20d" Mar 12 01:37:43.465022 kubelet[2697]: I0312 01:37:43.464860 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/beb8f589-ac04-487b-b459-4f523ba3d20d-socket-dir\") pod \"csi-node-driver-84r2m\" (UID: \"beb8f589-ac04-487b-b459-4f523ba3d20d\") " pod="calico-system/csi-node-driver-84r2m" Mar 12 01:37:43.465022 kubelet[2697]: I0312 01:37:43.464944 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7599f\" (UniqueName: \"kubernetes.io/projected/beb8f589-ac04-487b-b459-4f523ba3d20d-kube-api-access-7599f\") pod \"csi-node-driver-84r2m\" (UID: \"beb8f589-ac04-487b-b459-4f523ba3d20d\") " pod="calico-system/csi-node-driver-84r2m" Mar 12 01:37:43.465022 kubelet[2697]: I0312 01:37:43.464999 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/beb8f589-ac04-487b-b459-4f523ba3d20d-registration-dir\") pod \"csi-node-driver-84r2m\" (UID: \"beb8f589-ac04-487b-b459-4f523ba3d20d\") " pod="calico-system/csi-node-driver-84r2m" Mar 12 01:37:43.465287 kubelet[2697]: I0312 01:37:43.465130 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/beb8f589-ac04-487b-b459-4f523ba3d20d-varrun\") pod \"csi-node-driver-84r2m\" (UID: \"beb8f589-ac04-487b-b459-4f523ba3d20d\") " pod="calico-system/csi-node-driver-84r2m" Mar 12 01:37:43.465287 kubelet[2697]: I0312 01:37:43.465196 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/beb8f589-ac04-487b-b459-4f523ba3d20d-kubelet-dir\") pod \"csi-node-driver-84r2m\" (UID: \"beb8f589-ac04-487b-b459-4f523ba3d20d\") " pod="calico-system/csi-node-driver-84r2m" Mar 12 01:37:43.467856 kubelet[2697]: E0312 01:37:43.467820 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:37:43.467856 kubelet[2697]: W0312 01:37:43.467847 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:37:43.468078 kubelet[2697]: E0312 01:37:43.467927 2697 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 01:37:43.471884 kubelet[2697]: E0312 01:37:43.471852 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:37:43.471937 kubelet[2697]: W0312 01:37:43.471886 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:37:43.471937 kubelet[2697]: E0312 01:37:43.471904 2697 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:37:43.476521 kubelet[2697]: E0312 01:37:43.476470 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:37:43.476521 kubelet[2697]: W0312 01:37:43.476488 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:37:43.476521 kubelet[2697]: E0312 01:37:43.476506 2697 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:37:43.506228 kubelet[2697]: E0312 01:37:43.506122 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:43.507006 containerd[1595]: time="2026-03-12T01:37:43.506869676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8685b4bf8d-fs6lm,Uid:87b36192-ef80-4e21-9c2c-5e1470f8bbdd,Namespace:calico-system,Attempt:0,}" Mar 12 01:37:43.543675 containerd[1595]: time="2026-03-12T01:37:43.543365613Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:37:43.543675 containerd[1595]: time="2026-03-12T01:37:43.543468802Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:37:43.543675 containerd[1595]: time="2026-03-12T01:37:43.543492998Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:37:43.543893 containerd[1595]: time="2026-03-12T01:37:43.543794011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:37:43.564917 containerd[1595]: time="2026-03-12T01:37:43.564838280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cpt86,Uid:592a07b2-5548-4ea1-b3bd-a5838552b7b6,Namespace:calico-system,Attempt:0,}" Mar 12 01:37:43.566145 kubelet[2697]: E0312 01:37:43.565993 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:37:43.566145 kubelet[2697]: W0312 01:37:43.566036 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:37:43.566145 kubelet[2697]: E0312 01:37:43.566058 2697 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:37:43.566712 kubelet[2697]: E0312 01:37:43.566600 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:37:43.566712 kubelet[2697]: W0312 01:37:43.566690 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:37:43.566958 kubelet[2697]: E0312 01:37:43.566712 2697 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:37:43.567284 kubelet[2697]: E0312 01:37:43.567268 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:37:43.567284 kubelet[2697]: W0312 01:37:43.567282 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:37:43.567417 kubelet[2697]: E0312 01:37:43.567296 2697 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:37:43.568272 kubelet[2697]: E0312 01:37:43.568175 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:37:43.568272 kubelet[2697]: W0312 01:37:43.568206 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:37:43.568272 kubelet[2697]: E0312 01:37:43.568222 2697 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 01:37:43.569803 kubelet[2697]: E0312 01:37:43.569715 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:37:43.569871 kubelet[2697]: W0312 01:37:43.569755 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:37:43.569871 kubelet[2697]: E0312 01:37:43.569838 2697 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:37:43.570461 kubelet[2697]: E0312 01:37:43.570396 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:37:43.570820 kubelet[2697]: W0312 01:37:43.570719 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:37:43.570820 kubelet[2697]: E0312 01:37:43.570751 2697 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:37:43.571303 kubelet[2697]: E0312 01:37:43.571152 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:37:43.571303 kubelet[2697]: W0312 01:37:43.571162 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:37:43.571303 kubelet[2697]: E0312 01:37:43.571172 2697 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:37:43.571467 kubelet[2697]: E0312 01:37:43.571374 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:37:43.571467 kubelet[2697]: W0312 01:37:43.571382 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:37:43.571467 kubelet[2697]: E0312 01:37:43.571390 2697 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:37:43.571868 kubelet[2697]: E0312 01:37:43.571812 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:37:43.571868 kubelet[2697]: W0312 01:37:43.571827 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:37:43.571868 kubelet[2697]: E0312 01:37:43.571836 2697 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 01:37:43.572444 kubelet[2697]: E0312 01:37:43.572255 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:37:43.572444 kubelet[2697]: W0312 01:37:43.572283 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:37:43.572444 kubelet[2697]: E0312 01:37:43.572293 2697 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:37:43.573041 kubelet[2697]: E0312 01:37:43.572987 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:37:43.573041 kubelet[2697]: W0312 01:37:43.573005 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:37:43.573208 kubelet[2697]: E0312 01:37:43.573128 2697 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:37:43.573759 kubelet[2697]: E0312 01:37:43.573604 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:37:43.573759 kubelet[2697]: W0312 01:37:43.573683 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:37:43.573759 kubelet[2697]: E0312 01:37:43.573694 2697 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:37:43.574176 kubelet[2697]: E0312 01:37:43.574121 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:37:43.574176 kubelet[2697]: W0312 01:37:43.574165 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:37:43.574176 kubelet[2697]: E0312 01:37:43.574179 2697 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:37:43.574810 kubelet[2697]: E0312 01:37:43.574742 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:37:43.574872 kubelet[2697]: W0312 01:37:43.574821 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:37:43.574872 kubelet[2697]: E0312 01:37:43.574833 2697 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 01:37:43.575827 kubelet[2697]: E0312 01:37:43.575726 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:37:43.575827 kubelet[2697]: W0312 01:37:43.575736 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:37:43.575827 kubelet[2697]: E0312 01:37:43.575746 2697 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:37:43.576294 kubelet[2697]: E0312 01:37:43.576221 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:37:43.576294 kubelet[2697]: W0312 01:37:43.576258 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:37:43.576294 kubelet[2697]: E0312 01:37:43.576272 2697 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:37:43.576840 kubelet[2697]: E0312 01:37:43.576777 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:37:43.576840 kubelet[2697]: W0312 01:37:43.576821 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:37:43.576840 kubelet[2697]: E0312 01:37:43.576836 2697 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:37:43.578916 kubelet[2697]: E0312 01:37:43.578861 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:37:43.578916 kubelet[2697]: W0312 01:37:43.578900 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:37:43.578916 kubelet[2697]: E0312 01:37:43.578914 2697 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:37:43.579799 kubelet[2697]: E0312 01:37:43.579719 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:37:43.579799 kubelet[2697]: W0312 01:37:43.579755 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:37:43.579799 kubelet[2697]: E0312 01:37:43.579769 2697 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 01:37:43.583843 kubelet[2697]: E0312 01:37:43.580128 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:37:43.583843 kubelet[2697]: W0312 01:37:43.580140 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:37:43.583843 kubelet[2697]: E0312 01:37:43.580152 2697 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:37:43.583843 kubelet[2697]: E0312 01:37:43.581480 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:37:43.583843 kubelet[2697]: W0312 01:37:43.581491 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:37:43.583843 kubelet[2697]: E0312 01:37:43.581504 2697 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:37:43.583843 kubelet[2697]: E0312 01:37:43.581951 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:37:43.583843 kubelet[2697]: W0312 01:37:43.581963 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:37:43.583843 kubelet[2697]: E0312 01:37:43.581975 2697 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:37:43.583843 kubelet[2697]: E0312 01:37:43.582821 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:37:43.584212 kubelet[2697]: W0312 01:37:43.582832 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:37:43.584212 kubelet[2697]: E0312 01:37:43.582844 2697 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:37:43.584212 kubelet[2697]: E0312 01:37:43.583213 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:37:43.584212 kubelet[2697]: W0312 01:37:43.583225 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:37:43.584212 kubelet[2697]: E0312 01:37:43.583238 2697 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 01:37:43.584212 kubelet[2697]: E0312 01:37:43.583831 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:37:43.584212 kubelet[2697]: W0312 01:37:43.583841 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:37:43.584212 kubelet[2697]: E0312 01:37:43.583853 2697 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:37:43.598021 kubelet[2697]: E0312 01:37:43.597943 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:37:43.598316 kubelet[2697]: W0312 01:37:43.598163 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:37:43.598316 kubelet[2697]: E0312 01:37:43.598192 2697 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:37:43.616699 containerd[1595]: time="2026-03-12T01:37:43.616026439Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:37:43.616699 containerd[1595]: time="2026-03-12T01:37:43.616137063Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:37:43.616699 containerd[1595]: time="2026-03-12T01:37:43.616165967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:37:43.616699 containerd[1595]: time="2026-03-12T01:37:43.616311324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:37:43.643586 containerd[1595]: time="2026-03-12T01:37:43.643436778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8685b4bf8d-fs6lm,Uid:87b36192-ef80-4e21-9c2c-5e1470f8bbdd,Namespace:calico-system,Attempt:0,} returns sandbox id \"df52486321c41e1b7b8f3ba9a538915d133a18589587f636d464a0d80d93e327\"" Mar 12 01:37:43.644666 kubelet[2697]: E0312 01:37:43.644530 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:43.647451 containerd[1595]: time="2026-03-12T01:37:43.647371224Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 12 01:37:43.684295 containerd[1595]: time="2026-03-12T01:37:43.684173534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cpt86,Uid:592a07b2-5548-4ea1-b3bd-a5838552b7b6,Namespace:calico-system,Attempt:0,} returns sandbox id \"198cff193f7ba4b574cc8e606b26c25cf51d2bc808080fa2085bfd6981d84d1f\"" Mar 12 01:37:44.853557 containerd[1595]: time="2026-03-12T01:37:44.853495739Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:37:44.855419 containerd[1595]: time="2026-03-12T01:37:44.855294851Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Mar 12 01:37:44.857669 containerd[1595]: time="2026-03-12T01:37:44.857511591Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:37:44.861027 containerd[1595]: time="2026-03-12T01:37:44.860951177Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:37:44.862111 containerd[1595]: time="2026-03-12T01:37:44.862004160Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 1.214584748s" Mar 12 01:37:44.862111 containerd[1595]: time="2026-03-12T01:37:44.862097282Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Mar 12 01:37:44.863611 containerd[1595]: time="2026-03-12T01:37:44.863344642Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 12 01:37:44.880032 containerd[1595]: time="2026-03-12T01:37:44.879991535Z" level=info msg="CreateContainer within sandbox \"df52486321c41e1b7b8f3ba9a538915d133a18589587f636d464a0d80d93e327\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 12 01:37:44.905977 containerd[1595]: time="2026-03-12T01:37:44.905832571Z" level=info msg="CreateContainer within sandbox \"df52486321c41e1b7b8f3ba9a538915d133a18589587f636d464a0d80d93e327\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"2b936c54c29a0d090c6610a94755f651f220e7855d0a984116cb756daac2aba5\"" Mar 12 01:37:44.907040 containerd[1595]: 
time="2026-03-12T01:37:44.906995415Z" level=info msg="StartContainer for \"2b936c54c29a0d090c6610a94755f651f220e7855d0a984116cb756daac2aba5\"" Mar 12 01:37:45.014544 containerd[1595]: time="2026-03-12T01:37:45.014468183Z" level=info msg="StartContainer for \"2b936c54c29a0d090c6610a94755f651f220e7855d0a984116cb756daac2aba5\" returns successfully" Mar 12 01:37:45.373419 kubelet[2697]: E0312 01:37:45.373330 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-84r2m" podUID="beb8f589-ac04-487b-b459-4f523ba3d20d" Mar 12 01:37:45.443001 containerd[1595]: time="2026-03-12T01:37:45.442913388Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:37:45.444199 containerd[1595]: time="2026-03-12T01:37:45.444121578Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Mar 12 01:37:45.446086 containerd[1595]: time="2026-03-12T01:37:45.446010942Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:37:45.449305 containerd[1595]: time="2026-03-12T01:37:45.449199825Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:37:45.450698 containerd[1595]: time="2026-03-12T01:37:45.450360827Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 586.975811ms" Mar 12 01:37:45.450698 containerd[1595]: time="2026-03-12T01:37:45.450510183Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Mar 12 01:37:45.466289 containerd[1595]: time="2026-03-12T01:37:45.466226701Z" level=info msg="CreateContainer within sandbox \"198cff193f7ba4b574cc8e606b26c25cf51d2bc808080fa2085bfd6981d84d1f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 12 01:37:45.486592 containerd[1595]: time="2026-03-12T01:37:45.486479534Z" level=info msg="CreateContainer within sandbox \"198cff193f7ba4b574cc8e606b26c25cf51d2bc808080fa2085bfd6981d84d1f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"318b831d10c3aa569053c3840712cbb7de749e18c1cbfa365277a07cdf490826\"" Mar 12 01:37:45.487321 containerd[1595]: time="2026-03-12T01:37:45.487246169Z" level=info msg="StartContainer for \"318b831d10c3aa569053c3840712cbb7de749e18c1cbfa365277a07cdf490826\"" Mar 12 01:37:45.569578 kubelet[2697]: E0312 01:37:45.569500 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:45.661885 kubelet[2697]: I0312 01:37:45.658922 2697 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-8685b4bf8d-fs6lm" podStartSLOduration=1.376515824 podStartE2EDuration="2.593313254s" podCreationTimestamp="2026-03-12 01:37:43 +0000 UTC" firstStartedPulling="2026-03-12 01:37:43.646364485 +0000 UTC m=+21.418829135" lastFinishedPulling="2026-03-12 01:37:44.863161925 +0000 UTC m=+22.635626565" observedRunningTime="2026-03-12 01:37:45.593127511 +0000 UTC m=+23.365592152" watchObservedRunningTime="2026-03-12 01:37:45.593313254 +0000 UTC m=+23.365777894" Mar 12 01:37:45.664347 containerd[1595]: time="2026-03-12T01:37:45.664301512Z" level=info msg="StartContainer for \"318b831d10c3aa569053c3840712cbb7de749e18c1cbfa365277a07cdf490826\" returns successfully" Mar 12 01:37:45.709895 containerd[1595]: time="2026-03-12T01:37:45.709788098Z" level=info msg="shim disconnected" id=318b831d10c3aa569053c3840712cbb7de749e18c1cbfa365277a07cdf490826 namespace=k8s.io Mar 12 01:37:45.710123 containerd[1595]: time="2026-03-12T01:37:45.709909381Z" level=warning msg="cleaning up after shim disconnected" id=318b831d10c3aa569053c3840712cbb7de749e18c1cbfa365277a07cdf490826 namespace=k8s.io Mar 12 01:37:45.710123 containerd[1595]: time="2026-03-12T01:37:45.709922346Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 12 01:37:46.382293 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-318b831d10c3aa569053c3840712cbb7de749e18c1cbfa365277a07cdf490826-rootfs.mount: Deactivated successfully. Mar 12 01:37:46.671429 kubelet[2697]: I0312 01:37:46.671151 2697 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 01:37:46.672076 kubelet[2697]: E0312 01:37:46.671578 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:46.672528 containerd[1595]: time="2026-03-12T01:37:46.672381913Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 12 01:37:47.365518 kubelet[2697]: E0312 01:37:47.365394 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-84r2m" podUID="beb8f589-ac04-487b-b459-4f523ba3d20d" Mar 12 01:37:49.365898 kubelet[2697]: E0312 01:37:49.365815 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-84r2m" podUID="beb8f589-ac04-487b-b459-4f523ba3d20d" Mar 12 01:37:50.622060 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3385542025.mount: Deactivated successfully. 
Mar 12 01:37:50.788203 containerd[1595]: time="2026-03-12T01:37:50.787442261Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:37:50.791475 containerd[1595]: time="2026-03-12T01:37:50.791434269Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Mar 12 01:37:50.791863 containerd[1595]: time="2026-03-12T01:37:50.791809591Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:37:50.795372 containerd[1595]: time="2026-03-12T01:37:50.795262276Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:37:50.795917 containerd[1595]: time="2026-03-12T01:37:50.795838783Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 4.123388556s" Mar 12 01:37:50.795917 containerd[1595]: time="2026-03-12T01:37:50.795886593Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Mar 12 01:37:50.809417 containerd[1595]: time="2026-03-12T01:37:50.809316667Z" level=info msg="CreateContainer within sandbox \"198cff193f7ba4b574cc8e606b26c25cf51d2bc808080fa2085bfd6981d84d1f\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 12 01:37:50.871144 containerd[1595]: time="2026-03-12T01:37:50.871014657Z" level=info msg="CreateContainer within sandbox \"198cff193f7ba4b574cc8e606b26c25cf51d2bc808080fa2085bfd6981d84d1f\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"7c5f8d68be33b754d852f843766baf64201f246e600efbf7d2081a82b87254fa\"" Mar 12 01:37:50.871779 containerd[1595]: time="2026-03-12T01:37:50.871745931Z" level=info msg="StartContainer for \"7c5f8d68be33b754d852f843766baf64201f246e600efbf7d2081a82b87254fa\"" Mar 12 01:37:50.997613 containerd[1595]: time="2026-03-12T01:37:50.997555608Z" level=info msg="StartContainer for \"7c5f8d68be33b754d852f843766baf64201f246e600efbf7d2081a82b87254fa\" returns successfully" Mar 12 01:37:51.164040 containerd[1595]: time="2026-03-12T01:37:51.163925232Z" level=info msg="shim disconnected" id=7c5f8d68be33b754d852f843766baf64201f246e600efbf7d2081a82b87254fa namespace=k8s.io Mar 12 01:37:51.164040 containerd[1595]: time="2026-03-12T01:37:51.164032911Z" level=warning msg="cleaning up after shim disconnected" id=7c5f8d68be33b754d852f843766baf64201f246e600efbf7d2081a82b87254fa namespace=k8s.io Mar 12 01:37:51.164040 containerd[1595]: time="2026-03-12T01:37:51.164044463Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 12 01:37:51.365543 kubelet[2697]: E0312 01:37:51.365270 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-84r2m" 
podUID="beb8f589-ac04-487b-b459-4f523ba3d20d" Mar 12 01:37:51.623515 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c5f8d68be33b754d852f843766baf64201f246e600efbf7d2081a82b87254fa-rootfs.mount: Deactivated successfully. Mar 12 01:37:51.692351 containerd[1595]: time="2026-03-12T01:37:51.692259981Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 12 01:37:52.549071 kubelet[2697]: I0312 01:37:52.548295 2697 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 01:37:52.549071 kubelet[2697]: E0312 01:37:52.548808 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:52.692599 kubelet[2697]: E0312 01:37:52.692254 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:53.366080 kubelet[2697]: E0312 01:37:53.365974 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-84r2m" podUID="beb8f589-ac04-487b-b459-4f523ba3d20d" Mar 12 01:37:53.459570 containerd[1595]: time="2026-03-12T01:37:53.459471473Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:37:53.460480 containerd[1595]: time="2026-03-12T01:37:53.460436532Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Mar 12 01:37:53.462031 containerd[1595]: time="2026-03-12T01:37:53.461981323Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:37:53.466333 containerd[1595]: time="2026-03-12T01:37:53.466277361Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:37:53.467203 containerd[1595]: time="2026-03-12T01:37:53.467157654Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 1.774610711s" Mar 12 01:37:53.467203 containerd[1595]: time="2026-03-12T01:37:53.467198660Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Mar 12 01:37:53.472711 containerd[1595]: time="2026-03-12T01:37:53.472569542Z" level=info msg="CreateContainer within sandbox \"198cff193f7ba4b574cc8e606b26c25cf51d2bc808080fa2085bfd6981d84d1f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 12 01:37:53.490280 containerd[1595]: time="2026-03-12T01:37:53.490220629Z" level=info msg="CreateContainer within sandbox \"198cff193f7ba4b574cc8e606b26c25cf51d2bc808080fa2085bfd6981d84d1f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id 
\"a23e402ebda431494f7868eb67eb53fef092952a4fe1bbcf6a03901e47d422e9\"" Mar 12 01:37:53.490940 containerd[1595]: time="2026-03-12T01:37:53.490880285Z" level=info msg="StartContainer for \"a23e402ebda431494f7868eb67eb53fef092952a4fe1bbcf6a03901e47d422e9\"" Mar 12 01:37:53.563252 containerd[1595]: time="2026-03-12T01:37:53.563155964Z" level=info msg="StartContainer for \"a23e402ebda431494f7868eb67eb53fef092952a4fe1bbcf6a03901e47d422e9\" returns successfully" Mar 12 01:37:54.249336 containerd[1595]: time="2026-03-12T01:37:54.249207966Z" level=info msg="shim disconnected" id=a23e402ebda431494f7868eb67eb53fef092952a4fe1bbcf6a03901e47d422e9 namespace=k8s.io Mar 12 01:37:54.249336 containerd[1595]: time="2026-03-12T01:37:54.249296790Z" level=warning msg="cleaning up after shim disconnected" id=a23e402ebda431494f7868eb67eb53fef092952a4fe1bbcf6a03901e47d422e9 namespace=k8s.io Mar 12 01:37:54.249336 containerd[1595]: time="2026-03-12T01:37:54.249312029Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 12 01:37:54.284516 kubelet[2697]: I0312 01:37:54.284428 2697 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Mar 12 01:37:54.354693 kubelet[2697]: I0312 01:37:54.353025 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0da1befa-e568-43f6-8333-a51d79629123-config-volume\") pod \"coredns-674b8bbfcf-qhnmx\" (UID: \"0da1befa-e568-43f6-8333-a51d79629123\") " pod="kube-system/coredns-674b8bbfcf-qhnmx" Mar 12 01:37:54.354693 kubelet[2697]: I0312 01:37:54.353082 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e561d6e9-adb9-4958-8e4a-34467004f252-config-volume\") pod \"coredns-674b8bbfcf-7hmdg\" (UID: \"e561d6e9-adb9-4958-8e4a-34467004f252\") " pod="kube-system/coredns-674b8bbfcf-7hmdg" Mar 12 01:37:54.354693 kubelet[2697]: I0312 01:37:54.353124 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a12164ec-a3c1-4b91-bb08-d78e4edbc1ad-config\") pod \"goldmane-5b85766d88-cd8ml\" (UID: \"a12164ec-a3c1-4b91-bb08-d78e4edbc1ad\") " pod="calico-system/goldmane-5b85766d88-cd8ml" Mar 12 01:37:54.354693 kubelet[2697]: I0312 01:37:54.353266 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p2ps\" (UniqueName: \"kubernetes.io/projected/a12164ec-a3c1-4b91-bb08-d78e4edbc1ad-kube-api-access-7p2ps\") pod \"goldmane-5b85766d88-cd8ml\" (UID: \"a12164ec-a3c1-4b91-bb08-d78e4edbc1ad\") " pod="calico-system/goldmane-5b85766d88-cd8ml" Mar 12 01:37:54.354693 kubelet[2697]: I0312 01:37:54.353292 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fb257b0-27b4-4ccb-bab4-86fe3218bc99-tigera-ca-bundle\") pod \"calico-kube-controllers-6c984f8d9-nrkt8\" (UID: \"5fb257b0-27b4-4ccb-bab4-86fe3218bc99\") " pod="calico-system/calico-kube-controllers-6c984f8d9-nrkt8" Mar 12 01:37:54.355096 kubelet[2697]: I0312 01:37:54.353311 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfnsr\" (UniqueName: \"kubernetes.io/projected/0da1befa-e568-43f6-8333-a51d79629123-kube-api-access-mfnsr\") pod \"coredns-674b8bbfcf-qhnmx\" (UID: 
\"0da1befa-e568-43f6-8333-a51d79629123\") " pod="kube-system/coredns-674b8bbfcf-qhnmx" Mar 12 01:37:54.355096 kubelet[2697]: I0312 01:37:54.353325 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ttwh\" (UniqueName: \"kubernetes.io/projected/5fb257b0-27b4-4ccb-bab4-86fe3218bc99-kube-api-access-2ttwh\") pod \"calico-kube-controllers-6c984f8d9-nrkt8\" (UID: \"5fb257b0-27b4-4ccb-bab4-86fe3218bc99\") " pod="calico-system/calico-kube-controllers-6c984f8d9-nrkt8" Mar 12 01:37:54.355096 kubelet[2697]: I0312 01:37:54.353346 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzp4v\" (UniqueName: \"kubernetes.io/projected/e561d6e9-adb9-4958-8e4a-34467004f252-kube-api-access-pzp4v\") pod \"coredns-674b8bbfcf-7hmdg\" (UID: \"e561d6e9-adb9-4958-8e4a-34467004f252\") " pod="kube-system/coredns-674b8bbfcf-7hmdg" Mar 12 01:37:54.355096 kubelet[2697]: I0312 01:37:54.353365 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a12164ec-a3c1-4b91-bb08-d78e4edbc1ad-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-cd8ml\" (UID: \"a12164ec-a3c1-4b91-bb08-d78e4edbc1ad\") " pod="calico-system/goldmane-5b85766d88-cd8ml" Mar 12 01:37:54.355096 kubelet[2697]: I0312 01:37:54.353380 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/a12164ec-a3c1-4b91-bb08-d78e4edbc1ad-goldmane-key-pair\") pod \"goldmane-5b85766d88-cd8ml\" (UID: \"a12164ec-a3c1-4b91-bb08-d78e4edbc1ad\") " pod="calico-system/goldmane-5b85766d88-cd8ml" Mar 12 01:37:54.454013 kubelet[2697]: I0312 01:37:54.453943 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8j279\" (UniqueName: \"kubernetes.io/projected/7120930c-3a55-44b0-911f-6bef14f82bc4-kube-api-access-8j279\") pod \"calico-apiserver-d5bdf55d9-hgfnq\" (UID: \"7120930c-3a55-44b0-911f-6bef14f82bc4\") " pod="calico-system/calico-apiserver-d5bdf55d9-hgfnq" Mar 12 01:37:54.454013 kubelet[2697]: I0312 01:37:54.454021 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/754c305b-d35b-428a-a925-3c62be46c832-whisker-ca-bundle\") pod \"whisker-75b746db9f-2kjgw\" (UID: \"754c305b-d35b-428a-a925-3c62be46c832\") " pod="calico-system/whisker-75b746db9f-2kjgw" Mar 12 01:37:54.454213 kubelet[2697]: I0312 01:37:54.454059 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjmhb\" (UniqueName: \"kubernetes.io/projected/8827ea6d-6039-4f86-96be-28f12dc97ece-kube-api-access-fjmhb\") pod \"calico-apiserver-d5bdf55d9-zlq7p\" (UID: \"8827ea6d-6039-4f86-96be-28f12dc97ece\") " pod="calico-system/calico-apiserver-d5bdf55d9-zlq7p" Mar 12 01:37:54.454213 kubelet[2697]: I0312 01:37:54.454078 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/754c305b-d35b-428a-a925-3c62be46c832-whisker-backend-key-pair\") pod \"whisker-75b746db9f-2kjgw\" (UID: \"754c305b-d35b-428a-a925-3c62be46c832\") " pod="calico-system/whisker-75b746db9f-2kjgw" Mar 12 01:37:54.454213 kubelet[2697]: I0312 01:37:54.454091 2697 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4q899\" (UniqueName: \"kubernetes.io/projected/754c305b-d35b-428a-a925-3c62be46c832-kube-api-access-4q899\") pod \"whisker-75b746db9f-2kjgw\" (UID: \"754c305b-d35b-428a-a925-3c62be46c832\") " pod="calico-system/whisker-75b746db9f-2kjgw" Mar 12 01:37:54.454213 kubelet[2697]: I0312 01:37:54.454136 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8827ea6d-6039-4f86-96be-28f12dc97ece-calico-apiserver-certs\") pod \"calico-apiserver-d5bdf55d9-zlq7p\" (UID: \"8827ea6d-6039-4f86-96be-28f12dc97ece\") " pod="calico-system/calico-apiserver-d5bdf55d9-zlq7p" Mar 12 01:37:54.454574 kubelet[2697]: I0312 01:37:54.454411 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7120930c-3a55-44b0-911f-6bef14f82bc4-calico-apiserver-certs\") pod \"calico-apiserver-d5bdf55d9-hgfnq\" (UID: \"7120930c-3a55-44b0-911f-6bef14f82bc4\") " pod="calico-system/calico-apiserver-d5bdf55d9-hgfnq" Mar 12 01:37:54.454574 kubelet[2697]: I0312 01:37:54.454495 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/754c305b-d35b-428a-a925-3c62be46c832-nginx-config\") pod \"whisker-75b746db9f-2kjgw\" (UID: \"754c305b-d35b-428a-a925-3c62be46c832\") " pod="calico-system/whisker-75b746db9f-2kjgw" Mar 12 01:37:54.489853 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a23e402ebda431494f7868eb67eb53fef092952a4fe1bbcf6a03901e47d422e9-rootfs.mount: Deactivated successfully. Mar 12 01:37:54.628594 kubelet[2697]: E0312 01:37:54.628424 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:54.629242 containerd[1595]: time="2026-03-12T01:37:54.629136467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7hmdg,Uid:e561d6e9-adb9-4958-8e4a-34467004f252,Namespace:kube-system,Attempt:0,}" Mar 12 01:37:54.655016 kubelet[2697]: E0312 01:37:54.654949 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:54.657617 containerd[1595]: time="2026-03-12T01:37:54.656841559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qhnmx,Uid:0da1befa-e568-43f6-8333-a51d79629123,Namespace:kube-system,Attempt:0,}" Mar 12 01:37:54.657617 containerd[1595]: time="2026-03-12T01:37:54.657190096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c984f8d9-nrkt8,Uid:5fb257b0-27b4-4ccb-bab4-86fe3218bc99,Namespace:calico-system,Attempt:0,}" Mar 12 01:37:54.660720 containerd[1595]: time="2026-03-12T01:37:54.660611006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-cd8ml,Uid:a12164ec-a3c1-4b91-bb08-d78e4edbc1ad,Namespace:calico-system,Attempt:0,}" Mar 12 01:37:54.686714 containerd[1595]: time="2026-03-12T01:37:54.686568971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-75b746db9f-2kjgw,Uid:754c305b-d35b-428a-a925-3c62be46c832,Namespace:calico-system,Attempt:0,}" Mar 12 01:37:54.703456 containerd[1595]: time="2026-03-12T01:37:54.702970545Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d5bdf55d9-hgfnq,Uid:7120930c-3a55-44b0-911f-6bef14f82bc4,Namespace:calico-system,Attempt:0,}" Mar 12 01:37:54.703456 containerd[1595]: time="2026-03-12T01:37:54.703026990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d5bdf55d9-zlq7p,Uid:8827ea6d-6039-4f86-96be-28f12dc97ece,Namespace:calico-system,Attempt:0,}" Mar 12 01:37:54.759947 containerd[1595]: time="2026-03-12T01:37:54.759828790Z" level=info msg="CreateContainer within sandbox \"198cff193f7ba4b574cc8e606b26c25cf51d2bc808080fa2085bfd6981d84d1f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 12 01:37:54.812086 containerd[1595]: time="2026-03-12T01:37:54.811983599Z" level=info msg="CreateContainer within sandbox \"198cff193f7ba4b574cc8e606b26c25cf51d2bc808080fa2085bfd6981d84d1f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f7eee00ca783e8f7a5216b6e203366a4dd694d153399e84c57b09e380b6f4c96\"" Mar 12 01:37:54.814609 containerd[1595]: time="2026-03-12T01:37:54.814528362Z" level=info msg="StartContainer for \"f7eee00ca783e8f7a5216b6e203366a4dd694d153399e84c57b09e380b6f4c96\"" Mar 12 01:37:54.930443 containerd[1595]: time="2026-03-12T01:37:54.928803738Z" level=error msg="Failed to destroy network for sandbox \"15ee98230783031b7d6bbfdb3cd4943d9ba25747d016fac25aa8c52728845e97\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:37:54.930443 containerd[1595]: time="2026-03-12T01:37:54.929552397Z" level=error msg="encountered an error cleaning up failed sandbox \"15ee98230783031b7d6bbfdb3cd4943d9ba25747d016fac25aa8c52728845e97\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:37:54.938857 containerd[1595]: time="2026-03-12T01:37:54.938818533Z" level=error msg="Failed to destroy network for sandbox \"54a61a32678590b691211cd51c6ad1a906f74fbde00b221b6fbefe7ac37b5019\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:37:54.940516 containerd[1595]: time="2026-03-12T01:37:54.940493180Z" level=error msg="encountered an error cleaning up failed sandbox \"54a61a32678590b691211cd51c6ad1a906f74fbde00b221b6fbefe7ac37b5019\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:37:54.963917 containerd[1595]: time="2026-03-12T01:37:54.963833257Z" level=info msg="StartContainer for \"f7eee00ca783e8f7a5216b6e203366a4dd694d153399e84c57b09e380b6f4c96\" returns successfully" Mar 12 01:37:54.964730 containerd[1595]: time="2026-03-12T01:37:54.964672394Z" level=error msg="Failed to destroy network for sandbox \"1502714fdfcba3fb41560cb466aa63722e3376d102f1b749c136a16d1badb708\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:37:54.965278 containerd[1595]: time="2026-03-12T01:37:54.965226863Z" level=error 
msg="encountered an error cleaning up failed sandbox \"1502714fdfcba3fb41560cb466aa63722e3376d102f1b749c136a16d1badb708\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:37:54.965323 containerd[1595]: time="2026-03-12T01:37:54.965290591Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qhnmx,Uid:0da1befa-e568-43f6-8333-a51d79629123,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1502714fdfcba3fb41560cb466aa63722e3376d102f1b749c136a16d1badb708\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:37:54.965357 containerd[1595]: time="2026-03-12T01:37:54.965336406Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7hmdg,Uid:e561d6e9-adb9-4958-8e4a-34467004f252,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"54a61a32678590b691211cd51c6ad1a906f74fbde00b221b6fbefe7ac37b5019\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:37:54.967990 containerd[1595]: time="2026-03-12T01:37:54.967934634Z" level=error msg="Failed to destroy network for sandbox \"b9985ffed56d71d590dea0511c336d1f6935e84252655d55fb089f135f2e8ea3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:37:54.968687 containerd[1595]: time="2026-03-12T01:37:54.968456822Z" level=error msg="encountered an error cleaning up failed sandbox \"b9985ffed56d71d590dea0511c336d1f6935e84252655d55fb089f135f2e8ea3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:37:54.968687 containerd[1595]: time="2026-03-12T01:37:54.968509911Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-75b746db9f-2kjgw,Uid:754c305b-d35b-428a-a925-3c62be46c832,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b9985ffed56d71d590dea0511c336d1f6935e84252655d55fb089f135f2e8ea3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:37:54.976699 containerd[1595]: time="2026-03-12T01:37:54.975740157Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c984f8d9-nrkt8,Uid:5fb257b0-27b4-4ccb-bab4-86fe3218bc99,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"15ee98230783031b7d6bbfdb3cd4943d9ba25747d016fac25aa8c52728845e97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:37:54.976699 containerd[1595]: time="2026-03-12T01:37:54.975865299Z" level=error msg="Failed to destroy network for sandbox 
\"431271b3310308fde44be5244bde046b41e6f3c897f378d9dcf06072263a9a71\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:37:54.976699 containerd[1595]: time="2026-03-12T01:37:54.976306207Z" level=error msg="encountered an error cleaning up failed sandbox \"431271b3310308fde44be5244bde046b41e6f3c897f378d9dcf06072263a9a71\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:37:54.976699 containerd[1595]: time="2026-03-12T01:37:54.976337966Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-cd8ml,Uid:a12164ec-a3c1-4b91-bb08-d78e4edbc1ad,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"431271b3310308fde44be5244bde046b41e6f3c897f378d9dcf06072263a9a71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:37:54.978157 kubelet[2697]: E0312 01:37:54.978087 2697 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54a61a32678590b691211cd51c6ad1a906f74fbde00b221b6fbefe7ac37b5019\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:37:54.978281 kubelet[2697]: E0312 01:37:54.978263 2697 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54a61a32678590b691211cd51c6ad1a906f74fbde00b221b6fbefe7ac37b5019\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-7hmdg" Mar 12 01:37:54.978341 kubelet[2697]: E0312 01:37:54.978329 2697 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54a61a32678590b691211cd51c6ad1a906f74fbde00b221b6fbefe7ac37b5019\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-7hmdg" Mar 12 01:37:54.978437 kubelet[2697]: E0312 01:37:54.978415 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-7hmdg_kube-system(e561d6e9-adb9-4958-8e4a-34467004f252)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-7hmdg_kube-system(e561d6e9-adb9-4958-8e4a-34467004f252)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"54a61a32678590b691211cd51c6ad1a906f74fbde00b221b6fbefe7ac37b5019\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-7hmdg" podUID="e561d6e9-adb9-4958-8e4a-34467004f252" Mar 12 01:37:54.978564 kubelet[2697]: E0312 01:37:54.978483 2697 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"431271b3310308fde44be5244bde046b41e6f3c897f378d9dcf06072263a9a71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:37:54.978729 kubelet[2697]: E0312 01:37:54.978713 2697 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"431271b3310308fde44be5244bde046b41e6f3c897f378d9dcf06072263a9a71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-cd8ml" Mar 12 01:37:54.978852 kubelet[2697]: E0312 01:37:54.978835 2697 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"431271b3310308fde44be5244bde046b41e6f3c897f378d9dcf06072263a9a71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-cd8ml" Mar 12 01:37:54.978997 kubelet[2697]: E0312 01:37:54.978978 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-cd8ml_calico-system(a12164ec-a3c1-4b91-bb08-d78e4edbc1ad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-cd8ml_calico-system(a12164ec-a3c1-4b91-bb08-d78e4edbc1ad)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"431271b3310308fde44be5244bde046b41e6f3c897f378d9dcf06072263a9a71\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-cd8ml" podUID="a12164ec-a3c1-4b91-bb08-d78e4edbc1ad" Mar 12 01:37:54.979148 kubelet[2697]: E0312 01:37:54.978509 2697 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9985ffed56d71d590dea0511c336d1f6935e84252655d55fb089f135f2e8ea3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:37:54.979271 kubelet[2697]: E0312 01:37:54.979255 2697 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9985ffed56d71d590dea0511c336d1f6935e84252655d55fb089f135f2e8ea3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-75b746db9f-2kjgw" Mar 12 01:37:54.979359 kubelet[2697]: E0312 01:37:54.979343 2697 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9985ffed56d71d590dea0511c336d1f6935e84252655d55fb089f135f2e8ea3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-75b746db9f-2kjgw" Mar 12 01:37:54.979492 kubelet[2697]: E0312 01:37:54.979474 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-75b746db9f-2kjgw_calico-system(754c305b-d35b-428a-a925-3c62be46c832)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-75b746db9f-2kjgw_calico-system(754c305b-d35b-428a-a925-3c62be46c832)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b9985ffed56d71d590dea0511c336d1f6935e84252655d55fb089f135f2e8ea3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-75b746db9f-2kjgw" podUID="754c305b-d35b-428a-a925-3c62be46c832" Mar 12 01:37:54.979619 containerd[1595]: time="2026-03-12T01:37:54.979483370Z" level=error msg="Failed to destroy network for sandbox \"a25c08cc6562a9f6fdaf22116c7953b74bf527067f10f521f05b5d8cb2e3847e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:37:54.979728 kubelet[2697]: E0312 01:37:54.978523 2697 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15ee98230783031b7d6bbfdb3cd4943d9ba25747d016fac25aa8c52728845e97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:37:54.979863 kubelet[2697]: E0312 01:37:54.979846 2697 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15ee98230783031b7d6bbfdb3cd4943d9ba25747d016fac25aa8c52728845e97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c984f8d9-nrkt8" Mar 12 01:37:54.979998 kubelet[2697]: E0312 01:37:54.979981 2697 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15ee98230783031b7d6bbfdb3cd4943d9ba25747d016fac25aa8c52728845e97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c984f8d9-nrkt8" Mar 12 01:37:54.980312 kubelet[2697]: E0312 01:37:54.980128 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6c984f8d9-nrkt8_calico-system(5fb257b0-27b4-4ccb-bab4-86fe3218bc99)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6c984f8d9-nrkt8_calico-system(5fb257b0-27b4-4ccb-bab4-86fe3218bc99)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"15ee98230783031b7d6bbfdb3cd4943d9ba25747d016fac25aa8c52728845e97\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6c984f8d9-nrkt8" podUID="5fb257b0-27b4-4ccb-bab4-86fe3218bc99" Mar 12 
01:37:54.980312 kubelet[2697]: E0312 01:37:54.978663 2697 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1502714fdfcba3fb41560cb466aa63722e3376d102f1b749c136a16d1badb708\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:37:54.980312 kubelet[2697]: E0312 01:37:54.980228 2697 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1502714fdfcba3fb41560cb466aa63722e3376d102f1b749c136a16d1badb708\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-qhnmx" Mar 12 01:37:54.980440 kubelet[2697]: E0312 01:37:54.980240 2697 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1502714fdfcba3fb41560cb466aa63722e3376d102f1b749c136a16d1badb708\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-qhnmx" Mar 12 01:37:54.980440 kubelet[2697]: E0312 01:37:54.980286 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-qhnmx_kube-system(0da1befa-e568-43f6-8333-a51d79629123)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-qhnmx_kube-system(0da1befa-e568-43f6-8333-a51d79629123)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1502714fdfcba3fb41560cb466aa63722e3376d102f1b749c136a16d1badb708\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-qhnmx" podUID="0da1befa-e568-43f6-8333-a51d79629123" Mar 12 01:37:54.982112 containerd[1595]: time="2026-03-12T01:37:54.981163858Z" level=error msg="encountered an error cleaning up failed sandbox \"a25c08cc6562a9f6fdaf22116c7953b74bf527067f10f521f05b5d8cb2e3847e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:37:54.982112 containerd[1595]: time="2026-03-12T01:37:54.981236092Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d5bdf55d9-hgfnq,Uid:7120930c-3a55-44b0-911f-6bef14f82bc4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a25c08cc6562a9f6fdaf22116c7953b74bf527067f10f521f05b5d8cb2e3847e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:37:54.982280 kubelet[2697]: E0312 01:37:54.981525 2697 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a25c08cc6562a9f6fdaf22116c7953b74bf527067f10f521f05b5d8cb2e3847e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:37:54.982280 kubelet[2697]: E0312 01:37:54.981554 2697 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a25c08cc6562a9f6fdaf22116c7953b74bf527067f10f521f05b5d8cb2e3847e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-d5bdf55d9-hgfnq" Mar 12 01:37:54.982280 kubelet[2697]: E0312 01:37:54.981569 2697 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a25c08cc6562a9f6fdaf22116c7953b74bf527067f10f521f05b5d8cb2e3847e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-d5bdf55d9-hgfnq" Mar 12 01:37:54.982401 kubelet[2697]: E0312 01:37:54.981663 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d5bdf55d9-hgfnq_calico-system(7120930c-3a55-44b0-911f-6bef14f82bc4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d5bdf55d9-hgfnq_calico-system(7120930c-3a55-44b0-911f-6bef14f82bc4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a25c08cc6562a9f6fdaf22116c7953b74bf527067f10f521f05b5d8cb2e3847e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-d5bdf55d9-hgfnq" podUID="7120930c-3a55-44b0-911f-6bef14f82bc4" Mar 12 01:37:55.022683 containerd[1595]: time="2026-03-12T01:37:55.022518384Z" level=error msg="Failed to destroy network for sandbox \"09cede24ff002c887b0e20d53c4f520ca60336503614997b34a18963583e3367\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:37:55.023177 containerd[1595]: time="2026-03-12T01:37:55.023125909Z" level=error msg="encountered an error cleaning up failed sandbox \"09cede24ff002c887b0e20d53c4f520ca60336503614997b34a18963583e3367\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:37:55.023177 containerd[1595]: time="2026-03-12T01:37:55.023170201Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d5bdf55d9-zlq7p,Uid:8827ea6d-6039-4f86-96be-28f12dc97ece,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"09cede24ff002c887b0e20d53c4f520ca60336503614997b34a18963583e3367\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:37:55.023576 kubelet[2697]: E0312 01:37:55.023516 2697 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"09cede24ff002c887b0e20d53c4f520ca60336503614997b34a18963583e3367\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:37:55.023709 kubelet[2697]: E0312 01:37:55.023594 2697 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09cede24ff002c887b0e20d53c4f520ca60336503614997b34a18963583e3367\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-d5bdf55d9-zlq7p" Mar 12 01:37:55.023709 kubelet[2697]: E0312 01:37:55.023615 2697 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09cede24ff002c887b0e20d53c4f520ca60336503614997b34a18963583e3367\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-d5bdf55d9-zlq7p" Mar 12 01:37:55.023867 kubelet[2697]: E0312 01:37:55.023720 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d5bdf55d9-zlq7p_calico-system(8827ea6d-6039-4f86-96be-28f12dc97ece)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d5bdf55d9-zlq7p_calico-system(8827ea6d-6039-4f86-96be-28f12dc97ece)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"09cede24ff002c887b0e20d53c4f520ca60336503614997b34a18963583e3367\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-d5bdf55d9-zlq7p" podUID="8827ea6d-6039-4f86-96be-28f12dc97ece" Mar 12 01:37:55.371050 containerd[1595]: time="2026-03-12T01:37:55.370988884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-84r2m,Uid:beb8f589-ac04-487b-b459-4f523ba3d20d,Namespace:calico-system,Attempt:0,}" Mar 12 01:37:55.564168 systemd-networkd[1251]: cali3aaad10fe2a: Link UP Mar 12 01:37:55.565910 systemd-networkd[1251]: cali3aaad10fe2a: Gained carrier Mar 12 01:37:55.580214 containerd[1595]: 2026-03-12 01:37:55.432 [ERROR][3875] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 12 01:37:55.580214 containerd[1595]: 2026-03-12 01:37:55.461 [INFO][3875] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--84r2m-eth0 csi-node-driver- calico-system beb8f589-ac04-487b-b459-4f523ba3d20d 773 0 2026-03-12 01:37:43 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-84r2m eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3aaad10fe2a [] [] }} 
ContainerID="6ff127c535ffeeb4992be774891ad4f72366b60767842feaad5e5a7f737c9bc4" Namespace="calico-system" Pod="csi-node-driver-84r2m" WorkloadEndpoint="localhost-k8s-csi--node--driver--84r2m-" Mar 12 01:37:55.580214 containerd[1595]: 2026-03-12 01:37:55.461 [INFO][3875] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6ff127c535ffeeb4992be774891ad4f72366b60767842feaad5e5a7f737c9bc4" Namespace="calico-system" Pod="csi-node-driver-84r2m" WorkloadEndpoint="localhost-k8s-csi--node--driver--84r2m-eth0" Mar 12 01:37:55.580214 containerd[1595]: 2026-03-12 01:37:55.500 [INFO][3893] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6ff127c535ffeeb4992be774891ad4f72366b60767842feaad5e5a7f737c9bc4" HandleID="k8s-pod-network.6ff127c535ffeeb4992be774891ad4f72366b60767842feaad5e5a7f737c9bc4" Workload="localhost-k8s-csi--node--driver--84r2m-eth0" Mar 12 01:37:55.580214 containerd[1595]: 2026-03-12 01:37:55.507 [INFO][3893] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="6ff127c535ffeeb4992be774891ad4f72366b60767842feaad5e5a7f737c9bc4" HandleID="k8s-pod-network.6ff127c535ffeeb4992be774891ad4f72366b60767842feaad5e5a7f737c9bc4" Workload="localhost-k8s-csi--node--driver--84r2m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e630), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-84r2m", "timestamp":"2026-03-12 01:37:55.500554659 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002042c0)} Mar 12 01:37:55.580214 containerd[1595]: 2026-03-12 01:37:55.507 [INFO][3893] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:37:55.580214 containerd[1595]: 2026-03-12 01:37:55.508 [INFO][3893] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 12 01:37:55.580214 containerd[1595]: 2026-03-12 01:37:55.508 [INFO][3893] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 12 01:37:55.580214 containerd[1595]: 2026-03-12 01:37:55.513 [INFO][3893] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.6ff127c535ffeeb4992be774891ad4f72366b60767842feaad5e5a7f737c9bc4" host="localhost" Mar 12 01:37:55.580214 containerd[1595]: 2026-03-12 01:37:55.518 [INFO][3893] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 12 01:37:55.580214 containerd[1595]: 2026-03-12 01:37:55.524 [INFO][3893] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 12 01:37:55.580214 containerd[1595]: 2026-03-12 01:37:55.526 [INFO][3893] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 12 01:37:55.580214 containerd[1595]: 2026-03-12 01:37:55.529 [INFO][3893] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 12 01:37:55.580214 containerd[1595]: 2026-03-12 01:37:55.529 [INFO][3893] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6ff127c535ffeeb4992be774891ad4f72366b60767842feaad5e5a7f737c9bc4" host="localhost" Mar 12 01:37:55.580214 containerd[1595]: 2026-03-12 01:37:55.531 [INFO][3893] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.6ff127c535ffeeb4992be774891ad4f72366b60767842feaad5e5a7f737c9bc4 Mar 12 01:37:55.580214 containerd[1595]: 2026-03-12 01:37:55.536 [INFO][3893] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6ff127c535ffeeb4992be774891ad4f72366b60767842feaad5e5a7f737c9bc4" host="localhost" Mar 12 01:37:55.580214 containerd[1595]: 2026-03-12 01:37:55.543 [INFO][3893] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.6ff127c535ffeeb4992be774891ad4f72366b60767842feaad5e5a7f737c9bc4" host="localhost" Mar 12 01:37:55.580214 containerd[1595]: 2026-03-12 01:37:55.543 [INFO][3893] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.6ff127c535ffeeb4992be774891ad4f72366b60767842feaad5e5a7f737c9bc4" host="localhost" Mar 12 01:37:55.580214 containerd[1595]: 2026-03-12 01:37:55.543 [INFO][3893] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
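Note (not part of the captured log): the ipam trace above shows the CNI plugin taking the host-wide IPAM lock, confirming this node's affinity for the 192.168.88.128/26 block, and claiming 192.168.88.129 from it for csi-node-driver-84r2m. As an illustrative sanity check of those figures only (a sketch using Python's ipaddress module, not Calico code), the block size and membership work out as follows:

    import ipaddress

    # Figures taken from the ipam/ipam.go lines above.
    block = ipaddress.ip_network("192.168.88.128/26")   # node's affine block per "Trying affinity for 192.168.88.128/26"
    assigned = ipaddress.ip_address("192.168.88.129")   # address claimed for csi-node-driver-84r2m

    print(block.num_addresses)     # 64 addresses in a /26 block
    print(assigned in block)       # True: the claimed IP lies inside the affine block
    print(list(block.hosts())[0])  # 192.168.88.129, which matches the first address handed out here

The "About to acquire"/"Acquired"/"Released host-wide IPAM lock" records bracket the whole allocation, so concurrent CNI ADDs on this node serialize around that lock before touching the block.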
Mar 12 01:37:55.580214 containerd[1595]: 2026-03-12 01:37:55.543 [INFO][3893] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="6ff127c535ffeeb4992be774891ad4f72366b60767842feaad5e5a7f737c9bc4" HandleID="k8s-pod-network.6ff127c535ffeeb4992be774891ad4f72366b60767842feaad5e5a7f737c9bc4" Workload="localhost-k8s-csi--node--driver--84r2m-eth0" Mar 12 01:37:55.581231 containerd[1595]: 2026-03-12 01:37:55.549 [INFO][3875] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6ff127c535ffeeb4992be774891ad4f72366b60767842feaad5e5a7f737c9bc4" Namespace="calico-system" Pod="csi-node-driver-84r2m" WorkloadEndpoint="localhost-k8s-csi--node--driver--84r2m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--84r2m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"beb8f589-ac04-487b-b459-4f523ba3d20d", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 37, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-84r2m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3aaad10fe2a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:37:55.581231 containerd[1595]: 2026-03-12 01:37:55.550 [INFO][3875] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="6ff127c535ffeeb4992be774891ad4f72366b60767842feaad5e5a7f737c9bc4" Namespace="calico-system" Pod="csi-node-driver-84r2m" WorkloadEndpoint="localhost-k8s-csi--node--driver--84r2m-eth0" Mar 12 01:37:55.581231 containerd[1595]: 2026-03-12 01:37:55.550 [INFO][3875] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3aaad10fe2a ContainerID="6ff127c535ffeeb4992be774891ad4f72366b60767842feaad5e5a7f737c9bc4" Namespace="calico-system" Pod="csi-node-driver-84r2m" WorkloadEndpoint="localhost-k8s-csi--node--driver--84r2m-eth0" Mar 12 01:37:55.581231 containerd[1595]: 2026-03-12 01:37:55.566 [INFO][3875] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6ff127c535ffeeb4992be774891ad4f72366b60767842feaad5e5a7f737c9bc4" Namespace="calico-system" Pod="csi-node-driver-84r2m" WorkloadEndpoint="localhost-k8s-csi--node--driver--84r2m-eth0" Mar 12 01:37:55.581231 containerd[1595]: 2026-03-12 01:37:55.566 [INFO][3875] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6ff127c535ffeeb4992be774891ad4f72366b60767842feaad5e5a7f737c9bc4" Namespace="calico-system" Pod="csi-node-driver-84r2m" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--84r2m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--84r2m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"beb8f589-ac04-487b-b459-4f523ba3d20d", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 37, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6ff127c535ffeeb4992be774891ad4f72366b60767842feaad5e5a7f737c9bc4", Pod:"csi-node-driver-84r2m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3aaad10fe2a", MAC:"da:16:b4:4f:54:8e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:37:55.581231 containerd[1595]: 2026-03-12 01:37:55.576 [INFO][3875] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6ff127c535ffeeb4992be774891ad4f72366b60767842feaad5e5a7f737c9bc4" Namespace="calico-system" Pod="csi-node-driver-84r2m" WorkloadEndpoint="localhost-k8s-csi--node--driver--84r2m-eth0" Mar 12 01:37:55.607802 containerd[1595]: time="2026-03-12T01:37:55.607466924Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:37:55.607802 containerd[1595]: time="2026-03-12T01:37:55.607510516Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:37:55.607802 containerd[1595]: time="2026-03-12T01:37:55.607520304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:37:55.608060 containerd[1595]: time="2026-03-12T01:37:55.607793221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:37:55.656236 systemd-resolved[1485]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 12 01:37:55.674461 containerd[1595]: time="2026-03-12T01:37:55.674374793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-84r2m,Uid:beb8f589-ac04-487b-b459-4f523ba3d20d,Namespace:calico-system,Attempt:0,} returns sandbox id \"6ff127c535ffeeb4992be774891ad4f72366b60767842feaad5e5a7f737c9bc4\"" Mar 12 01:37:55.679847 containerd[1595]: time="2026-03-12T01:37:55.679743110Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 12 01:37:55.728195 kubelet[2697]: I0312 01:37:55.727526 2697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a25c08cc6562a9f6fdaf22116c7953b74bf527067f10f521f05b5d8cb2e3847e" Mar 12 01:37:55.731208 kubelet[2697]: I0312 01:37:55.731181 2697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9985ffed56d71d590dea0511c336d1f6935e84252655d55fb089f135f2e8ea3" Mar 12 01:37:55.733994 kubelet[2697]: I0312 01:37:55.733831 2697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="431271b3310308fde44be5244bde046b41e6f3c897f378d9dcf06072263a9a71" Mar 12 01:37:55.735535 kubelet[2697]: I0312 01:37:55.735456 2697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1502714fdfcba3fb41560cb466aa63722e3376d102f1b749c136a16d1badb708" Mar 12 01:37:55.741708 containerd[1595]: time="2026-03-12T01:37:55.741538549Z" level=info msg="StopPodSandbox for \"a25c08cc6562a9f6fdaf22116c7953b74bf527067f10f521f05b5d8cb2e3847e\"" Mar 12 01:37:55.742865 containerd[1595]: time="2026-03-12T01:37:55.742773174Z" level=info msg="StopPodSandbox for \"431271b3310308fde44be5244bde046b41e6f3c897f378d9dcf06072263a9a71\"" Mar 12 01:37:55.743157 containerd[1595]: time="2026-03-12T01:37:55.743126771Z" level=info msg="StopPodSandbox for \"b9985ffed56d71d590dea0511c336d1f6935e84252655d55fb089f135f2e8ea3\"" Mar 12 01:37:55.743814 containerd[1595]: time="2026-03-12T01:37:55.743751820Z" level=info msg="StopPodSandbox for \"1502714fdfcba3fb41560cb466aa63722e3376d102f1b749c136a16d1badb708\"" Mar 12 01:37:55.744500 containerd[1595]: time="2026-03-12T01:37:55.744440316Z" level=info msg="Ensure that sandbox a25c08cc6562a9f6fdaf22116c7953b74bf527067f10f521f05b5d8cb2e3847e in task-service has been cleanup successfully" Mar 12 01:37:55.744500 containerd[1595]: time="2026-03-12T01:37:55.744470445Z" level=info msg="Ensure that sandbox b9985ffed56d71d590dea0511c336d1f6935e84252655d55fb089f135f2e8ea3 in task-service has been cleanup successfully" Mar 12 01:37:55.744594 containerd[1595]: time="2026-03-12T01:37:55.744533242Z" level=info msg="Ensure that sandbox 1502714fdfcba3fb41560cb466aa63722e3376d102f1b749c136a16d1badb708 in task-service has been cleanup successfully" Mar 12 01:37:55.745359 containerd[1595]: time="2026-03-12T01:37:55.744474332Z" level=info msg="Ensure that sandbox 431271b3310308fde44be5244bde046b41e6f3c897f378d9dcf06072263a9a71 in task-service has been cleanup successfully" Mar 12 01:37:55.746532 kubelet[2697]: I0312 01:37:55.745707 2697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09cede24ff002c887b0e20d53c4f520ca60336503614997b34a18963583e3367" Mar 12 01:37:55.747983 containerd[1595]: time="2026-03-12T01:37:55.747854797Z" level=info msg="StopPodSandbox for 
\"09cede24ff002c887b0e20d53c4f520ca60336503614997b34a18963583e3367\"" Mar 12 01:37:55.748529 containerd[1595]: time="2026-03-12T01:37:55.748083493Z" level=info msg="Ensure that sandbox 09cede24ff002c887b0e20d53c4f520ca60336503614997b34a18963583e3367 in task-service has been cleanup successfully" Mar 12 01:37:55.755536 kubelet[2697]: I0312 01:37:55.755446 2697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15ee98230783031b7d6bbfdb3cd4943d9ba25747d016fac25aa8c52728845e97" Mar 12 01:37:55.757239 containerd[1595]: time="2026-03-12T01:37:55.757214883Z" level=info msg="StopPodSandbox for \"15ee98230783031b7d6bbfdb3cd4943d9ba25747d016fac25aa8c52728845e97\"" Mar 12 01:37:55.757582 containerd[1595]: time="2026-03-12T01:37:55.757565804Z" level=info msg="Ensure that sandbox 15ee98230783031b7d6bbfdb3cd4943d9ba25747d016fac25aa8c52728845e97 in task-service has been cleanup successfully" Mar 12 01:37:55.784769 kubelet[2697]: I0312 01:37:55.784707 2697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54a61a32678590b691211cd51c6ad1a906f74fbde00b221b6fbefe7ac37b5019" Mar 12 01:37:55.787296 containerd[1595]: time="2026-03-12T01:37:55.787087573Z" level=info msg="StopPodSandbox for \"54a61a32678590b691211cd51c6ad1a906f74fbde00b221b6fbefe7ac37b5019\"" Mar 12 01:37:55.787296 containerd[1595]: time="2026-03-12T01:37:55.787239605Z" level=info msg="Ensure that sandbox 54a61a32678590b691211cd51c6ad1a906f74fbde00b221b6fbefe7ac37b5019 in task-service has been cleanup successfully" Mar 12 01:37:55.799342 kubelet[2697]: I0312 01:37:55.799276 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-cpt86" podStartSLOduration=3.01907444 podStartE2EDuration="12.799256165s" podCreationTimestamp="2026-03-12 01:37:43 +0000 UTC" firstStartedPulling="2026-03-12 01:37:43.688070069 +0000 UTC m=+21.460534708" lastFinishedPulling="2026-03-12 01:37:53.468251794 +0000 UTC m=+31.240716433" observedRunningTime="2026-03-12 01:37:55.797140618 +0000 UTC m=+33.569605289" watchObservedRunningTime="2026-03-12 01:37:55.799256165 +0000 UTC m=+33.571720815" Mar 12 01:37:56.029372 containerd[1595]: 2026-03-12 01:37:55.879 [INFO][4022] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="09cede24ff002c887b0e20d53c4f520ca60336503614997b34a18963583e3367" Mar 12 01:37:56.029372 containerd[1595]: 2026-03-12 01:37:55.883 [INFO][4022] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="09cede24ff002c887b0e20d53c4f520ca60336503614997b34a18963583e3367" iface="eth0" netns="/var/run/netns/cni-1d339dc9-49b6-763a-4194-e1005308bba4" Mar 12 01:37:56.029372 containerd[1595]: 2026-03-12 01:37:55.886 [INFO][4022] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="09cede24ff002c887b0e20d53c4f520ca60336503614997b34a18963583e3367" iface="eth0" netns="/var/run/netns/cni-1d339dc9-49b6-763a-4194-e1005308bba4" Mar 12 01:37:56.029372 containerd[1595]: 2026-03-12 01:37:55.889 [INFO][4022] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="09cede24ff002c887b0e20d53c4f520ca60336503614997b34a18963583e3367" iface="eth0" netns="/var/run/netns/cni-1d339dc9-49b6-763a-4194-e1005308bba4" Mar 12 01:37:56.029372 containerd[1595]: 2026-03-12 01:37:55.891 [INFO][4022] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="09cede24ff002c887b0e20d53c4f520ca60336503614997b34a18963583e3367" Mar 12 01:37:56.029372 containerd[1595]: 2026-03-12 01:37:55.891 [INFO][4022] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="09cede24ff002c887b0e20d53c4f520ca60336503614997b34a18963583e3367" Mar 12 01:37:56.029372 containerd[1595]: 2026-03-12 01:37:56.002 [INFO][4107] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="09cede24ff002c887b0e20d53c4f520ca60336503614997b34a18963583e3367" HandleID="k8s-pod-network.09cede24ff002c887b0e20d53c4f520ca60336503614997b34a18963583e3367" Workload="localhost-k8s-calico--apiserver--d5bdf55d9--zlq7p-eth0" Mar 12 01:37:56.029372 containerd[1595]: 2026-03-12 01:37:56.002 [INFO][4107] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:37:56.029372 containerd[1595]: 2026-03-12 01:37:56.002 [INFO][4107] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:37:56.029372 containerd[1595]: 2026-03-12 01:37:56.013 [WARNING][4107] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="09cede24ff002c887b0e20d53c4f520ca60336503614997b34a18963583e3367" HandleID="k8s-pod-network.09cede24ff002c887b0e20d53c4f520ca60336503614997b34a18963583e3367" Workload="localhost-k8s-calico--apiserver--d5bdf55d9--zlq7p-eth0" Mar 12 01:37:56.029372 containerd[1595]: 2026-03-12 01:37:56.013 [INFO][4107] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="09cede24ff002c887b0e20d53c4f520ca60336503614997b34a18963583e3367" HandleID="k8s-pod-network.09cede24ff002c887b0e20d53c4f520ca60336503614997b34a18963583e3367" Workload="localhost-k8s-calico--apiserver--d5bdf55d9--zlq7p-eth0" Mar 12 01:37:56.029372 containerd[1595]: 2026-03-12 01:37:56.016 [INFO][4107] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:37:56.029372 containerd[1595]: 2026-03-12 01:37:56.022 [INFO][4022] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="09cede24ff002c887b0e20d53c4f520ca60336503614997b34a18963583e3367" Mar 12 01:37:56.031959 containerd[1595]: time="2026-03-12T01:37:56.031692082Z" level=info msg="TearDown network for sandbox \"09cede24ff002c887b0e20d53c4f520ca60336503614997b34a18963583e3367\" successfully" Mar 12 01:37:56.031959 containerd[1595]: time="2026-03-12T01:37:56.031845045Z" level=info msg="StopPodSandbox for \"09cede24ff002c887b0e20d53c4f520ca60336503614997b34a18963583e3367\" returns successfully" Mar 12 01:37:56.033560 containerd[1595]: time="2026-03-12T01:37:56.033468580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d5bdf55d9-zlq7p,Uid:8827ea6d-6039-4f86-96be-28f12dc97ece,Namespace:calico-system,Attempt:1,}" Mar 12 01:37:56.060219 containerd[1595]: 2026-03-12 01:37:55.910 [INFO][4023] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a25c08cc6562a9f6fdaf22116c7953b74bf527067f10f521f05b5d8cb2e3847e" Mar 12 01:37:56.060219 containerd[1595]: 2026-03-12 01:37:55.913 [INFO][4023] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="a25c08cc6562a9f6fdaf22116c7953b74bf527067f10f521f05b5d8cb2e3847e" iface="eth0" netns="/var/run/netns/cni-2ca6dfc7-25a3-e91d-a7c7-b3b1c3fc0851" Mar 12 01:37:56.060219 containerd[1595]: 2026-03-12 01:37:55.913 [INFO][4023] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a25c08cc6562a9f6fdaf22116c7953b74bf527067f10f521f05b5d8cb2e3847e" iface="eth0" netns="/var/run/netns/cni-2ca6dfc7-25a3-e91d-a7c7-b3b1c3fc0851" Mar 12 01:37:56.060219 containerd[1595]: 2026-03-12 01:37:55.932 [INFO][4023] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a25c08cc6562a9f6fdaf22116c7953b74bf527067f10f521f05b5d8cb2e3847e" iface="eth0" netns="/var/run/netns/cni-2ca6dfc7-25a3-e91d-a7c7-b3b1c3fc0851" Mar 12 01:37:56.060219 containerd[1595]: 2026-03-12 01:37:55.932 [INFO][4023] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a25c08cc6562a9f6fdaf22116c7953b74bf527067f10f521f05b5d8cb2e3847e" Mar 12 01:37:56.060219 containerd[1595]: 2026-03-12 01:37:55.932 [INFO][4023] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a25c08cc6562a9f6fdaf22116c7953b74bf527067f10f521f05b5d8cb2e3847e" Mar 12 01:37:56.060219 containerd[1595]: 2026-03-12 01:37:56.009 [INFO][4123] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a25c08cc6562a9f6fdaf22116c7953b74bf527067f10f521f05b5d8cb2e3847e" HandleID="k8s-pod-network.a25c08cc6562a9f6fdaf22116c7953b74bf527067f10f521f05b5d8cb2e3847e" Workload="localhost-k8s-calico--apiserver--d5bdf55d9--hgfnq-eth0" Mar 12 01:37:56.060219 containerd[1595]: 2026-03-12 01:37:56.009 [INFO][4123] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:37:56.060219 containerd[1595]: 2026-03-12 01:37:56.018 [INFO][4123] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:37:56.060219 containerd[1595]: 2026-03-12 01:37:56.028 [WARNING][4123] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="a25c08cc6562a9f6fdaf22116c7953b74bf527067f10f521f05b5d8cb2e3847e" HandleID="k8s-pod-network.a25c08cc6562a9f6fdaf22116c7953b74bf527067f10f521f05b5d8cb2e3847e" Workload="localhost-k8s-calico--apiserver--d5bdf55d9--hgfnq-eth0" Mar 12 01:37:56.060219 containerd[1595]: 2026-03-12 01:37:56.028 [INFO][4123] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a25c08cc6562a9f6fdaf22116c7953b74bf527067f10f521f05b5d8cb2e3847e" HandleID="k8s-pod-network.a25c08cc6562a9f6fdaf22116c7953b74bf527067f10f521f05b5d8cb2e3847e" Workload="localhost-k8s-calico--apiserver--d5bdf55d9--hgfnq-eth0" Mar 12 01:37:56.060219 containerd[1595]: 2026-03-12 01:37:56.033 [INFO][4123] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:37:56.060219 containerd[1595]: 2026-03-12 01:37:56.040 [INFO][4023] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="a25c08cc6562a9f6fdaf22116c7953b74bf527067f10f521f05b5d8cb2e3847e" Mar 12 01:37:56.062068 containerd[1595]: time="2026-03-12T01:37:56.061928335Z" level=info msg="TearDown network for sandbox \"a25c08cc6562a9f6fdaf22116c7953b74bf527067f10f521f05b5d8cb2e3847e\" successfully" Mar 12 01:37:56.062068 containerd[1595]: time="2026-03-12T01:37:56.061968470Z" level=info msg="StopPodSandbox for \"a25c08cc6562a9f6fdaf22116c7953b74bf527067f10f521f05b5d8cb2e3847e\" returns successfully" Mar 12 01:37:56.064132 containerd[1595]: time="2026-03-12T01:37:56.063797165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d5bdf55d9-hgfnq,Uid:7120930c-3a55-44b0-911f-6bef14f82bc4,Namespace:calico-system,Attempt:1,}" Mar 12 01:37:56.086367 containerd[1595]: 2026-03-12 01:37:55.969 [INFO][4075] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="54a61a32678590b691211cd51c6ad1a906f74fbde00b221b6fbefe7ac37b5019" Mar 12 01:37:56.086367 containerd[1595]: 2026-03-12 01:37:55.972 [INFO][4075] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="54a61a32678590b691211cd51c6ad1a906f74fbde00b221b6fbefe7ac37b5019" iface="eth0" netns="/var/run/netns/cni-e7eebad9-b11c-8ac2-2682-fb1bf9eb5830" Mar 12 01:37:56.086367 containerd[1595]: 2026-03-12 01:37:55.972 [INFO][4075] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="54a61a32678590b691211cd51c6ad1a906f74fbde00b221b6fbefe7ac37b5019" iface="eth0" netns="/var/run/netns/cni-e7eebad9-b11c-8ac2-2682-fb1bf9eb5830" Mar 12 01:37:56.086367 containerd[1595]: 2026-03-12 01:37:55.973 [INFO][4075] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="54a61a32678590b691211cd51c6ad1a906f74fbde00b221b6fbefe7ac37b5019" iface="eth0" netns="/var/run/netns/cni-e7eebad9-b11c-8ac2-2682-fb1bf9eb5830" Mar 12 01:37:56.086367 containerd[1595]: 2026-03-12 01:37:55.973 [INFO][4075] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="54a61a32678590b691211cd51c6ad1a906f74fbde00b221b6fbefe7ac37b5019" Mar 12 01:37:56.086367 containerd[1595]: 2026-03-12 01:37:55.973 [INFO][4075] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="54a61a32678590b691211cd51c6ad1a906f74fbde00b221b6fbefe7ac37b5019" Mar 12 01:37:56.086367 containerd[1595]: 2026-03-12 01:37:56.047 [INFO][4142] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="54a61a32678590b691211cd51c6ad1a906f74fbde00b221b6fbefe7ac37b5019" HandleID="k8s-pod-network.54a61a32678590b691211cd51c6ad1a906f74fbde00b221b6fbefe7ac37b5019" Workload="localhost-k8s-coredns--674b8bbfcf--7hmdg-eth0" Mar 12 01:37:56.086367 containerd[1595]: 2026-03-12 01:37:56.047 [INFO][4142] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:37:56.086367 containerd[1595]: 2026-03-12 01:37:56.048 [INFO][4142] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:37:56.086367 containerd[1595]: 2026-03-12 01:37:56.055 [WARNING][4142] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="54a61a32678590b691211cd51c6ad1a906f74fbde00b221b6fbefe7ac37b5019" HandleID="k8s-pod-network.54a61a32678590b691211cd51c6ad1a906f74fbde00b221b6fbefe7ac37b5019" Workload="localhost-k8s-coredns--674b8bbfcf--7hmdg-eth0" Mar 12 01:37:56.086367 containerd[1595]: 2026-03-12 01:37:56.055 [INFO][4142] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="54a61a32678590b691211cd51c6ad1a906f74fbde00b221b6fbefe7ac37b5019" HandleID="k8s-pod-network.54a61a32678590b691211cd51c6ad1a906f74fbde00b221b6fbefe7ac37b5019" Workload="localhost-k8s-coredns--674b8bbfcf--7hmdg-eth0" Mar 12 01:37:56.086367 containerd[1595]: 2026-03-12 01:37:56.057 [INFO][4142] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:37:56.086367 containerd[1595]: 2026-03-12 01:37:56.066 [INFO][4075] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="54a61a32678590b691211cd51c6ad1a906f74fbde00b221b6fbefe7ac37b5019" Mar 12 01:37:56.087272 containerd[1595]: time="2026-03-12T01:37:56.086957128Z" level=info msg="TearDown network for sandbox \"54a61a32678590b691211cd51c6ad1a906f74fbde00b221b6fbefe7ac37b5019\" successfully" Mar 12 01:37:56.087272 containerd[1595]: time="2026-03-12T01:37:56.086987636Z" level=info msg="StopPodSandbox for \"54a61a32678590b691211cd51c6ad1a906f74fbde00b221b6fbefe7ac37b5019\" returns successfully" Mar 12 01:37:56.087930 kubelet[2697]: E0312 01:37:56.087690 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:56.091401 containerd[1595]: time="2026-03-12T01:37:56.091051032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7hmdg,Uid:e561d6e9-adb9-4958-8e4a-34467004f252,Namespace:kube-system,Attempt:1,}" Mar 12 01:37:56.111927 containerd[1595]: 2026-03-12 01:37:55.898 [INFO][4041] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1502714fdfcba3fb41560cb466aa63722e3376d102f1b749c136a16d1badb708" Mar 12 01:37:56.111927 containerd[1595]: 2026-03-12 01:37:55.900 [INFO][4041] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1502714fdfcba3fb41560cb466aa63722e3376d102f1b749c136a16d1badb708" iface="eth0" netns="/var/run/netns/cni-09c67b79-3315-1628-8b7b-6c237bd6801b" Mar 12 01:37:56.111927 containerd[1595]: 2026-03-12 01:37:55.900 [INFO][4041] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1502714fdfcba3fb41560cb466aa63722e3376d102f1b749c136a16d1badb708" iface="eth0" netns="/var/run/netns/cni-09c67b79-3315-1628-8b7b-6c237bd6801b" Mar 12 01:37:56.111927 containerd[1595]: 2026-03-12 01:37:55.901 [INFO][4041] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1502714fdfcba3fb41560cb466aa63722e3376d102f1b749c136a16d1badb708" iface="eth0" netns="/var/run/netns/cni-09c67b79-3315-1628-8b7b-6c237bd6801b" Mar 12 01:37:56.111927 containerd[1595]: 2026-03-12 01:37:55.901 [INFO][4041] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1502714fdfcba3fb41560cb466aa63722e3376d102f1b749c136a16d1badb708" Mar 12 01:37:56.111927 containerd[1595]: 2026-03-12 01:37:55.901 [INFO][4041] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1502714fdfcba3fb41560cb466aa63722e3376d102f1b749c136a16d1badb708" Mar 12 01:37:56.111927 containerd[1595]: 2026-03-12 01:37:56.064 [INFO][4113] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1502714fdfcba3fb41560cb466aa63722e3376d102f1b749c136a16d1badb708" HandleID="k8s-pod-network.1502714fdfcba3fb41560cb466aa63722e3376d102f1b749c136a16d1badb708" Workload="localhost-k8s-coredns--674b8bbfcf--qhnmx-eth0" Mar 12 01:37:56.111927 containerd[1595]: 2026-03-12 01:37:56.065 [INFO][4113] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:37:56.111927 containerd[1595]: 2026-03-12 01:37:56.065 [INFO][4113] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:37:56.111927 containerd[1595]: 2026-03-12 01:37:56.077 [WARNING][4113] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="1502714fdfcba3fb41560cb466aa63722e3376d102f1b749c136a16d1badb708" HandleID="k8s-pod-network.1502714fdfcba3fb41560cb466aa63722e3376d102f1b749c136a16d1badb708" Workload="localhost-k8s-coredns--674b8bbfcf--qhnmx-eth0" Mar 12 01:37:56.111927 containerd[1595]: 2026-03-12 01:37:56.077 [INFO][4113] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1502714fdfcba3fb41560cb466aa63722e3376d102f1b749c136a16d1badb708" HandleID="k8s-pod-network.1502714fdfcba3fb41560cb466aa63722e3376d102f1b749c136a16d1badb708" Workload="localhost-k8s-coredns--674b8bbfcf--qhnmx-eth0" Mar 12 01:37:56.111927 containerd[1595]: 2026-03-12 01:37:56.080 [INFO][4113] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:37:56.111927 containerd[1595]: 2026-03-12 01:37:56.092 [INFO][4041] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1502714fdfcba3fb41560cb466aa63722e3376d102f1b749c136a16d1badb708" Mar 12 01:37:56.112977 containerd[1595]: time="2026-03-12T01:37:56.112228241Z" level=info msg="TearDown network for sandbox \"1502714fdfcba3fb41560cb466aa63722e3376d102f1b749c136a16d1badb708\" successfully" Mar 12 01:37:56.112977 containerd[1595]: time="2026-03-12T01:37:56.112262826Z" level=info msg="StopPodSandbox for \"1502714fdfcba3fb41560cb466aa63722e3376d102f1b749c136a16d1badb708\" returns successfully" Mar 12 01:37:56.113592 kubelet[2697]: E0312 01:37:56.112853 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:56.117543 containerd[1595]: time="2026-03-12T01:37:56.116952611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qhnmx,Uid:0da1befa-e568-43f6-8333-a51d79629123,Namespace:kube-system,Attempt:1,}" Mar 12 01:37:56.119718 containerd[1595]: 2026-03-12 01:37:56.015 [INFO][4047] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="15ee98230783031b7d6bbfdb3cd4943d9ba25747d016fac25aa8c52728845e97" Mar 12 01:37:56.119718 containerd[1595]: 2026-03-12 01:37:56.016 [INFO][4047] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="15ee98230783031b7d6bbfdb3cd4943d9ba25747d016fac25aa8c52728845e97" iface="eth0" netns="/var/run/netns/cni-8ef30bc9-af4c-a54c-395b-c0d19a440bbf" Mar 12 01:37:56.119718 containerd[1595]: 2026-03-12 01:37:56.020 [INFO][4047] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="15ee98230783031b7d6bbfdb3cd4943d9ba25747d016fac25aa8c52728845e97" iface="eth0" netns="/var/run/netns/cni-8ef30bc9-af4c-a54c-395b-c0d19a440bbf" Mar 12 01:37:56.119718 containerd[1595]: 2026-03-12 01:37:56.020 [INFO][4047] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="15ee98230783031b7d6bbfdb3cd4943d9ba25747d016fac25aa8c52728845e97" iface="eth0" netns="/var/run/netns/cni-8ef30bc9-af4c-a54c-395b-c0d19a440bbf" Mar 12 01:37:56.119718 containerd[1595]: 2026-03-12 01:37:56.020 [INFO][4047] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="15ee98230783031b7d6bbfdb3cd4943d9ba25747d016fac25aa8c52728845e97" Mar 12 01:37:56.119718 containerd[1595]: 2026-03-12 01:37:56.020 [INFO][4047] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="15ee98230783031b7d6bbfdb3cd4943d9ba25747d016fac25aa8c52728845e97" Mar 12 01:37:56.119718 containerd[1595]: 2026-03-12 01:37:56.085 [INFO][4152] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="15ee98230783031b7d6bbfdb3cd4943d9ba25747d016fac25aa8c52728845e97" HandleID="k8s-pod-network.15ee98230783031b7d6bbfdb3cd4943d9ba25747d016fac25aa8c52728845e97" Workload="localhost-k8s-calico--kube--controllers--6c984f8d9--nrkt8-eth0" Mar 12 01:37:56.119718 containerd[1595]: 2026-03-12 01:37:56.085 [INFO][4152] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:37:56.119718 containerd[1595]: 2026-03-12 01:37:56.085 [INFO][4152] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:37:56.119718 containerd[1595]: 2026-03-12 01:37:56.097 [WARNING][4152] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="15ee98230783031b7d6bbfdb3cd4943d9ba25747d016fac25aa8c52728845e97" HandleID="k8s-pod-network.15ee98230783031b7d6bbfdb3cd4943d9ba25747d016fac25aa8c52728845e97" Workload="localhost-k8s-calico--kube--controllers--6c984f8d9--nrkt8-eth0" Mar 12 01:37:56.119718 containerd[1595]: 2026-03-12 01:37:56.097 [INFO][4152] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="15ee98230783031b7d6bbfdb3cd4943d9ba25747d016fac25aa8c52728845e97" HandleID="k8s-pod-network.15ee98230783031b7d6bbfdb3cd4943d9ba25747d016fac25aa8c52728845e97" Workload="localhost-k8s-calico--kube--controllers--6c984f8d9--nrkt8-eth0" Mar 12 01:37:56.119718 containerd[1595]: 2026-03-12 01:37:56.101 [INFO][4152] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:37:56.119718 containerd[1595]: 2026-03-12 01:37:56.113 [INFO][4047] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="15ee98230783031b7d6bbfdb3cd4943d9ba25747d016fac25aa8c52728845e97" Mar 12 01:37:56.121745 containerd[1595]: time="2026-03-12T01:37:56.121699495Z" level=info msg="TearDown network for sandbox \"15ee98230783031b7d6bbfdb3cd4943d9ba25747d016fac25aa8c52728845e97\" successfully" Mar 12 01:37:56.121745 containerd[1595]: time="2026-03-12T01:37:56.121741614Z" level=info msg="StopPodSandbox for \"15ee98230783031b7d6bbfdb3cd4943d9ba25747d016fac25aa8c52728845e97\" returns successfully" Mar 12 01:37:56.135025 containerd[1595]: 2026-03-12 01:37:55.920 [INFO][4021] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="431271b3310308fde44be5244bde046b41e6f3c897f378d9dcf06072263a9a71" Mar 12 01:37:56.135025 containerd[1595]: 2026-03-12 01:37:55.923 [INFO][4021] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="431271b3310308fde44be5244bde046b41e6f3c897f378d9dcf06072263a9a71" iface="eth0" netns="/var/run/netns/cni-02396ebc-7b13-b400-d13b-9d1a8b2a2d88" Mar 12 01:37:56.135025 containerd[1595]: 2026-03-12 01:37:55.923 [INFO][4021] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="431271b3310308fde44be5244bde046b41e6f3c897f378d9dcf06072263a9a71" iface="eth0" netns="/var/run/netns/cni-02396ebc-7b13-b400-d13b-9d1a8b2a2d88" Mar 12 01:37:56.135025 containerd[1595]: 2026-03-12 01:37:55.924 [INFO][4021] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="431271b3310308fde44be5244bde046b41e6f3c897f378d9dcf06072263a9a71" iface="eth0" netns="/var/run/netns/cni-02396ebc-7b13-b400-d13b-9d1a8b2a2d88" Mar 12 01:37:56.135025 containerd[1595]: 2026-03-12 01:37:55.924 [INFO][4021] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="431271b3310308fde44be5244bde046b41e6f3c897f378d9dcf06072263a9a71" Mar 12 01:37:56.135025 containerd[1595]: 2026-03-12 01:37:55.924 [INFO][4021] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="431271b3310308fde44be5244bde046b41e6f3c897f378d9dcf06072263a9a71" Mar 12 01:37:56.135025 containerd[1595]: 2026-03-12 01:37:56.088 [INFO][4120] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="431271b3310308fde44be5244bde046b41e6f3c897f378d9dcf06072263a9a71" HandleID="k8s-pod-network.431271b3310308fde44be5244bde046b41e6f3c897f378d9dcf06072263a9a71" Workload="localhost-k8s-goldmane--5b85766d88--cd8ml-eth0" Mar 12 01:37:56.135025 containerd[1595]: 2026-03-12 01:37:56.089 [INFO][4120] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:37:56.135025 containerd[1595]: 2026-03-12 01:37:56.103 [INFO][4120] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:37:56.135025 containerd[1595]: 2026-03-12 01:37:56.117 [WARNING][4120] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="431271b3310308fde44be5244bde046b41e6f3c897f378d9dcf06072263a9a71" HandleID="k8s-pod-network.431271b3310308fde44be5244bde046b41e6f3c897f378d9dcf06072263a9a71" Workload="localhost-k8s-goldmane--5b85766d88--cd8ml-eth0" Mar 12 01:37:56.135025 containerd[1595]: 2026-03-12 01:37:56.117 [INFO][4120] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="431271b3310308fde44be5244bde046b41e6f3c897f378d9dcf06072263a9a71" HandleID="k8s-pod-network.431271b3310308fde44be5244bde046b41e6f3c897f378d9dcf06072263a9a71" Workload="localhost-k8s-goldmane--5b85766d88--cd8ml-eth0" Mar 12 01:37:56.135025 containerd[1595]: 2026-03-12 01:37:56.120 [INFO][4120] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:37:56.135025 containerd[1595]: 2026-03-12 01:37:56.127 [INFO][4021] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="431271b3310308fde44be5244bde046b41e6f3c897f378d9dcf06072263a9a71" Mar 12 01:37:56.136673 containerd[1595]: time="2026-03-12T01:37:56.136295971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c984f8d9-nrkt8,Uid:5fb257b0-27b4-4ccb-bab4-86fe3218bc99,Namespace:calico-system,Attempt:1,}" Mar 12 01:37:56.138294 containerd[1595]: time="2026-03-12T01:37:56.136780676Z" level=info msg="TearDown network for sandbox \"431271b3310308fde44be5244bde046b41e6f3c897f378d9dcf06072263a9a71\" successfully" Mar 12 01:37:56.138294 containerd[1595]: time="2026-03-12T01:37:56.136907922Z" level=info msg="StopPodSandbox for \"431271b3310308fde44be5244bde046b41e6f3c897f378d9dcf06072263a9a71\" returns successfully" Mar 12 01:37:56.138294 containerd[1595]: time="2026-03-12T01:37:56.137534311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-cd8ml,Uid:a12164ec-a3c1-4b91-bb08-d78e4edbc1ad,Namespace:calico-system,Attempt:1,}" Mar 12 01:37:56.167215 containerd[1595]: 2026-03-12 01:37:55.941 [INFO][4015] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b9985ffed56d71d590dea0511c336d1f6935e84252655d55fb089f135f2e8ea3" Mar 12 01:37:56.167215 containerd[1595]: 2026-03-12 01:37:55.942 [INFO][4015] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b9985ffed56d71d590dea0511c336d1f6935e84252655d55fb089f135f2e8ea3" iface="eth0" netns="/var/run/netns/cni-9052415c-1aed-a54f-3336-71022b9e168a" Mar 12 01:37:56.167215 containerd[1595]: 2026-03-12 01:37:55.942 [INFO][4015] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b9985ffed56d71d590dea0511c336d1f6935e84252655d55fb089f135f2e8ea3" iface="eth0" netns="/var/run/netns/cni-9052415c-1aed-a54f-3336-71022b9e168a" Mar 12 01:37:56.167215 containerd[1595]: 2026-03-12 01:37:55.942 [INFO][4015] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b9985ffed56d71d590dea0511c336d1f6935e84252655d55fb089f135f2e8ea3" iface="eth0" netns="/var/run/netns/cni-9052415c-1aed-a54f-3336-71022b9e168a" Mar 12 01:37:56.167215 containerd[1595]: 2026-03-12 01:37:55.942 [INFO][4015] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b9985ffed56d71d590dea0511c336d1f6935e84252655d55fb089f135f2e8ea3" Mar 12 01:37:56.167215 containerd[1595]: 2026-03-12 01:37:55.942 [INFO][4015] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b9985ffed56d71d590dea0511c336d1f6935e84252655d55fb089f135f2e8ea3" Mar 12 01:37:56.167215 containerd[1595]: 2026-03-12 01:37:56.138 [INFO][4125] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b9985ffed56d71d590dea0511c336d1f6935e84252655d55fb089f135f2e8ea3" HandleID="k8s-pod-network.b9985ffed56d71d590dea0511c336d1f6935e84252655d55fb089f135f2e8ea3" Workload="localhost-k8s-whisker--75b746db9f--2kjgw-eth0" Mar 12 01:37:56.167215 containerd[1595]: 2026-03-12 01:37:56.138 [INFO][4125] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:37:56.167215 containerd[1595]: 2026-03-12 01:37:56.138 [INFO][4125] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:37:56.167215 containerd[1595]: 2026-03-12 01:37:56.152 [WARNING][4125] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="b9985ffed56d71d590dea0511c336d1f6935e84252655d55fb089f135f2e8ea3" HandleID="k8s-pod-network.b9985ffed56d71d590dea0511c336d1f6935e84252655d55fb089f135f2e8ea3" Workload="localhost-k8s-whisker--75b746db9f--2kjgw-eth0" Mar 12 01:37:56.167215 containerd[1595]: 2026-03-12 01:37:56.152 [INFO][4125] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b9985ffed56d71d590dea0511c336d1f6935e84252655d55fb089f135f2e8ea3" HandleID="k8s-pod-network.b9985ffed56d71d590dea0511c336d1f6935e84252655d55fb089f135f2e8ea3" Workload="localhost-k8s-whisker--75b746db9f--2kjgw-eth0" Mar 12 01:37:56.167215 containerd[1595]: 2026-03-12 01:37:56.156 [INFO][4125] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:37:56.167215 containerd[1595]: 2026-03-12 01:37:56.159 [INFO][4015] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="b9985ffed56d71d590dea0511c336d1f6935e84252655d55fb089f135f2e8ea3" Mar 12 01:37:56.168746 containerd[1595]: time="2026-03-12T01:37:56.168714317Z" level=info msg="TearDown network for sandbox \"b9985ffed56d71d590dea0511c336d1f6935e84252655d55fb089f135f2e8ea3\" successfully" Mar 12 01:37:56.168851 containerd[1595]: time="2026-03-12T01:37:56.168828058Z" level=info msg="StopPodSandbox for \"b9985ffed56d71d590dea0511c336d1f6935e84252655d55fb089f135f2e8ea3\" returns successfully" Mar 12 01:37:56.278456 kubelet[2697]: I0312 01:37:56.278358 2697 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/754c305b-d35b-428a-a925-3c62be46c832-nginx-config\") pod \"754c305b-d35b-428a-a925-3c62be46c832\" (UID: \"754c305b-d35b-428a-a925-3c62be46c832\") " Mar 12 01:37:56.278716 kubelet[2697]: I0312 01:37:56.278499 2697 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4q899\" (UniqueName: \"kubernetes.io/projected/754c305b-d35b-428a-a925-3c62be46c832-kube-api-access-4q899\") pod \"754c305b-d35b-428a-a925-3c62be46c832\" (UID: \"754c305b-d35b-428a-a925-3c62be46c832\") " Mar 12 01:37:56.278716 kubelet[2697]: I0312 01:37:56.278534 2697 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/754c305b-d35b-428a-a925-3c62be46c832-whisker-backend-key-pair\") pod \"754c305b-d35b-428a-a925-3c62be46c832\" (UID: \"754c305b-d35b-428a-a925-3c62be46c832\") " Mar 12 01:37:56.278716 kubelet[2697]: I0312 01:37:56.278560 2697 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/754c305b-d35b-428a-a925-3c62be46c832-whisker-ca-bundle\") pod \"754c305b-d35b-428a-a925-3c62be46c832\" (UID: \"754c305b-d35b-428a-a925-3c62be46c832\") " Mar 12 01:37:56.282171 kubelet[2697]: I0312 01:37:56.281073 2697 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/754c305b-d35b-428a-a925-3c62be46c832-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "754c305b-d35b-428a-a925-3c62be46c832" (UID: "754c305b-d35b-428a-a925-3c62be46c832"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 12 01:37:56.282171 kubelet[2697]: I0312 01:37:56.282099 2697 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/754c305b-d35b-428a-a925-3c62be46c832-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "754c305b-d35b-428a-a925-3c62be46c832" (UID: "754c305b-d35b-428a-a925-3c62be46c832"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 12 01:37:56.290552 kubelet[2697]: I0312 01:37:56.288453 2697 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/754c305b-d35b-428a-a925-3c62be46c832-kube-api-access-4q899" (OuterVolumeSpecName: "kube-api-access-4q899") pod "754c305b-d35b-428a-a925-3c62be46c832" (UID: "754c305b-d35b-428a-a925-3c62be46c832"). InnerVolumeSpecName "kube-api-access-4q899". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 12 01:37:56.290552 kubelet[2697]: I0312 01:37:56.288686 2697 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/754c305b-d35b-428a-a925-3c62be46c832-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "754c305b-d35b-428a-a925-3c62be46c832" (UID: "754c305b-d35b-428a-a925-3c62be46c832"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 12 01:37:56.370108 systemd-networkd[1251]: calib3caf480b1a: Link UP Mar 12 01:37:56.380924 kubelet[2697]: I0312 01:37:56.379569 2697 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/754c305b-d35b-428a-a925-3c62be46c832-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Mar 12 01:37:56.380924 kubelet[2697]: I0312 01:37:56.380359 2697 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/754c305b-d35b-428a-a925-3c62be46c832-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Mar 12 01:37:56.380924 kubelet[2697]: I0312 01:37:56.380379 2697 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/754c305b-d35b-428a-a925-3c62be46c832-nginx-config\") on node \"localhost\" DevicePath \"\"" Mar 12 01:37:56.380924 kubelet[2697]: I0312 01:37:56.380438 2697 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4q899\" (UniqueName: \"kubernetes.io/projected/754c305b-d35b-428a-a925-3c62be46c832-kube-api-access-4q899\") on node \"localhost\" DevicePath \"\"" Mar 12 01:37:56.385491 systemd-networkd[1251]: calib3caf480b1a: Gained carrier Mar 12 01:37:56.406462 containerd[1595]: 2026-03-12 01:37:56.146 [ERROR][4184] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 12 01:37:56.406462 containerd[1595]: 2026-03-12 01:37:56.169 [INFO][4184] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--d5bdf55d9--hgfnq-eth0 calico-apiserver-d5bdf55d9- calico-system 7120930c-3a55-44b0-911f-6bef14f82bc4 962 0 2026-03-12 01:37:42 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:d5bdf55d9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-d5bdf55d9-hgfnq eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calib3caf480b1a [] [] }} ContainerID="03777379304f64d3259c36303fd1bb877de63b08af7833a07253ec06f3508ceb" Namespace="calico-system" Pod="calico-apiserver-d5bdf55d9-hgfnq" WorkloadEndpoint="localhost-k8s-calico--apiserver--d5bdf55d9--hgfnq-" Mar 12 01:37:56.406462 containerd[1595]: 2026-03-12 01:37:56.169 [INFO][4184] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="03777379304f64d3259c36303fd1bb877de63b08af7833a07253ec06f3508ceb" Namespace="calico-system" Pod="calico-apiserver-d5bdf55d9-hgfnq" WorkloadEndpoint="localhost-k8s-calico--apiserver--d5bdf55d9--hgfnq-eth0" Mar 12 01:37:56.406462 containerd[1595]: 2026-03-12 01:37:56.261 [INFO][4207] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="03777379304f64d3259c36303fd1bb877de63b08af7833a07253ec06f3508ceb" HandleID="k8s-pod-network.03777379304f64d3259c36303fd1bb877de63b08af7833a07253ec06f3508ceb" Workload="localhost-k8s-calico--apiserver--d5bdf55d9--hgfnq-eth0" Mar 12 01:37:56.406462 containerd[1595]: 2026-03-12 01:37:56.274 [INFO][4207] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="03777379304f64d3259c36303fd1bb877de63b08af7833a07253ec06f3508ceb" HandleID="k8s-pod-network.03777379304f64d3259c36303fd1bb877de63b08af7833a07253ec06f3508ceb" Workload="localhost-k8s-calico--apiserver--d5bdf55d9--hgfnq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00037dbb0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-d5bdf55d9-hgfnq", "timestamp":"2026-03-12 01:37:56.261724951 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00015e420)} Mar 12 01:37:56.406462 containerd[1595]: 2026-03-12 01:37:56.274 [INFO][4207] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:37:56.406462 containerd[1595]: 2026-03-12 01:37:56.274 [INFO][4207] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:37:56.406462 containerd[1595]: 2026-03-12 01:37:56.274 [INFO][4207] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 12 01:37:56.406462 containerd[1595]: 2026-03-12 01:37:56.285 [INFO][4207] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.03777379304f64d3259c36303fd1bb877de63b08af7833a07253ec06f3508ceb" host="localhost" Mar 12 01:37:56.406462 containerd[1595]: 2026-03-12 01:37:56.295 [INFO][4207] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 12 01:37:56.406462 containerd[1595]: 2026-03-12 01:37:56.309 [INFO][4207] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 12 01:37:56.406462 containerd[1595]: 2026-03-12 01:37:56.312 [INFO][4207] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 12 01:37:56.406462 containerd[1595]: 2026-03-12 01:37:56.324 [INFO][4207] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 12 01:37:56.406462 containerd[1595]: 2026-03-12 01:37:56.324 [INFO][4207] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.03777379304f64d3259c36303fd1bb877de63b08af7833a07253ec06f3508ceb" host="localhost" Mar 12 01:37:56.406462 containerd[1595]: 2026-03-12 01:37:56.338 [INFO][4207] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.03777379304f64d3259c36303fd1bb877de63b08af7833a07253ec06f3508ceb Mar 12 01:37:56.406462 containerd[1595]: 2026-03-12 01:37:56.345 [INFO][4207] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.03777379304f64d3259c36303fd1bb877de63b08af7833a07253ec06f3508ceb" host="localhost" Mar 12 01:37:56.406462 containerd[1595]: 2026-03-12 01:37:56.354 [INFO][4207] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.03777379304f64d3259c36303fd1bb877de63b08af7833a07253ec06f3508ceb" host="localhost" Mar 12 01:37:56.406462 containerd[1595]: 2026-03-12 01:37:56.354 [INFO][4207] ipam/ipam.go 895: Auto-assigned 1 out of 
1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.03777379304f64d3259c36303fd1bb877de63b08af7833a07253ec06f3508ceb" host="localhost" Mar 12 01:37:56.406462 containerd[1595]: 2026-03-12 01:37:56.354 [INFO][4207] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:37:56.406462 containerd[1595]: 2026-03-12 01:37:56.354 [INFO][4207] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="03777379304f64d3259c36303fd1bb877de63b08af7833a07253ec06f3508ceb" HandleID="k8s-pod-network.03777379304f64d3259c36303fd1bb877de63b08af7833a07253ec06f3508ceb" Workload="localhost-k8s-calico--apiserver--d5bdf55d9--hgfnq-eth0" Mar 12 01:37:56.408380 containerd[1595]: 2026-03-12 01:37:56.361 [INFO][4184] cni-plugin/k8s.go 418: Populated endpoint ContainerID="03777379304f64d3259c36303fd1bb877de63b08af7833a07253ec06f3508ceb" Namespace="calico-system" Pod="calico-apiserver-d5bdf55d9-hgfnq" WorkloadEndpoint="localhost-k8s-calico--apiserver--d5bdf55d9--hgfnq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d5bdf55d9--hgfnq-eth0", GenerateName:"calico-apiserver-d5bdf55d9-", Namespace:"calico-system", SelfLink:"", UID:"7120930c-3a55-44b0-911f-6bef14f82bc4", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 37, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d5bdf55d9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-d5bdf55d9-hgfnq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib3caf480b1a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:37:56.408380 containerd[1595]: 2026-03-12 01:37:56.361 [INFO][4184] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="03777379304f64d3259c36303fd1bb877de63b08af7833a07253ec06f3508ceb" Namespace="calico-system" Pod="calico-apiserver-d5bdf55d9-hgfnq" WorkloadEndpoint="localhost-k8s-calico--apiserver--d5bdf55d9--hgfnq-eth0" Mar 12 01:37:56.408380 containerd[1595]: 2026-03-12 01:37:56.361 [INFO][4184] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib3caf480b1a ContainerID="03777379304f64d3259c36303fd1bb877de63b08af7833a07253ec06f3508ceb" Namespace="calico-system" Pod="calico-apiserver-d5bdf55d9-hgfnq" WorkloadEndpoint="localhost-k8s-calico--apiserver--d5bdf55d9--hgfnq-eth0" Mar 12 01:37:56.408380 containerd[1595]: 2026-03-12 01:37:56.384 [INFO][4184] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="03777379304f64d3259c36303fd1bb877de63b08af7833a07253ec06f3508ceb" Namespace="calico-system" Pod="calico-apiserver-d5bdf55d9-hgfnq" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--d5bdf55d9--hgfnq-eth0" Mar 12 01:37:56.408380 containerd[1595]: 2026-03-12 01:37:56.385 [INFO][4184] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="03777379304f64d3259c36303fd1bb877de63b08af7833a07253ec06f3508ceb" Namespace="calico-system" Pod="calico-apiserver-d5bdf55d9-hgfnq" WorkloadEndpoint="localhost-k8s-calico--apiserver--d5bdf55d9--hgfnq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d5bdf55d9--hgfnq-eth0", GenerateName:"calico-apiserver-d5bdf55d9-", Namespace:"calico-system", SelfLink:"", UID:"7120930c-3a55-44b0-911f-6bef14f82bc4", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 37, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d5bdf55d9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"03777379304f64d3259c36303fd1bb877de63b08af7833a07253ec06f3508ceb", Pod:"calico-apiserver-d5bdf55d9-hgfnq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib3caf480b1a", MAC:"42:ef:65:04:3a:51", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:37:56.408380 containerd[1595]: 2026-03-12 01:37:56.401 [INFO][4184] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="03777379304f64d3259c36303fd1bb877de63b08af7833a07253ec06f3508ceb" Namespace="calico-system" Pod="calico-apiserver-d5bdf55d9-hgfnq" WorkloadEndpoint="localhost-k8s-calico--apiserver--d5bdf55d9--hgfnq-eth0" Mar 12 01:37:56.455795 containerd[1595]: time="2026-03-12T01:37:56.455485797Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:37:56.455795 containerd[1595]: time="2026-03-12T01:37:56.455538716Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:37:56.455795 containerd[1595]: time="2026-03-12T01:37:56.455560486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:37:56.456177 containerd[1595]: time="2026-03-12T01:37:56.456004421Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:37:56.457854 systemd-networkd[1251]: cali0a03cf32e34: Link UP Mar 12 01:37:56.459220 systemd-networkd[1251]: cali0a03cf32e34: Gained carrier Mar 12 01:37:56.482326 containerd[1595]: 2026-03-12 01:37:56.174 [ERROR][4164] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 12 01:37:56.482326 containerd[1595]: 2026-03-12 01:37:56.195 [INFO][4164] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--d5bdf55d9--zlq7p-eth0 calico-apiserver-d5bdf55d9- calico-system 8827ea6d-6039-4f86-96be-28f12dc97ece 959 0 2026-03-12 01:37:42 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:d5bdf55d9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-d5bdf55d9-zlq7p eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali0a03cf32e34 [] [] }} ContainerID="f6881f4553b64b36d040d4ad8f6cc79c405b6396ff144e8013dce134e590f1dc" Namespace="calico-system" Pod="calico-apiserver-d5bdf55d9-zlq7p" WorkloadEndpoint="localhost-k8s-calico--apiserver--d5bdf55d9--zlq7p-" Mar 12 01:37:56.482326 containerd[1595]: 2026-03-12 01:37:56.195 [INFO][4164] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f6881f4553b64b36d040d4ad8f6cc79c405b6396ff144e8013dce134e590f1dc" Namespace="calico-system" Pod="calico-apiserver-d5bdf55d9-zlq7p" WorkloadEndpoint="localhost-k8s-calico--apiserver--d5bdf55d9--zlq7p-eth0" Mar 12 01:37:56.482326 containerd[1595]: 2026-03-12 01:37:56.252 [INFO][4215] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f6881f4553b64b36d040d4ad8f6cc79c405b6396ff144e8013dce134e590f1dc" HandleID="k8s-pod-network.f6881f4553b64b36d040d4ad8f6cc79c405b6396ff144e8013dce134e590f1dc" Workload="localhost-k8s-calico--apiserver--d5bdf55d9--zlq7p-eth0" Mar 12 01:37:56.482326 containerd[1595]: 2026-03-12 01:37:56.276 [INFO][4215] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f6881f4553b64b36d040d4ad8f6cc79c405b6396ff144e8013dce134e590f1dc" HandleID="k8s-pod-network.f6881f4553b64b36d040d4ad8f6cc79c405b6396ff144e8013dce134e590f1dc" Workload="localhost-k8s-calico--apiserver--d5bdf55d9--zlq7p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fbb0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-d5bdf55d9-zlq7p", "timestamp":"2026-03-12 01:37:56.252390347 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003ac000)} Mar 12 01:37:56.482326 containerd[1595]: 2026-03-12 01:37:56.276 [INFO][4215] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:37:56.482326 containerd[1595]: 2026-03-12 01:37:56.355 [INFO][4215] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 12 01:37:56.482326 containerd[1595]: 2026-03-12 01:37:56.355 [INFO][4215] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 12 01:37:56.482326 containerd[1595]: 2026-03-12 01:37:56.379 [INFO][4215] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f6881f4553b64b36d040d4ad8f6cc79c405b6396ff144e8013dce134e590f1dc" host="localhost" Mar 12 01:37:56.482326 containerd[1595]: 2026-03-12 01:37:56.395 [INFO][4215] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 12 01:37:56.482326 containerd[1595]: 2026-03-12 01:37:56.410 [INFO][4215] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 12 01:37:56.482326 containerd[1595]: 2026-03-12 01:37:56.413 [INFO][4215] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 12 01:37:56.482326 containerd[1595]: 2026-03-12 01:37:56.417 [INFO][4215] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 12 01:37:56.482326 containerd[1595]: 2026-03-12 01:37:56.417 [INFO][4215] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f6881f4553b64b36d040d4ad8f6cc79c405b6396ff144e8013dce134e590f1dc" host="localhost" Mar 12 01:37:56.482326 containerd[1595]: 2026-03-12 01:37:56.420 [INFO][4215] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f6881f4553b64b36d040d4ad8f6cc79c405b6396ff144e8013dce134e590f1dc Mar 12 01:37:56.482326 containerd[1595]: 2026-03-12 01:37:56.427 [INFO][4215] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f6881f4553b64b36d040d4ad8f6cc79c405b6396ff144e8013dce134e590f1dc" host="localhost" Mar 12 01:37:56.482326 containerd[1595]: 2026-03-12 01:37:56.436 [INFO][4215] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.f6881f4553b64b36d040d4ad8f6cc79c405b6396ff144e8013dce134e590f1dc" host="localhost" Mar 12 01:37:56.482326 containerd[1595]: 2026-03-12 01:37:56.436 [INFO][4215] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.f6881f4553b64b36d040d4ad8f6cc79c405b6396ff144e8013dce134e590f1dc" host="localhost" Mar 12 01:37:56.482326 containerd[1595]: 2026-03-12 01:37:56.436 [INFO][4215] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 12 01:37:56.482326 containerd[1595]: 2026-03-12 01:37:56.436 [INFO][4215] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="f6881f4553b64b36d040d4ad8f6cc79c405b6396ff144e8013dce134e590f1dc" HandleID="k8s-pod-network.f6881f4553b64b36d040d4ad8f6cc79c405b6396ff144e8013dce134e590f1dc" Workload="localhost-k8s-calico--apiserver--d5bdf55d9--zlq7p-eth0" Mar 12 01:37:56.483026 containerd[1595]: 2026-03-12 01:37:56.445 [INFO][4164] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f6881f4553b64b36d040d4ad8f6cc79c405b6396ff144e8013dce134e590f1dc" Namespace="calico-system" Pod="calico-apiserver-d5bdf55d9-zlq7p" WorkloadEndpoint="localhost-k8s-calico--apiserver--d5bdf55d9--zlq7p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d5bdf55d9--zlq7p-eth0", GenerateName:"calico-apiserver-d5bdf55d9-", Namespace:"calico-system", SelfLink:"", UID:"8827ea6d-6039-4f86-96be-28f12dc97ece", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 37, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d5bdf55d9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-d5bdf55d9-zlq7p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali0a03cf32e34", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:37:56.483026 containerd[1595]: 2026-03-12 01:37:56.445 [INFO][4164] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="f6881f4553b64b36d040d4ad8f6cc79c405b6396ff144e8013dce134e590f1dc" Namespace="calico-system" Pod="calico-apiserver-d5bdf55d9-zlq7p" WorkloadEndpoint="localhost-k8s-calico--apiserver--d5bdf55d9--zlq7p-eth0" Mar 12 01:37:56.483026 containerd[1595]: 2026-03-12 01:37:56.446 [INFO][4164] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0a03cf32e34 ContainerID="f6881f4553b64b36d040d4ad8f6cc79c405b6396ff144e8013dce134e590f1dc" Namespace="calico-system" Pod="calico-apiserver-d5bdf55d9-zlq7p" WorkloadEndpoint="localhost-k8s-calico--apiserver--d5bdf55d9--zlq7p-eth0" Mar 12 01:37:56.483026 containerd[1595]: 2026-03-12 01:37:56.460 [INFO][4164] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f6881f4553b64b36d040d4ad8f6cc79c405b6396ff144e8013dce134e590f1dc" Namespace="calico-system" Pod="calico-apiserver-d5bdf55d9-zlq7p" WorkloadEndpoint="localhost-k8s-calico--apiserver--d5bdf55d9--zlq7p-eth0" Mar 12 01:37:56.483026 containerd[1595]: 2026-03-12 01:37:56.460 [INFO][4164] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="f6881f4553b64b36d040d4ad8f6cc79c405b6396ff144e8013dce134e590f1dc" Namespace="calico-system" Pod="calico-apiserver-d5bdf55d9-zlq7p" WorkloadEndpoint="localhost-k8s-calico--apiserver--d5bdf55d9--zlq7p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d5bdf55d9--zlq7p-eth0", GenerateName:"calico-apiserver-d5bdf55d9-", Namespace:"calico-system", SelfLink:"", UID:"8827ea6d-6039-4f86-96be-28f12dc97ece", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 37, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d5bdf55d9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f6881f4553b64b36d040d4ad8f6cc79c405b6396ff144e8013dce134e590f1dc", Pod:"calico-apiserver-d5bdf55d9-zlq7p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali0a03cf32e34", MAC:"46:da:99:f8:a8:93", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:37:56.483026 containerd[1595]: 2026-03-12 01:37:56.473 [INFO][4164] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f6881f4553b64b36d040d4ad8f6cc79c405b6396ff144e8013dce134e590f1dc" Namespace="calico-system" Pod="calico-apiserver-d5bdf55d9-zlq7p" WorkloadEndpoint="localhost-k8s-calico--apiserver--d5bdf55d9--zlq7p-eth0" Mar 12 01:37:56.507076 systemd[1]: run-netns-cni\x2d2ca6dfc7\x2d25a3\x2de91d\x2da7c7\x2db3b1c3fc0851.mount: Deactivated successfully. Mar 12 01:37:56.507380 systemd[1]: run-netns-cni\x2d1d339dc9\x2d49b6\x2d763a\x2d4194\x2de1005308bba4.mount: Deactivated successfully. Mar 12 01:37:56.507591 systemd[1]: run-netns-cni\x2d9052415c\x2d1aed\x2da54f\x2d3336\x2d71022b9e168a.mount: Deactivated successfully. Mar 12 01:37:56.512728 systemd[1]: run-netns-cni\x2d02396ebc\x2d7b13\x2db400\x2dd13b\x2d9d1a8b2a2d88.mount: Deactivated successfully. Mar 12 01:37:56.512978 systemd[1]: run-netns-cni\x2d8ef30bc9\x2daf4c\x2da54c\x2d395b\x2dc0d19a440bbf.mount: Deactivated successfully. Mar 12 01:37:56.513182 systemd[1]: run-netns-cni\x2d09c67b79\x2d3315\x2d1628\x2d8b7b\x2d6c237bd6801b.mount: Deactivated successfully. Mar 12 01:37:56.513381 systemd[1]: run-netns-cni\x2de7eebad9\x2db11c\x2d8ac2\x2d2682\x2dfb1bf9eb5830.mount: Deactivated successfully. Mar 12 01:37:56.513589 systemd[1]: var-lib-kubelet-pods-754c305b\x2dd35b\x2d428a\x2da925\x2d3c62be46c832-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4q899.mount: Deactivated successfully. Mar 12 01:37:56.513963 systemd[1]: var-lib-kubelet-pods-754c305b\x2dd35b\x2d428a\x2da925\x2d3c62be46c832-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Mar 12 01:37:56.541439 systemd-resolved[1485]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 12 01:37:56.591287 containerd[1595]: time="2026-03-12T01:37:56.591110428Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:37:56.593230 containerd[1595]: time="2026-03-12T01:37:56.593106615Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:37:56.593504 containerd[1595]: time="2026-03-12T01:37:56.593211120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:37:56.593723 containerd[1595]: time="2026-03-12T01:37:56.593693244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:37:56.631767 systemd-networkd[1251]: cali3aaad10fe2a: Gained IPv6LL Mar 12 01:37:56.650472 systemd-networkd[1251]: calidf97804c3ce: Link UP Mar 12 01:37:56.651584 systemd-resolved[1485]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 12 01:37:56.661936 systemd-networkd[1251]: calidf97804c3ce: Gained carrier Mar 12 01:37:56.697161 containerd[1595]: 2026-03-12 01:37:56.206 [ERROR][4193] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 12 01:37:56.697161 containerd[1595]: 2026-03-12 01:37:56.224 [INFO][4193] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--7hmdg-eth0 coredns-674b8bbfcf- kube-system e561d6e9-adb9-4958-8e4a-34467004f252 965 0 2026-03-12 01:37:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-7hmdg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calidf97804c3ce [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0f13d3d656a9533814ba70e386f9f443fb452db70ff0e5ce74ee01a7b474c7fa" Namespace="kube-system" Pod="coredns-674b8bbfcf-7hmdg" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7hmdg-" Mar 12 01:37:56.697161 containerd[1595]: 2026-03-12 01:37:56.224 [INFO][4193] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0f13d3d656a9533814ba70e386f9f443fb452db70ff0e5ce74ee01a7b474c7fa" Namespace="kube-system" Pod="coredns-674b8bbfcf-7hmdg" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7hmdg-eth0" Mar 12 01:37:56.697161 containerd[1595]: 2026-03-12 01:37:56.342 [INFO][4245] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0f13d3d656a9533814ba70e386f9f443fb452db70ff0e5ce74ee01a7b474c7fa" HandleID="k8s-pod-network.0f13d3d656a9533814ba70e386f9f443fb452db70ff0e5ce74ee01a7b474c7fa" Workload="localhost-k8s-coredns--674b8bbfcf--7hmdg-eth0" Mar 12 01:37:56.697161 containerd[1595]: 2026-03-12 01:37:56.356 [INFO][4245] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="0f13d3d656a9533814ba70e386f9f443fb452db70ff0e5ce74ee01a7b474c7fa" HandleID="k8s-pod-network.0f13d3d656a9533814ba70e386f9f443fb452db70ff0e5ce74ee01a7b474c7fa" 
Workload="localhost-k8s-coredns--674b8bbfcf--7hmdg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00044cfe0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-7hmdg", "timestamp":"2026-03-12 01:37:56.34218416 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000d5340)} Mar 12 01:37:56.697161 containerd[1595]: 2026-03-12 01:37:56.356 [INFO][4245] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:37:56.697161 containerd[1595]: 2026-03-12 01:37:56.439 [INFO][4245] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:37:56.697161 containerd[1595]: 2026-03-12 01:37:56.439 [INFO][4245] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 12 01:37:56.697161 containerd[1595]: 2026-03-12 01:37:56.480 [INFO][4245] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.0f13d3d656a9533814ba70e386f9f443fb452db70ff0e5ce74ee01a7b474c7fa" host="localhost" Mar 12 01:37:56.697161 containerd[1595]: 2026-03-12 01:37:56.496 [INFO][4245] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 12 01:37:56.697161 containerd[1595]: 2026-03-12 01:37:56.524 [INFO][4245] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 12 01:37:56.697161 containerd[1595]: 2026-03-12 01:37:56.532 [INFO][4245] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 12 01:37:56.697161 containerd[1595]: 2026-03-12 01:37:56.539 [INFO][4245] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 12 01:37:56.697161 containerd[1595]: 2026-03-12 01:37:56.539 [INFO][4245] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0f13d3d656a9533814ba70e386f9f443fb452db70ff0e5ce74ee01a7b474c7fa" host="localhost" Mar 12 01:37:56.697161 containerd[1595]: 2026-03-12 01:37:56.562 [INFO][4245] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.0f13d3d656a9533814ba70e386f9f443fb452db70ff0e5ce74ee01a7b474c7fa Mar 12 01:37:56.697161 containerd[1595]: 2026-03-12 01:37:56.571 [INFO][4245] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0f13d3d656a9533814ba70e386f9f443fb452db70ff0e5ce74ee01a7b474c7fa" host="localhost" Mar 12 01:37:56.697161 containerd[1595]: 2026-03-12 01:37:56.585 [INFO][4245] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.0f13d3d656a9533814ba70e386f9f443fb452db70ff0e5ce74ee01a7b474c7fa" host="localhost" Mar 12 01:37:56.697161 containerd[1595]: 2026-03-12 01:37:56.585 [INFO][4245] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.0f13d3d656a9533814ba70e386f9f443fb452db70ff0e5ce74ee01a7b474c7fa" host="localhost" Mar 12 01:37:56.697161 containerd[1595]: 2026-03-12 01:37:56.585 [INFO][4245] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 12 01:37:56.697161 containerd[1595]: 2026-03-12 01:37:56.585 [INFO][4245] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="0f13d3d656a9533814ba70e386f9f443fb452db70ff0e5ce74ee01a7b474c7fa" HandleID="k8s-pod-network.0f13d3d656a9533814ba70e386f9f443fb452db70ff0e5ce74ee01a7b474c7fa" Workload="localhost-k8s-coredns--674b8bbfcf--7hmdg-eth0" Mar 12 01:37:56.701294 containerd[1595]: 2026-03-12 01:37:56.615 [INFO][4193] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0f13d3d656a9533814ba70e386f9f443fb452db70ff0e5ce74ee01a7b474c7fa" Namespace="kube-system" Pod="coredns-674b8bbfcf-7hmdg" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7hmdg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--7hmdg-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e561d6e9-adb9-4958-8e4a-34467004f252", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 37, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-7hmdg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidf97804c3ce", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:37:56.701294 containerd[1595]: 2026-03-12 01:37:56.616 [INFO][4193] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="0f13d3d656a9533814ba70e386f9f443fb452db70ff0e5ce74ee01a7b474c7fa" Namespace="kube-system" Pod="coredns-674b8bbfcf-7hmdg" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7hmdg-eth0" Mar 12 01:37:56.701294 containerd[1595]: 2026-03-12 01:37:56.620 [INFO][4193] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidf97804c3ce ContainerID="0f13d3d656a9533814ba70e386f9f443fb452db70ff0e5ce74ee01a7b474c7fa" Namespace="kube-system" Pod="coredns-674b8bbfcf-7hmdg" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7hmdg-eth0" Mar 12 01:37:56.701294 containerd[1595]: 2026-03-12 01:37:56.665 [INFO][4193] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0f13d3d656a9533814ba70e386f9f443fb452db70ff0e5ce74ee01a7b474c7fa" Namespace="kube-system" Pod="coredns-674b8bbfcf-7hmdg" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7hmdg-eth0" Mar 12 01:37:56.701294 
containerd[1595]: 2026-03-12 01:37:56.674 [INFO][4193] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0f13d3d656a9533814ba70e386f9f443fb452db70ff0e5ce74ee01a7b474c7fa" Namespace="kube-system" Pod="coredns-674b8bbfcf-7hmdg" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7hmdg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--7hmdg-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e561d6e9-adb9-4958-8e4a-34467004f252", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 37, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0f13d3d656a9533814ba70e386f9f443fb452db70ff0e5ce74ee01a7b474c7fa", Pod:"coredns-674b8bbfcf-7hmdg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidf97804c3ce", MAC:"62:47:73:70:6c:e3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:37:56.701294 containerd[1595]: 2026-03-12 01:37:56.687 [INFO][4193] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0f13d3d656a9533814ba70e386f9f443fb452db70ff0e5ce74ee01a7b474c7fa" Namespace="kube-system" Pod="coredns-674b8bbfcf-7hmdg" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7hmdg-eth0" Mar 12 01:37:56.708422 systemd-networkd[1251]: cali410349c6c43: Link UP Mar 12 01:37:56.714017 systemd-networkd[1251]: cali410349c6c43: Gained carrier Mar 12 01:37:56.766083 containerd[1595]: 2026-03-12 01:37:56.279 [ERROR][4228] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 12 01:37:56.766083 containerd[1595]: 2026-03-12 01:37:56.313 [INFO][4228] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6c984f8d9--nrkt8-eth0 calico-kube-controllers-6c984f8d9- calico-system 5fb257b0-27b4-4ccb-bab4-86fe3218bc99 966 0 2026-03-12 01:37:43 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6c984f8d9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] 
[] [] []} {k8s localhost calico-kube-controllers-6c984f8d9-nrkt8 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali410349c6c43 [] [] }} ContainerID="e8d638f2c78d492a1abc864504f7bd55e485063d146eeae251595847c926d407" Namespace="calico-system" Pod="calico-kube-controllers-6c984f8d9-nrkt8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c984f8d9--nrkt8-" Mar 12 01:37:56.766083 containerd[1595]: 2026-03-12 01:37:56.313 [INFO][4228] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e8d638f2c78d492a1abc864504f7bd55e485063d146eeae251595847c926d407" Namespace="calico-system" Pod="calico-kube-controllers-6c984f8d9-nrkt8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c984f8d9--nrkt8-eth0" Mar 12 01:37:56.766083 containerd[1595]: 2026-03-12 01:37:56.373 [INFO][4282] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e8d638f2c78d492a1abc864504f7bd55e485063d146eeae251595847c926d407" HandleID="k8s-pod-network.e8d638f2c78d492a1abc864504f7bd55e485063d146eeae251595847c926d407" Workload="localhost-k8s-calico--kube--controllers--6c984f8d9--nrkt8-eth0" Mar 12 01:37:56.766083 containerd[1595]: 2026-03-12 01:37:56.400 [INFO][4282] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="e8d638f2c78d492a1abc864504f7bd55e485063d146eeae251595847c926d407" HandleID="k8s-pod-network.e8d638f2c78d492a1abc864504f7bd55e485063d146eeae251595847c926d407" Workload="localhost-k8s-calico--kube--controllers--6c984f8d9--nrkt8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f620), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6c984f8d9-nrkt8", "timestamp":"2026-03-12 01:37:56.373333066 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000fc2c0)} Mar 12 01:37:56.766083 containerd[1595]: 2026-03-12 01:37:56.400 [INFO][4282] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:37:56.766083 containerd[1595]: 2026-03-12 01:37:56.589 [INFO][4282] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 12 01:37:56.766083 containerd[1595]: 2026-03-12 01:37:56.591 [INFO][4282] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 12 01:37:56.766083 containerd[1595]: 2026-03-12 01:37:56.612 [INFO][4282] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.e8d638f2c78d492a1abc864504f7bd55e485063d146eeae251595847c926d407" host="localhost" Mar 12 01:37:56.766083 containerd[1595]: 2026-03-12 01:37:56.652 [INFO][4282] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 12 01:37:56.766083 containerd[1595]: 2026-03-12 01:37:56.660 [INFO][4282] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 12 01:37:56.766083 containerd[1595]: 2026-03-12 01:37:56.663 [INFO][4282] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 12 01:37:56.766083 containerd[1595]: 2026-03-12 01:37:56.666 [INFO][4282] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 12 01:37:56.766083 containerd[1595]: 2026-03-12 01:37:56.666 [INFO][4282] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e8d638f2c78d492a1abc864504f7bd55e485063d146eeae251595847c926d407" host="localhost" Mar 12 01:37:56.766083 containerd[1595]: 2026-03-12 01:37:56.670 [INFO][4282] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.e8d638f2c78d492a1abc864504f7bd55e485063d146eeae251595847c926d407 Mar 12 01:37:56.766083 containerd[1595]: 2026-03-12 01:37:56.675 [INFO][4282] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e8d638f2c78d492a1abc864504f7bd55e485063d146eeae251595847c926d407" host="localhost" Mar 12 01:37:56.766083 containerd[1595]: 2026-03-12 01:37:56.691 [INFO][4282] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.e8d638f2c78d492a1abc864504f7bd55e485063d146eeae251595847c926d407" host="localhost" Mar 12 01:37:56.766083 containerd[1595]: 2026-03-12 01:37:56.691 [INFO][4282] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.e8d638f2c78d492a1abc864504f7bd55e485063d146eeae251595847c926d407" host="localhost" Mar 12 01:37:56.766083 containerd[1595]: 2026-03-12 01:37:56.691 [INFO][4282] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 12 01:37:56.766083 containerd[1595]: 2026-03-12 01:37:56.691 [INFO][4282] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="e8d638f2c78d492a1abc864504f7bd55e485063d146eeae251595847c926d407" HandleID="k8s-pod-network.e8d638f2c78d492a1abc864504f7bd55e485063d146eeae251595847c926d407" Workload="localhost-k8s-calico--kube--controllers--6c984f8d9--nrkt8-eth0" Mar 12 01:37:56.766901 containerd[1595]: 2026-03-12 01:37:56.695 [INFO][4228] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e8d638f2c78d492a1abc864504f7bd55e485063d146eeae251595847c926d407" Namespace="calico-system" Pod="calico-kube-controllers-6c984f8d9-nrkt8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c984f8d9--nrkt8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6c984f8d9--nrkt8-eth0", GenerateName:"calico-kube-controllers-6c984f8d9-", Namespace:"calico-system", SelfLink:"", UID:"5fb257b0-27b4-4ccb-bab4-86fe3218bc99", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 37, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c984f8d9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6c984f8d9-nrkt8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali410349c6c43", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:37:56.766901 containerd[1595]: 2026-03-12 01:37:56.695 [INFO][4228] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="e8d638f2c78d492a1abc864504f7bd55e485063d146eeae251595847c926d407" Namespace="calico-system" Pod="calico-kube-controllers-6c984f8d9-nrkt8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c984f8d9--nrkt8-eth0" Mar 12 01:37:56.766901 containerd[1595]: 2026-03-12 01:37:56.696 [INFO][4228] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali410349c6c43 ContainerID="e8d638f2c78d492a1abc864504f7bd55e485063d146eeae251595847c926d407" Namespace="calico-system" Pod="calico-kube-controllers-6c984f8d9-nrkt8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c984f8d9--nrkt8-eth0" Mar 12 01:37:56.766901 containerd[1595]: 2026-03-12 01:37:56.727 [INFO][4228] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e8d638f2c78d492a1abc864504f7bd55e485063d146eeae251595847c926d407" Namespace="calico-system" Pod="calico-kube-controllers-6c984f8d9-nrkt8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c984f8d9--nrkt8-eth0" Mar 12 01:37:56.766901 containerd[1595]: 2026-03-12 01:37:56.728 [INFO][4228] cni-plugin/k8s.go 446: Added Mac, interface name, 
and active container ID to endpoint ContainerID="e8d638f2c78d492a1abc864504f7bd55e485063d146eeae251595847c926d407" Namespace="calico-system" Pod="calico-kube-controllers-6c984f8d9-nrkt8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c984f8d9--nrkt8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6c984f8d9--nrkt8-eth0", GenerateName:"calico-kube-controllers-6c984f8d9-", Namespace:"calico-system", SelfLink:"", UID:"5fb257b0-27b4-4ccb-bab4-86fe3218bc99", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 37, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c984f8d9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e8d638f2c78d492a1abc864504f7bd55e485063d146eeae251595847c926d407", Pod:"calico-kube-controllers-6c984f8d9-nrkt8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali410349c6c43", MAC:"62:4d:5a:65:99:8f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:37:56.766901 containerd[1595]: 2026-03-12 01:37:56.745 [INFO][4228] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e8d638f2c78d492a1abc864504f7bd55e485063d146eeae251595847c926d407" Namespace="calico-system" Pod="calico-kube-controllers-6c984f8d9-nrkt8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c984f8d9--nrkt8-eth0" Mar 12 01:37:56.826675 containerd[1595]: time="2026-03-12T01:37:56.826351384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d5bdf55d9-zlq7p,Uid:8827ea6d-6039-4f86-96be-28f12dc97ece,Namespace:calico-system,Attempt:1,} returns sandbox id \"f6881f4553b64b36d040d4ad8f6cc79c405b6396ff144e8013dce134e590f1dc\"" Mar 12 01:37:56.898823 systemd-networkd[1251]: cali64d5ee35c22: Link UP Mar 12 01:37:56.899751 containerd[1595]: time="2026-03-12T01:37:56.899467304Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:37:56.899751 containerd[1595]: time="2026-03-12T01:37:56.899529991Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:37:56.899751 containerd[1595]: time="2026-03-12T01:37:56.899556671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:37:56.900326 systemd-networkd[1251]: cali64d5ee35c22: Gained carrier Mar 12 01:37:56.902792 containerd[1595]: time="2026-03-12T01:37:56.899976330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:37:56.921598 containerd[1595]: time="2026-03-12T01:37:56.921474218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d5bdf55d9-hgfnq,Uid:7120930c-3a55-44b0-911f-6bef14f82bc4,Namespace:calico-system,Attempt:1,} returns sandbox id \"03777379304f64d3259c36303fd1bb877de63b08af7833a07253ec06f3508ceb\"" Mar 12 01:37:56.930611 containerd[1595]: 2026-03-12 01:37:56.273 [ERROR][4248] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 12 01:37:56.930611 containerd[1595]: 2026-03-12 01:37:56.302 [INFO][4248] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--5b85766d88--cd8ml-eth0 goldmane-5b85766d88- calico-system a12164ec-a3c1-4b91-bb08-d78e4edbc1ad 963 0 2026-03-12 01:37:42 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-5b85766d88-cd8ml eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali64d5ee35c22 [] [] }} ContainerID="afa5cd5eec551507a0709773f2a5e374adca62aee50d5616cbf8ff4611af448f" Namespace="calico-system" Pod="goldmane-5b85766d88-cd8ml" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--cd8ml-" Mar 12 01:37:56.930611 containerd[1595]: 2026-03-12 01:37:56.302 [INFO][4248] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="afa5cd5eec551507a0709773f2a5e374adca62aee50d5616cbf8ff4611af448f" Namespace="calico-system" Pod="goldmane-5b85766d88-cd8ml" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--cd8ml-eth0" Mar 12 01:37:56.930611 containerd[1595]: 2026-03-12 01:37:56.394 [INFO][4277] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="afa5cd5eec551507a0709773f2a5e374adca62aee50d5616cbf8ff4611af448f" HandleID="k8s-pod-network.afa5cd5eec551507a0709773f2a5e374adca62aee50d5616cbf8ff4611af448f" Workload="localhost-k8s-goldmane--5b85766d88--cd8ml-eth0" Mar 12 01:37:56.930611 containerd[1595]: 2026-03-12 01:37:56.412 [INFO][4277] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="afa5cd5eec551507a0709773f2a5e374adca62aee50d5616cbf8ff4611af448f" HandleID="k8s-pod-network.afa5cd5eec551507a0709773f2a5e374adca62aee50d5616cbf8ff4611af448f" Workload="localhost-k8s-goldmane--5b85766d88--cd8ml-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000511b00), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-5b85766d88-cd8ml", "timestamp":"2026-03-12 01:37:56.394518648 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00024b080)} Mar 12 01:37:56.930611 containerd[1595]: 2026-03-12 01:37:56.412 [INFO][4277] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:37:56.930611 containerd[1595]: 2026-03-12 01:37:56.694 [INFO][4277] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 12 01:37:56.930611 containerd[1595]: 2026-03-12 01:37:56.694 [INFO][4277] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 12 01:37:56.930611 containerd[1595]: 2026-03-12 01:37:56.704 [INFO][4277] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.afa5cd5eec551507a0709773f2a5e374adca62aee50d5616cbf8ff4611af448f" host="localhost" Mar 12 01:37:56.930611 containerd[1595]: 2026-03-12 01:37:56.742 [INFO][4277] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 12 01:37:56.930611 containerd[1595]: 2026-03-12 01:37:56.762 [INFO][4277] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 12 01:37:56.930611 containerd[1595]: 2026-03-12 01:37:56.769 [INFO][4277] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 12 01:37:56.930611 containerd[1595]: 2026-03-12 01:37:56.774 [INFO][4277] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 12 01:37:56.930611 containerd[1595]: 2026-03-12 01:37:56.775 [INFO][4277] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.afa5cd5eec551507a0709773f2a5e374adca62aee50d5616cbf8ff4611af448f" host="localhost" Mar 12 01:37:56.930611 containerd[1595]: 2026-03-12 01:37:56.782 [INFO][4277] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.afa5cd5eec551507a0709773f2a5e374adca62aee50d5616cbf8ff4611af448f Mar 12 01:37:56.930611 containerd[1595]: 2026-03-12 01:37:56.833 [INFO][4277] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.afa5cd5eec551507a0709773f2a5e374adca62aee50d5616cbf8ff4611af448f" host="localhost" Mar 12 01:37:56.930611 containerd[1595]: 2026-03-12 01:37:56.851 [INFO][4277] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.afa5cd5eec551507a0709773f2a5e374adca62aee50d5616cbf8ff4611af448f" host="localhost" Mar 12 01:37:56.930611 containerd[1595]: 2026-03-12 01:37:56.859 [INFO][4277] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.afa5cd5eec551507a0709773f2a5e374adca62aee50d5616cbf8ff4611af448f" host="localhost" Mar 12 01:37:56.930611 containerd[1595]: 2026-03-12 01:37:56.860 [INFO][4277] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 12 01:37:56.930611 containerd[1595]: 2026-03-12 01:37:56.863 [INFO][4277] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="afa5cd5eec551507a0709773f2a5e374adca62aee50d5616cbf8ff4611af448f" HandleID="k8s-pod-network.afa5cd5eec551507a0709773f2a5e374adca62aee50d5616cbf8ff4611af448f" Workload="localhost-k8s-goldmane--5b85766d88--cd8ml-eth0" Mar 12 01:37:56.931322 containerd[1595]: 2026-03-12 01:37:56.888 [INFO][4248] cni-plugin/k8s.go 418: Populated endpoint ContainerID="afa5cd5eec551507a0709773f2a5e374adca62aee50d5616cbf8ff4611af448f" Namespace="calico-system" Pod="goldmane-5b85766d88-cd8ml" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--cd8ml-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--cd8ml-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"a12164ec-a3c1-4b91-bb08-d78e4edbc1ad", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 37, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-5b85766d88-cd8ml", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali64d5ee35c22", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:37:56.931322 containerd[1595]: 2026-03-12 01:37:56.891 [INFO][4248] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="afa5cd5eec551507a0709773f2a5e374adca62aee50d5616cbf8ff4611af448f" Namespace="calico-system" Pod="goldmane-5b85766d88-cd8ml" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--cd8ml-eth0" Mar 12 01:37:56.931322 containerd[1595]: 2026-03-12 01:37:56.892 [INFO][4248] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali64d5ee35c22 ContainerID="afa5cd5eec551507a0709773f2a5e374adca62aee50d5616cbf8ff4611af448f" Namespace="calico-system" Pod="goldmane-5b85766d88-cd8ml" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--cd8ml-eth0" Mar 12 01:37:56.931322 containerd[1595]: 2026-03-12 01:37:56.900 [INFO][4248] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="afa5cd5eec551507a0709773f2a5e374adca62aee50d5616cbf8ff4611af448f" Namespace="calico-system" Pod="goldmane-5b85766d88-cd8ml" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--cd8ml-eth0" Mar 12 01:37:56.931322 containerd[1595]: 2026-03-12 01:37:56.900 [INFO][4248] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="afa5cd5eec551507a0709773f2a5e374adca62aee50d5616cbf8ff4611af448f" Namespace="calico-system" Pod="goldmane-5b85766d88-cd8ml" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--cd8ml-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--cd8ml-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"a12164ec-a3c1-4b91-bb08-d78e4edbc1ad", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 37, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"afa5cd5eec551507a0709773f2a5e374adca62aee50d5616cbf8ff4611af448f", Pod:"goldmane-5b85766d88-cd8ml", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali64d5ee35c22", MAC:"d2:d7:7d:12:21:91", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:37:56.931322 containerd[1595]: 2026-03-12 01:37:56.916 [INFO][4248] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="afa5cd5eec551507a0709773f2a5e374adca62aee50d5616cbf8ff4611af448f" Namespace="calico-system" Pod="goldmane-5b85766d88-cd8ml" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--cd8ml-eth0" Mar 12 01:37:56.979777 containerd[1595]: time="2026-03-12T01:37:56.977295873Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:37:56.979777 containerd[1595]: time="2026-03-12T01:37:56.977361084Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:37:56.979777 containerd[1595]: time="2026-03-12T01:37:56.977385910Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:37:56.979777 containerd[1595]: time="2026-03-12T01:37:56.977496024Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:37:56.981552 systemd-resolved[1485]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 12 01:37:56.989930 kubelet[2697]: I0312 01:37:56.987976 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/bbb169a7-0ffc-434b-a791-a17fa6538015-nginx-config\") pod \"whisker-955555796-nzd2q\" (UID: \"bbb169a7-0ffc-434b-a791-a17fa6538015\") " pod="calico-system/whisker-955555796-nzd2q" Mar 12 01:37:56.989930 kubelet[2697]: I0312 01:37:56.988020 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bbb169a7-0ffc-434b-a791-a17fa6538015-whisker-backend-key-pair\") pod \"whisker-955555796-nzd2q\" (UID: \"bbb169a7-0ffc-434b-a791-a17fa6538015\") " pod="calico-system/whisker-955555796-nzd2q" Mar 12 01:37:56.989930 kubelet[2697]: I0312 01:37:56.988055 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bbb169a7-0ffc-434b-a791-a17fa6538015-whisker-ca-bundle\") pod \"whisker-955555796-nzd2q\" (UID: \"bbb169a7-0ffc-434b-a791-a17fa6538015\") " pod="calico-system/whisker-955555796-nzd2q" Mar 12 01:37:56.989930 kubelet[2697]: I0312 01:37:56.988071 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fglfm\" (UniqueName: \"kubernetes.io/projected/bbb169a7-0ffc-434b-a791-a17fa6538015-kube-api-access-fglfm\") pod \"whisker-955555796-nzd2q\" (UID: \"bbb169a7-0ffc-434b-a791-a17fa6538015\") " pod="calico-system/whisker-955555796-nzd2q" Mar 12 01:37:57.011805 systemd-networkd[1251]: calia53e439d173: Link UP Mar 12 01:37:57.013207 systemd-networkd[1251]: calia53e439d173: Gained carrier Mar 12 01:37:57.065270 containerd[1595]: 2026-03-12 01:37:56.330 [ERROR][4220] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 12 01:37:57.065270 containerd[1595]: 2026-03-12 01:37:56.349 [INFO][4220] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--qhnmx-eth0 coredns-674b8bbfcf- kube-system 0da1befa-e568-43f6-8333-a51d79629123 961 0 2026-03-12 01:37:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-qhnmx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia53e439d173 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="815738a3b7b7560cd9282f307fef08d03c238d2aa038d052fa20593fe843e637" Namespace="kube-system" Pod="coredns-674b8bbfcf-qhnmx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qhnmx-" Mar 12 01:37:57.065270 containerd[1595]: 2026-03-12 01:37:56.349 [INFO][4220] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="815738a3b7b7560cd9282f307fef08d03c238d2aa038d052fa20593fe843e637" Namespace="kube-system" Pod="coredns-674b8bbfcf-qhnmx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qhnmx-eth0" Mar 12 01:37:57.065270 containerd[1595]: 2026-03-12 01:37:56.426 [INFO][4293] 
ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="815738a3b7b7560cd9282f307fef08d03c238d2aa038d052fa20593fe843e637" HandleID="k8s-pod-network.815738a3b7b7560cd9282f307fef08d03c238d2aa038d052fa20593fe843e637" Workload="localhost-k8s-coredns--674b8bbfcf--qhnmx-eth0" Mar 12 01:37:57.065270 containerd[1595]: 2026-03-12 01:37:56.450 [INFO][4293] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="815738a3b7b7560cd9282f307fef08d03c238d2aa038d052fa20593fe843e637" HandleID="k8s-pod-network.815738a3b7b7560cd9282f307fef08d03c238d2aa038d052fa20593fe843e637" Workload="localhost-k8s-coredns--674b8bbfcf--qhnmx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004a3a50), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-qhnmx", "timestamp":"2026-03-12 01:37:56.426494612 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000473080)} Mar 12 01:37:57.065270 containerd[1595]: 2026-03-12 01:37:56.451 [INFO][4293] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:37:57.065270 containerd[1595]: 2026-03-12 01:37:56.878 [INFO][4293] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:37:57.065270 containerd[1595]: 2026-03-12 01:37:56.878 [INFO][4293] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 12 01:37:57.065270 containerd[1595]: 2026-03-12 01:37:56.888 [INFO][4293] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.815738a3b7b7560cd9282f307fef08d03c238d2aa038d052fa20593fe843e637" host="localhost" Mar 12 01:37:57.065270 containerd[1595]: 2026-03-12 01:37:56.903 [INFO][4293] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 12 01:37:57.065270 containerd[1595]: 2026-03-12 01:37:56.915 [INFO][4293] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 12 01:37:57.065270 containerd[1595]: 2026-03-12 01:37:56.923 [INFO][4293] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 12 01:37:57.065270 containerd[1595]: 2026-03-12 01:37:56.927 [INFO][4293] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 12 01:37:57.065270 containerd[1595]: 2026-03-12 01:37:56.927 [INFO][4293] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.815738a3b7b7560cd9282f307fef08d03c238d2aa038d052fa20593fe843e637" host="localhost" Mar 12 01:37:57.065270 containerd[1595]: 2026-03-12 01:37:56.932 [INFO][4293] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.815738a3b7b7560cd9282f307fef08d03c238d2aa038d052fa20593fe843e637 Mar 12 01:37:57.065270 containerd[1595]: 2026-03-12 01:37:56.951 [INFO][4293] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.815738a3b7b7560cd9282f307fef08d03c238d2aa038d052fa20593fe843e637" host="localhost" Mar 12 01:37:57.065270 containerd[1595]: 2026-03-12 01:37:56.987 [INFO][4293] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.815738a3b7b7560cd9282f307fef08d03c238d2aa038d052fa20593fe843e637" host="localhost" Mar 12 01:37:57.065270 containerd[1595]: 2026-03-12 01:37:56.987 [INFO][4293] 
ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.815738a3b7b7560cd9282f307fef08d03c238d2aa038d052fa20593fe843e637" host="localhost" Mar 12 01:37:57.065270 containerd[1595]: 2026-03-12 01:37:56.987 [INFO][4293] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:37:57.065270 containerd[1595]: 2026-03-12 01:37:56.987 [INFO][4293] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="815738a3b7b7560cd9282f307fef08d03c238d2aa038d052fa20593fe843e637" HandleID="k8s-pod-network.815738a3b7b7560cd9282f307fef08d03c238d2aa038d052fa20593fe843e637" Workload="localhost-k8s-coredns--674b8bbfcf--qhnmx-eth0" Mar 12 01:37:57.066099 containerd[1595]: 2026-03-12 01:37:57.001 [INFO][4220] cni-plugin/k8s.go 418: Populated endpoint ContainerID="815738a3b7b7560cd9282f307fef08d03c238d2aa038d052fa20593fe843e637" Namespace="kube-system" Pod="coredns-674b8bbfcf-qhnmx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qhnmx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--qhnmx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"0da1befa-e568-43f6-8333-a51d79629123", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 37, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-qhnmx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia53e439d173", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:37:57.066099 containerd[1595]: 2026-03-12 01:37:57.001 [INFO][4220] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="815738a3b7b7560cd9282f307fef08d03c238d2aa038d052fa20593fe843e637" Namespace="kube-system" Pod="coredns-674b8bbfcf-qhnmx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qhnmx-eth0" Mar 12 01:37:57.066099 containerd[1595]: 2026-03-12 01:37:57.001 [INFO][4220] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia53e439d173 ContainerID="815738a3b7b7560cd9282f307fef08d03c238d2aa038d052fa20593fe843e637" Namespace="kube-system" Pod="coredns-674b8bbfcf-qhnmx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qhnmx-eth0" Mar 12 01:37:57.066099 containerd[1595]: 2026-03-12 
01:37:57.018 [INFO][4220] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="815738a3b7b7560cd9282f307fef08d03c238d2aa038d052fa20593fe843e637" Namespace="kube-system" Pod="coredns-674b8bbfcf-qhnmx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qhnmx-eth0" Mar 12 01:37:57.066099 containerd[1595]: 2026-03-12 01:37:57.022 [INFO][4220] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="815738a3b7b7560cd9282f307fef08d03c238d2aa038d052fa20593fe843e637" Namespace="kube-system" Pod="coredns-674b8bbfcf-qhnmx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qhnmx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--qhnmx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"0da1befa-e568-43f6-8333-a51d79629123", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 37, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"815738a3b7b7560cd9282f307fef08d03c238d2aa038d052fa20593fe843e637", Pod:"coredns-674b8bbfcf-qhnmx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia53e439d173", MAC:"42:13:35:78:e2:7d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:37:57.066099 containerd[1595]: 2026-03-12 01:37:57.041 [INFO][4220] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="815738a3b7b7560cd9282f307fef08d03c238d2aa038d052fa20593fe843e637" Namespace="kube-system" Pod="coredns-674b8bbfcf-qhnmx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qhnmx-eth0" Mar 12 01:37:57.073338 containerd[1595]: time="2026-03-12T01:37:57.073110776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7hmdg,Uid:e561d6e9-adb9-4958-8e4a-34467004f252,Namespace:kube-system,Attempt:1,} returns sandbox id \"0f13d3d656a9533814ba70e386f9f443fb452db70ff0e5ce74ee01a7b474c7fa\"" Mar 12 01:37:57.074310 kubelet[2697]: E0312 01:37:57.074213 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:57.096371 containerd[1595]: time="2026-03-12T01:37:57.095700705Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:37:57.099155 containerd[1595]: time="2026-03-12T01:37:57.098767162Z" level=info msg="CreateContainer within sandbox \"0f13d3d656a9533814ba70e386f9f443fb452db70ff0e5ce74ee01a7b474c7fa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 12 01:37:57.105077 containerd[1595]: time="2026-03-12T01:37:57.104829173Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:37:57.105570 containerd[1595]: time="2026-03-12T01:37:57.105436591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:37:57.109174 containerd[1595]: time="2026-03-12T01:37:57.108610337Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:37:57.142454 containerd[1595]: time="2026-03-12T01:37:57.142348521Z" level=info msg="CreateContainer within sandbox \"0f13d3d656a9533814ba70e386f9f443fb452db70ff0e5ce74ee01a7b474c7fa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bf243d740d1f4c9e366a2dae61002129d3065ffda8038a62f040127fb4174c92\"" Mar 12 01:37:57.146940 containerd[1595]: time="2026-03-12T01:37:57.146842139Z" level=info msg="StartContainer for \"bf243d740d1f4c9e366a2dae61002129d3065ffda8038a62f040127fb4174c92\"" Mar 12 01:37:57.151402 systemd-resolved[1485]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 12 01:37:57.159854 containerd[1595]: time="2026-03-12T01:37:57.157935476Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:37:57.159854 containerd[1595]: time="2026-03-12T01:37:57.158458718Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:37:57.159854 containerd[1595]: time="2026-03-12T01:37:57.158487723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:37:57.159854 containerd[1595]: time="2026-03-12T01:37:57.158911169Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:37:57.182415 containerd[1595]: time="2026-03-12T01:37:57.182317985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-955555796-nzd2q,Uid:bbb169a7-0ffc-434b-a791-a17fa6538015,Namespace:calico-system,Attempt:0,}" Mar 12 01:37:57.198550 systemd-resolved[1485]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 12 01:37:57.236833 containerd[1595]: time="2026-03-12T01:37:57.235763801Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:37:57.238549 containerd[1595]: time="2026-03-12T01:37:57.238516345Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Mar 12 01:37:57.241712 containerd[1595]: time="2026-03-12T01:37:57.241576880Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:37:57.245504 systemd-resolved[1485]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 12 01:37:57.246111 containerd[1595]: time="2026-03-12T01:37:57.246031766Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:37:57.247976 containerd[1595]: time="2026-03-12T01:37:57.247474506Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.567680822s" Mar 12 01:37:57.247976 containerd[1595]: time="2026-03-12T01:37:57.247527263Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Mar 12 01:37:57.254016 containerd[1595]: time="2026-03-12T01:37:57.253767093Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 12 01:37:57.262293 containerd[1595]: time="2026-03-12T01:37:57.262214929Z" level=info msg="CreateContainer within sandbox \"6ff127c535ffeeb4992be774891ad4f72366b60767842feaad5e5a7f737c9bc4\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 12 01:37:57.267331 containerd[1595]: time="2026-03-12T01:37:57.267304230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c984f8d9-nrkt8,Uid:5fb257b0-27b4-4ccb-bab4-86fe3218bc99,Namespace:calico-system,Attempt:1,} returns sandbox id \"e8d638f2c78d492a1abc864504f7bd55e485063d146eeae251595847c926d407\"" Mar 12 01:37:57.279078 containerd[1595]: time="2026-03-12T01:37:57.278981679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qhnmx,Uid:0da1befa-e568-43f6-8333-a51d79629123,Namespace:kube-system,Attempt:1,} returns sandbox id \"815738a3b7b7560cd9282f307fef08d03c238d2aa038d052fa20593fe843e637\"" Mar 12 01:37:57.282324 kubelet[2697]: E0312 01:37:57.282213 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:57.289311 
containerd[1595]: time="2026-03-12T01:37:57.289271728Z" level=info msg="CreateContainer within sandbox \"815738a3b7b7560cd9282f307fef08d03c238d2aa038d052fa20593fe843e637\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 12 01:37:57.331700 containerd[1595]: time="2026-03-12T01:37:57.331497569Z" level=info msg="CreateContainer within sandbox \"815738a3b7b7560cd9282f307fef08d03c238d2aa038d052fa20593fe843e637\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3a5e93d219798d69ff583b665d2d38428361a5abd00ae2032c8ae2854996f8e7\"" Mar 12 01:37:57.333353 containerd[1595]: time="2026-03-12T01:37:57.332475215Z" level=info msg="StartContainer for \"3a5e93d219798d69ff583b665d2d38428361a5abd00ae2032c8ae2854996f8e7\"" Mar 12 01:37:57.333696 kernel: calico-node[4384]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 12 01:37:57.337817 containerd[1595]: time="2026-03-12T01:37:57.337780731Z" level=info msg="CreateContainer within sandbox \"6ff127c535ffeeb4992be774891ad4f72366b60767842feaad5e5a7f737c9bc4\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"f16bd44d3bdaac0b197ea6400f7d49c3b2a7770843ebed704cddfe984ea5a11b\"" Mar 12 01:37:57.338606 containerd[1595]: time="2026-03-12T01:37:57.338583582Z" level=info msg="StartContainer for \"f16bd44d3bdaac0b197ea6400f7d49c3b2a7770843ebed704cddfe984ea5a11b\"" Mar 12 01:37:57.386473 containerd[1595]: time="2026-03-12T01:37:57.385466367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-cd8ml,Uid:a12164ec-a3c1-4b91-bb08-d78e4edbc1ad,Namespace:calico-system,Attempt:1,} returns sandbox id \"afa5cd5eec551507a0709773f2a5e374adca62aee50d5616cbf8ff4611af448f\"" Mar 12 01:37:57.468308 containerd[1595]: time="2026-03-12T01:37:57.468190029Z" level=info msg="StartContainer for \"bf243d740d1f4c9e366a2dae61002129d3065ffda8038a62f040127fb4174c92\" returns successfully" Mar 12 01:37:57.562407 containerd[1595]: time="2026-03-12T01:37:57.562306423Z" level=info msg="StartContainer for \"3a5e93d219798d69ff583b665d2d38428361a5abd00ae2032c8ae2854996f8e7\" returns successfully" Mar 12 01:37:57.583996 systemd-networkd[1251]: cali0a03cf32e34: Gained IPv6LL Mar 12 01:37:57.649509 systemd-networkd[1251]: calib3caf480b1a: Gained IPv6LL Mar 12 01:37:57.690396 containerd[1595]: time="2026-03-12T01:37:57.690336123Z" level=info msg="StartContainer for \"f16bd44d3bdaac0b197ea6400f7d49c3b2a7770843ebed704cddfe984ea5a11b\" returns successfully" Mar 12 01:37:57.760057 systemd-networkd[1251]: calia137ed2d2d4: Link UP Mar 12 01:37:57.763000 systemd-networkd[1251]: calia137ed2d2d4: Gained carrier Mar 12 01:37:57.794298 containerd[1595]: 2026-03-12 01:37:57.365 [INFO][4742] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--955555796--nzd2q-eth0 whisker-955555796- calico-system bbb169a7-0ffc-434b-a791-a17fa6538015 1000 0 2026-03-12 01:37:56 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:955555796 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-955555796-nzd2q eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calia137ed2d2d4 [] [] }} ContainerID="ab0366eab60bea5e8e521df7e9a5932c42dcb5e26281a792dab77e957a8ddd12" Namespace="calico-system" Pod="whisker-955555796-nzd2q" WorkloadEndpoint="localhost-k8s-whisker--955555796--nzd2q-" Mar 12 01:37:57.794298 containerd[1595]: 
2026-03-12 01:37:57.365 [INFO][4742] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ab0366eab60bea5e8e521df7e9a5932c42dcb5e26281a792dab77e957a8ddd12" Namespace="calico-system" Pod="whisker-955555796-nzd2q" WorkloadEndpoint="localhost-k8s-whisker--955555796--nzd2q-eth0" Mar 12 01:37:57.794298 containerd[1595]: 2026-03-12 01:37:57.647 [INFO][4845] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ab0366eab60bea5e8e521df7e9a5932c42dcb5e26281a792dab77e957a8ddd12" HandleID="k8s-pod-network.ab0366eab60bea5e8e521df7e9a5932c42dcb5e26281a792dab77e957a8ddd12" Workload="localhost-k8s-whisker--955555796--nzd2q-eth0" Mar 12 01:37:57.794298 containerd[1595]: 2026-03-12 01:37:57.664 [INFO][4845] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="ab0366eab60bea5e8e521df7e9a5932c42dcb5e26281a792dab77e957a8ddd12" HandleID="k8s-pod-network.ab0366eab60bea5e8e521df7e9a5932c42dcb5e26281a792dab77e957a8ddd12" Workload="localhost-k8s-whisker--955555796--nzd2q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000276080), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-955555796-nzd2q", "timestamp":"2026-03-12 01:37:57.647132242 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00047d600)} Mar 12 01:37:57.794298 containerd[1595]: 2026-03-12 01:37:57.664 [INFO][4845] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:37:57.794298 containerd[1595]: 2026-03-12 01:37:57.664 [INFO][4845] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 12 01:37:57.794298 containerd[1595]: 2026-03-12 01:37:57.664 [INFO][4845] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 12 01:37:57.794298 containerd[1595]: 2026-03-12 01:37:57.673 [INFO][4845] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ab0366eab60bea5e8e521df7e9a5932c42dcb5e26281a792dab77e957a8ddd12" host="localhost" Mar 12 01:37:57.794298 containerd[1595]: 2026-03-12 01:37:57.682 [INFO][4845] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 12 01:37:57.794298 containerd[1595]: 2026-03-12 01:37:57.694 [INFO][4845] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 12 01:37:57.794298 containerd[1595]: 2026-03-12 01:37:57.703 [INFO][4845] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 12 01:37:57.794298 containerd[1595]: 2026-03-12 01:37:57.707 [INFO][4845] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 12 01:37:57.794298 containerd[1595]: 2026-03-12 01:37:57.708 [INFO][4845] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ab0366eab60bea5e8e521df7e9a5932c42dcb5e26281a792dab77e957a8ddd12" host="localhost" Mar 12 01:37:57.794298 containerd[1595]: 2026-03-12 01:37:57.710 [INFO][4845] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ab0366eab60bea5e8e521df7e9a5932c42dcb5e26281a792dab77e957a8ddd12 Mar 12 01:37:57.794298 containerd[1595]: 2026-03-12 01:37:57.719 [INFO][4845] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ab0366eab60bea5e8e521df7e9a5932c42dcb5e26281a792dab77e957a8ddd12" host="localhost" Mar 12 01:37:57.794298 containerd[1595]: 2026-03-12 01:37:57.736 [INFO][4845] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.ab0366eab60bea5e8e521df7e9a5932c42dcb5e26281a792dab77e957a8ddd12" host="localhost" Mar 12 01:37:57.794298 containerd[1595]: 2026-03-12 01:37:57.737 [INFO][4845] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.ab0366eab60bea5e8e521df7e9a5932c42dcb5e26281a792dab77e957a8ddd12" host="localhost" Mar 12 01:37:57.794298 containerd[1595]: 2026-03-12 01:37:57.738 [INFO][4845] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 12 01:37:57.794298 containerd[1595]: 2026-03-12 01:37:57.738 [INFO][4845] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="ab0366eab60bea5e8e521df7e9a5932c42dcb5e26281a792dab77e957a8ddd12" HandleID="k8s-pod-network.ab0366eab60bea5e8e521df7e9a5932c42dcb5e26281a792dab77e957a8ddd12" Workload="localhost-k8s-whisker--955555796--nzd2q-eth0" Mar 12 01:37:57.795939 containerd[1595]: 2026-03-12 01:37:57.753 [INFO][4742] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ab0366eab60bea5e8e521df7e9a5932c42dcb5e26281a792dab77e957a8ddd12" Namespace="calico-system" Pod="whisker-955555796-nzd2q" WorkloadEndpoint="localhost-k8s-whisker--955555796--nzd2q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--955555796--nzd2q-eth0", GenerateName:"whisker-955555796-", Namespace:"calico-system", SelfLink:"", UID:"bbb169a7-0ffc-434b-a791-a17fa6538015", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 37, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"955555796", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-955555796-nzd2q", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia137ed2d2d4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:37:57.795939 containerd[1595]: 2026-03-12 01:37:57.755 [INFO][4742] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="ab0366eab60bea5e8e521df7e9a5932c42dcb5e26281a792dab77e957a8ddd12" Namespace="calico-system" Pod="whisker-955555796-nzd2q" WorkloadEndpoint="localhost-k8s-whisker--955555796--nzd2q-eth0" Mar 12 01:37:57.795939 containerd[1595]: 2026-03-12 01:37:57.755 [INFO][4742] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia137ed2d2d4 ContainerID="ab0366eab60bea5e8e521df7e9a5932c42dcb5e26281a792dab77e957a8ddd12" Namespace="calico-system" Pod="whisker-955555796-nzd2q" WorkloadEndpoint="localhost-k8s-whisker--955555796--nzd2q-eth0" Mar 12 01:37:57.795939 containerd[1595]: 2026-03-12 01:37:57.762 [INFO][4742] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ab0366eab60bea5e8e521df7e9a5932c42dcb5e26281a792dab77e957a8ddd12" Namespace="calico-system" Pod="whisker-955555796-nzd2q" WorkloadEndpoint="localhost-k8s-whisker--955555796--nzd2q-eth0" Mar 12 01:37:57.795939 containerd[1595]: 2026-03-12 01:37:57.762 [INFO][4742] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ab0366eab60bea5e8e521df7e9a5932c42dcb5e26281a792dab77e957a8ddd12" Namespace="calico-system" Pod="whisker-955555796-nzd2q" WorkloadEndpoint="localhost-k8s-whisker--955555796--nzd2q-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--955555796--nzd2q-eth0", GenerateName:"whisker-955555796-", Namespace:"calico-system", SelfLink:"", UID:"bbb169a7-0ffc-434b-a791-a17fa6538015", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 37, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"955555796", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ab0366eab60bea5e8e521df7e9a5932c42dcb5e26281a792dab77e957a8ddd12", Pod:"whisker-955555796-nzd2q", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia137ed2d2d4", MAC:"4a:10:76:37:94:51", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:37:57.795939 containerd[1595]: 2026-03-12 01:37:57.787 [INFO][4742] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ab0366eab60bea5e8e521df7e9a5932c42dcb5e26281a792dab77e957a8ddd12" Namespace="calico-system" Pod="whisker-955555796-nzd2q" WorkloadEndpoint="localhost-k8s-whisker--955555796--nzd2q-eth0" Mar 12 01:37:57.815583 kubelet[2697]: E0312 01:37:57.815543 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:57.831726 kubelet[2697]: E0312 01:37:57.830846 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:57.871799 kubelet[2697]: I0312 01:37:57.871529 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-qhnmx" podStartSLOduration=28.87150807 podStartE2EDuration="28.87150807s" podCreationTimestamp="2026-03-12 01:37:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 01:37:57.847139703 +0000 UTC m=+35.619604343" watchObservedRunningTime="2026-03-12 01:37:57.87150807 +0000 UTC m=+35.643972711" Mar 12 01:37:57.892802 containerd[1595]: time="2026-03-12T01:37:57.889281183Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:37:57.892802 containerd[1595]: time="2026-03-12T01:37:57.889363487Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:37:57.892802 containerd[1595]: time="2026-03-12T01:37:57.889392520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:37:57.892802 containerd[1595]: time="2026-03-12T01:37:57.889542760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:37:57.911921 kubelet[2697]: I0312 01:37:57.911534 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-7hmdg" podStartSLOduration=28.911506414 podStartE2EDuration="28.911506414s" podCreationTimestamp="2026-03-12 01:37:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 01:37:57.873773148 +0000 UTC m=+35.646237808" watchObservedRunningTime="2026-03-12 01:37:57.911506414 +0000 UTC m=+35.683971054" Mar 12 01:37:57.967990 systemd-networkd[1251]: calidf97804c3ce: Gained IPv6LL Mar 12 01:37:58.009730 systemd-resolved[1485]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 12 01:37:58.064492 containerd[1595]: time="2026-03-12T01:37:58.064429970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-955555796-nzd2q,Uid:bbb169a7-0ffc-434b-a791-a17fa6538015,Namespace:calico-system,Attempt:0,} returns sandbox id \"ab0366eab60bea5e8e521df7e9a5932c42dcb5e26281a792dab77e957a8ddd12\"" Mar 12 01:37:58.293075 systemd-journald[1177]: Under memory pressure, flushing caches. Mar 12 01:37:58.287787 systemd-resolved[1485]: Under memory pressure, flushing caches. Mar 12 01:37:58.287843 systemd-resolved[1485]: Flushed all caches. Mar 12 01:37:58.311583 systemd-networkd[1251]: vxlan.calico: Link UP Mar 12 01:37:58.312108 systemd-networkd[1251]: vxlan.calico: Gained carrier Mar 12 01:37:58.368673 kubelet[2697]: I0312 01:37:58.368535 2697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="754c305b-d35b-428a-a925-3c62be46c832" path="/var/lib/kubelet/pods/754c305b-d35b-428a-a925-3c62be46c832/volumes" Mar 12 01:37:58.736923 systemd-networkd[1251]: cali64d5ee35c22: Gained IPv6LL Mar 12 01:37:58.737945 systemd-networkd[1251]: cali410349c6c43: Gained IPv6LL Mar 12 01:37:58.832665 kubelet[2697]: E0312 01:37:58.832575 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:58.833375 kubelet[2697]: E0312 01:37:58.833333 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:58.866038 systemd-networkd[1251]: calia53e439d173: Gained IPv6LL Mar 12 01:37:59.220575 containerd[1595]: time="2026-03-12T01:37:59.220429602Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:37:59.225676 containerd[1595]: time="2026-03-12T01:37:59.223450769Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Mar 12 01:37:59.226356 containerd[1595]: time="2026-03-12T01:37:59.226238549Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:37:59.232817 containerd[1595]: time="2026-03-12T01:37:59.232717294Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:37:59.233896 containerd[1595]: time="2026-03-12T01:37:59.233824501Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 1.979856314s" Mar 12 01:37:59.233995 containerd[1595]: time="2026-03-12T01:37:59.233901984Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 12 01:37:59.245818 containerd[1595]: time="2026-03-12T01:37:59.245199072Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 12 01:37:59.253080 containerd[1595]: time="2026-03-12T01:37:59.252292406Z" level=info msg="CreateContainer within sandbox \"f6881f4553b64b36d040d4ad8f6cc79c405b6396ff144e8013dce134e590f1dc\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 12 01:37:59.276450 containerd[1595]: time="2026-03-12T01:37:59.276374757Z" level=info msg="CreateContainer within sandbox \"f6881f4553b64b36d040d4ad8f6cc79c405b6396ff144e8013dce134e590f1dc\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8ed957783992b72c7b83e583f8d1ebf23bb49005d3a06170d5fb1c243f07460a\"" Mar 12 01:37:59.277456 containerd[1595]: time="2026-03-12T01:37:59.277421694Z" level=info msg="StartContainer for \"8ed957783992b72c7b83e583f8d1ebf23bb49005d3a06170d5fb1c243f07460a\"" Mar 12 01:37:59.321974 systemd[1]: run-containerd-runc-k8s.io-8ed957783992b72c7b83e583f8d1ebf23bb49005d3a06170d5fb1c243f07460a-runc.xGvVG7.mount: Deactivated successfully. 
Mar 12 01:37:59.361200 containerd[1595]: time="2026-03-12T01:37:59.360985813Z" level=info msg="StartContainer for \"8ed957783992b72c7b83e583f8d1ebf23bb49005d3a06170d5fb1c243f07460a\" returns successfully" Mar 12 01:37:59.378612 containerd[1595]: time="2026-03-12T01:37:59.378490392Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:37:59.379767 containerd[1595]: time="2026-03-12T01:37:59.379670093Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 12 01:37:59.382327 containerd[1595]: time="2026-03-12T01:37:59.382203234Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 136.961543ms" Mar 12 01:37:59.382327 containerd[1595]: time="2026-03-12T01:37:59.382287811Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 12 01:37:59.385340 containerd[1595]: time="2026-03-12T01:37:59.385310587Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 12 01:37:59.390665 containerd[1595]: time="2026-03-12T01:37:59.390585205Z" level=info msg="CreateContainer within sandbox \"03777379304f64d3259c36303fd1bb877de63b08af7833a07253ec06f3508ceb\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 12 01:37:59.414332 containerd[1595]: time="2026-03-12T01:37:59.414243080Z" level=info msg="CreateContainer within sandbox \"03777379304f64d3259c36303fd1bb877de63b08af7833a07253ec06f3508ceb\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"639362a05500a48362269a6117764acf7371fd134297895dfd7011a242d4252e\"" Mar 12 01:37:59.417190 containerd[1595]: time="2026-03-12T01:37:59.417029971Z" level=info msg="StartContainer for \"639362a05500a48362269a6117764acf7371fd134297895dfd7011a242d4252e\"" Mar 12 01:37:59.443720 systemd-networkd[1251]: calia137ed2d2d4: Gained IPv6LL Mar 12 01:37:59.492118 containerd[1595]: time="2026-03-12T01:37:59.491828911Z" level=info msg="StartContainer for \"639362a05500a48362269a6117764acf7371fd134297895dfd7011a242d4252e\" returns successfully" Mar 12 01:37:59.863984 kubelet[2697]: E0312 01:37:59.860765 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:59.863984 kubelet[2697]: E0312 01:37:59.863003 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:59.906130 kubelet[2697]: I0312 01:37:59.904520 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-d5bdf55d9-hgfnq" podStartSLOduration=15.461222253 podStartE2EDuration="17.904410215s" podCreationTimestamp="2026-03-12 01:37:42 +0000 UTC" firstStartedPulling="2026-03-12 01:37:56.940131686 +0000 UTC m=+34.712596325" lastFinishedPulling="2026-03-12 01:37:59.383319637 +0000 UTC m=+37.155784287" observedRunningTime="2026-03-12 01:37:59.882447364 +0000 UTC 
m=+37.654912004" watchObservedRunningTime="2026-03-12 01:37:59.904410215 +0000 UTC m=+37.676874865" Mar 12 01:37:59.922081 kubelet[2697]: I0312 01:37:59.921973 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-d5bdf55d9-zlq7p" podStartSLOduration=15.5253795 podStartE2EDuration="17.921958193s" podCreationTimestamp="2026-03-12 01:37:42 +0000 UTC" firstStartedPulling="2026-03-12 01:37:56.843223937 +0000 UTC m=+34.615688577" lastFinishedPulling="2026-03-12 01:37:59.23980262 +0000 UTC m=+37.012267270" observedRunningTime="2026-03-12 01:37:59.914157372 +0000 UTC m=+37.686622012" watchObservedRunningTime="2026-03-12 01:37:59.921958193 +0000 UTC m=+37.694422833" Mar 12 01:38:00.207988 systemd-networkd[1251]: vxlan.calico: Gained IPv6LL Mar 12 01:38:00.896992 kubelet[2697]: I0312 01:38:00.896906 2697 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 01:38:00.897728 kubelet[2697]: I0312 01:38:00.897138 2697 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 01:38:00.898976 kubelet[2697]: E0312 01:38:00.898448 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:01.215965 containerd[1595]: time="2026-03-12T01:38:01.215765741Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:01.217586 containerd[1595]: time="2026-03-12T01:38:01.217480870Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Mar 12 01:38:01.218484 containerd[1595]: time="2026-03-12T01:38:01.218414236Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:01.223047 containerd[1595]: time="2026-03-12T01:38:01.222988807Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:01.223730 containerd[1595]: time="2026-03-12T01:38:01.223684631Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 1.837690263s" Mar 12 01:38:01.223782 containerd[1595]: time="2026-03-12T01:38:01.223738762Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Mar 12 01:38:01.227104 containerd[1595]: time="2026-03-12T01:38:01.226928071Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 12 01:38:01.247041 containerd[1595]: time="2026-03-12T01:38:01.247009213Z" level=info msg="CreateContainer within sandbox \"e8d638f2c78d492a1abc864504f7bd55e485063d146eeae251595847c926d407\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 12 01:38:01.285030 containerd[1595]: time="2026-03-12T01:38:01.284953103Z" level=info 
msg="CreateContainer within sandbox \"e8d638f2c78d492a1abc864504f7bd55e485063d146eeae251595847c926d407\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"10dc1aa87d3e46aba8c249f479303df91f5bff85128395c3deed88c9affc7d88\"" Mar 12 01:38:01.285729 containerd[1595]: time="2026-03-12T01:38:01.285615899Z" level=info msg="StartContainer for \"10dc1aa87d3e46aba8c249f479303df91f5bff85128395c3deed88c9affc7d88\"" Mar 12 01:38:01.319315 systemd[1]: run-containerd-runc-k8s.io-10dc1aa87d3e46aba8c249f479303df91f5bff85128395c3deed88c9affc7d88-runc.0QjXly.mount: Deactivated successfully. Mar 12 01:38:01.378587 containerd[1595]: time="2026-03-12T01:38:01.378447975Z" level=info msg="StartContainer for \"10dc1aa87d3e46aba8c249f479303df91f5bff85128395c3deed88c9affc7d88\" returns successfully" Mar 12 01:38:02.009380 kubelet[2697]: I0312 01:38:02.009247 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6c984f8d9-nrkt8" podStartSLOduration=15.054267407 podStartE2EDuration="19.009228455s" podCreationTimestamp="2026-03-12 01:37:43 +0000 UTC" firstStartedPulling="2026-03-12 01:37:57.270026577 +0000 UTC m=+35.042491218" lastFinishedPulling="2026-03-12 01:38:01.224987615 +0000 UTC m=+38.997452266" observedRunningTime="2026-03-12 01:38:01.91625899 +0000 UTC m=+39.688723640" watchObservedRunningTime="2026-03-12 01:38:02.009228455 +0000 UTC m=+39.781693115" Mar 12 01:38:02.279540 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount109251847.mount: Deactivated successfully. Mar 12 01:38:02.692183 containerd[1595]: time="2026-03-12T01:38:02.691985472Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:02.693493 containerd[1595]: time="2026-03-12T01:38:02.693436556Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Mar 12 01:38:02.694992 containerd[1595]: time="2026-03-12T01:38:02.694945632Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:02.697714 containerd[1595]: time="2026-03-12T01:38:02.697682069Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:02.698402 containerd[1595]: time="2026-03-12T01:38:02.698341006Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 1.471383942s" Mar 12 01:38:02.698402 containerd[1595]: time="2026-03-12T01:38:02.698386732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Mar 12 01:38:02.699453 containerd[1595]: time="2026-03-12T01:38:02.699386331Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 12 01:38:02.711791 containerd[1595]: time="2026-03-12T01:38:02.711711910Z" level=info msg="CreateContainer within sandbox 
\"afa5cd5eec551507a0709773f2a5e374adca62aee50d5616cbf8ff4611af448f\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 12 01:38:02.732028 containerd[1595]: time="2026-03-12T01:38:02.731967659Z" level=info msg="CreateContainer within sandbox \"afa5cd5eec551507a0709773f2a5e374adca62aee50d5616cbf8ff4611af448f\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"53f26bccfbcad1a9ab2456f14534b9535b2632084208c090eee5d728b82e8e98\"" Mar 12 01:38:02.734618 containerd[1595]: time="2026-03-12T01:38:02.734031889Z" level=info msg="StartContainer for \"53f26bccfbcad1a9ab2456f14534b9535b2632084208c090eee5d728b82e8e98\"" Mar 12 01:38:02.841037 containerd[1595]: time="2026-03-12T01:38:02.840940420Z" level=info msg="StartContainer for \"53f26bccfbcad1a9ab2456f14534b9535b2632084208c090eee5d728b82e8e98\" returns successfully" Mar 12 01:38:02.933946 kubelet[2697]: I0312 01:38:02.933845 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-cd8ml" podStartSLOduration=15.626636569 podStartE2EDuration="20.933831032s" podCreationTimestamp="2026-03-12 01:37:42 +0000 UTC" firstStartedPulling="2026-03-12 01:37:57.39200279 +0000 UTC m=+35.164467430" lastFinishedPulling="2026-03-12 01:38:02.699197252 +0000 UTC m=+40.471661893" observedRunningTime="2026-03-12 01:38:02.933518992 +0000 UTC m=+40.705983732" watchObservedRunningTime="2026-03-12 01:38:02.933831032 +0000 UTC m=+40.706295673" Mar 12 01:38:03.293224 containerd[1595]: time="2026-03-12T01:38:03.293131208Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:03.294194 containerd[1595]: time="2026-03-12T01:38:03.294082020Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Mar 12 01:38:03.295296 containerd[1595]: time="2026-03-12T01:38:03.295232129Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:03.298033 containerd[1595]: time="2026-03-12T01:38:03.297970052Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:03.299065 containerd[1595]: time="2026-03-12T01:38:03.299001611Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 599.587117ms" Mar 12 01:38:03.299065 containerd[1595]: time="2026-03-12T01:38:03.299041105Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Mar 12 01:38:03.300174 containerd[1595]: time="2026-03-12T01:38:03.300142058Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 12 01:38:03.303927 containerd[1595]: time="2026-03-12T01:38:03.303880871Z" level=info msg="CreateContainer within sandbox 
\"6ff127c535ffeeb4992be774891ad4f72366b60767842feaad5e5a7f737c9bc4\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 12 01:38:03.327022 containerd[1595]: time="2026-03-12T01:38:03.326946151Z" level=info msg="CreateContainer within sandbox \"6ff127c535ffeeb4992be774891ad4f72366b60767842feaad5e5a7f737c9bc4\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"3ae2cabd21aa5beecc7dcb8909c7839d530a5c229a508698aef821ef08659303\"" Mar 12 01:38:03.327793 containerd[1595]: time="2026-03-12T01:38:03.327752898Z" level=info msg="StartContainer for \"3ae2cabd21aa5beecc7dcb8909c7839d530a5c229a508698aef821ef08659303\"" Mar 12 01:38:03.406866 containerd[1595]: time="2026-03-12T01:38:03.406741357Z" level=info msg="StartContainer for \"3ae2cabd21aa5beecc7dcb8909c7839d530a5c229a508698aef821ef08659303\" returns successfully" Mar 12 01:38:03.559008 kubelet[2697]: I0312 01:38:03.558869 2697 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 12 01:38:03.562582 kubelet[2697]: I0312 01:38:03.562563 2697 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 12 01:38:03.916020 containerd[1595]: time="2026-03-12T01:38:03.915765908Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:03.917052 containerd[1595]: time="2026-03-12T01:38:03.916893090Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Mar 12 01:38:03.918504 containerd[1595]: time="2026-03-12T01:38:03.918431746Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:03.921012 containerd[1595]: time="2026-03-12T01:38:03.920942733Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:03.921839 containerd[1595]: time="2026-03-12T01:38:03.921586777Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 621.403763ms" Mar 12 01:38:03.921839 containerd[1595]: time="2026-03-12T01:38:03.921611042Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Mar 12 01:38:03.929153 containerd[1595]: time="2026-03-12T01:38:03.929067797Z" level=info msg="CreateContainer within sandbox \"ab0366eab60bea5e8e521df7e9a5932c42dcb5e26281a792dab77e957a8ddd12\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 12 01:38:03.962039 containerd[1595]: time="2026-03-12T01:38:03.961950624Z" level=info msg="CreateContainer within sandbox \"ab0366eab60bea5e8e521df7e9a5932c42dcb5e26281a792dab77e957a8ddd12\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id 
\"29773c1bbc3d22a3d06e35228a1df8e71e9bfee65fa44544780ba14540769267\"" Mar 12 01:38:03.963711 containerd[1595]: time="2026-03-12T01:38:03.962686065Z" level=info msg="StartContainer for \"29773c1bbc3d22a3d06e35228a1df8e71e9bfee65fa44544780ba14540769267\"" Mar 12 01:38:04.056133 containerd[1595]: time="2026-03-12T01:38:04.056066667Z" level=info msg="StartContainer for \"29773c1bbc3d22a3d06e35228a1df8e71e9bfee65fa44544780ba14540769267\" returns successfully" Mar 12 01:38:04.057706 containerd[1595]: time="2026-03-12T01:38:04.057584595Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 12 01:38:04.239909 systemd-resolved[1485]: Under memory pressure, flushing caches. Mar 12 01:38:04.243009 systemd-journald[1177]: Under memory pressure, flushing caches. Mar 12 01:38:04.239947 systemd-resolved[1485]: Flushed all caches. Mar 12 01:38:04.821248 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4014031436.mount: Deactivated successfully. Mar 12 01:38:04.983987 containerd[1595]: time="2026-03-12T01:38:04.983586930Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:04.984999 containerd[1595]: time="2026-03-12T01:38:04.984915339Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Mar 12 01:38:04.986652 containerd[1595]: time="2026-03-12T01:38:04.986568774Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:04.989285 containerd[1595]: time="2026-03-12T01:38:04.989187028Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:04.990503 containerd[1595]: time="2026-03-12T01:38:04.990444350Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 932.646097ms" Mar 12 01:38:04.990550 containerd[1595]: time="2026-03-12T01:38:04.990501096Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Mar 12 01:38:04.998203 containerd[1595]: time="2026-03-12T01:38:04.998141098Z" level=info msg="CreateContainer within sandbox \"ab0366eab60bea5e8e521df7e9a5932c42dcb5e26281a792dab77e957a8ddd12\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 12 01:38:05.053339 containerd[1595]: time="2026-03-12T01:38:05.053251458Z" level=info msg="CreateContainer within sandbox \"ab0366eab60bea5e8e521df7e9a5932c42dcb5e26281a792dab77e957a8ddd12\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"be8dfb58760094ad4d6bab1a6396e9e82f035e286e3fe2e3e0977cac25349f3a\"" Mar 12 01:38:05.054175 containerd[1595]: time="2026-03-12T01:38:05.054144365Z" level=info msg="StartContainer for \"be8dfb58760094ad4d6bab1a6396e9e82f035e286e3fe2e3e0977cac25349f3a\"" Mar 12 01:38:05.160199 containerd[1595]: 
time="2026-03-12T01:38:05.160028565Z" level=info msg="StartContainer for \"be8dfb58760094ad4d6bab1a6396e9e82f035e286e3fe2e3e0977cac25349f3a\" returns successfully" Mar 12 01:38:05.954580 kubelet[2697]: I0312 01:38:05.954493 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-84r2m" podStartSLOduration=15.331311343 podStartE2EDuration="22.954474058s" podCreationTimestamp="2026-03-12 01:37:43 +0000 UTC" firstStartedPulling="2026-03-12 01:37:55.676797993 +0000 UTC m=+33.449262633" lastFinishedPulling="2026-03-12 01:38:03.299960708 +0000 UTC m=+41.072425348" observedRunningTime="2026-03-12 01:38:03.944761379 +0000 UTC m=+41.717226029" watchObservedRunningTime="2026-03-12 01:38:05.954474058 +0000 UTC m=+43.726938699" Mar 12 01:38:05.955264 kubelet[2697]: I0312 01:38:05.954961 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-955555796-nzd2q" podStartSLOduration=3.028806838 podStartE2EDuration="9.954951837s" podCreationTimestamp="2026-03-12 01:37:56 +0000 UTC" firstStartedPulling="2026-03-12 01:37:58.065935093 +0000 UTC m=+35.838399733" lastFinishedPulling="2026-03-12 01:38:04.992080092 +0000 UTC m=+42.764544732" observedRunningTime="2026-03-12 01:38:05.952523427 +0000 UTC m=+43.724988067" watchObservedRunningTime="2026-03-12 01:38:05.954951837 +0000 UTC m=+43.727416478" Mar 12 01:38:06.737048 systemd[1]: Started sshd@7-10.0.0.145:22-10.0.0.1:60080.service - OpenSSH per-connection server daemon (10.0.0.1:60080). Mar 12 01:38:06.803693 sshd[5481]: Accepted publickey for core from 10.0.0.1 port 60080 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:38:06.806496 sshd[5481]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:38:06.814377 systemd-logind[1574]: New session 8 of user core. Mar 12 01:38:06.820010 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 12 01:38:07.250166 sshd[5481]: pam_unix(sshd:session): session closed for user core Mar 12 01:38:07.254374 systemd[1]: sshd@7-10.0.0.145:22-10.0.0.1:60080.service: Deactivated successfully. Mar 12 01:38:07.257136 systemd[1]: session-8.scope: Deactivated successfully. Mar 12 01:38:07.257266 systemd-logind[1574]: Session 8 logged out. Waiting for processes to exit. Mar 12 01:38:07.259058 systemd-logind[1574]: Removed session 8. Mar 12 01:38:12.260365 systemd[1]: Started sshd@8-10.0.0.145:22-10.0.0.1:43194.service - OpenSSH per-connection server daemon (10.0.0.1:43194). Mar 12 01:38:12.316907 sshd[5531]: Accepted publickey for core from 10.0.0.1 port 43194 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:38:12.318982 sshd[5531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:38:12.323948 systemd-logind[1574]: New session 9 of user core. Mar 12 01:38:12.338968 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 12 01:38:12.473152 sshd[5531]: pam_unix(sshd:session): session closed for user core Mar 12 01:38:12.478056 systemd[1]: sshd@8-10.0.0.145:22-10.0.0.1:43194.service: Deactivated successfully. Mar 12 01:38:12.480536 systemd-logind[1574]: Session 9 logged out. Waiting for processes to exit. Mar 12 01:38:12.480599 systemd[1]: session-9.scope: Deactivated successfully. Mar 12 01:38:12.481915 systemd-logind[1574]: Removed session 9. 
Mar 12 01:38:16.724120 kubelet[2697]: I0312 01:38:16.723934 2697 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 01:38:17.483910 systemd[1]: Started sshd@9-10.0.0.145:22-10.0.0.1:43198.service - OpenSSH per-connection server daemon (10.0.0.1:43198). Mar 12 01:38:17.532025 sshd[5554]: Accepted publickey for core from 10.0.0.1 port 43198 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:38:17.533814 sshd[5554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:38:17.538193 systemd-logind[1574]: New session 10 of user core. Mar 12 01:38:17.552116 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 12 01:38:17.705847 sshd[5554]: pam_unix(sshd:session): session closed for user core Mar 12 01:38:17.710573 systemd[1]: sshd@9-10.0.0.145:22-10.0.0.1:43198.service: Deactivated successfully. Mar 12 01:38:17.713150 systemd-logind[1574]: Session 10 logged out. Waiting for processes to exit. Mar 12 01:38:17.713161 systemd[1]: session-10.scope: Deactivated successfully. Mar 12 01:38:17.714584 systemd-logind[1574]: Removed session 10. Mar 12 01:38:22.343689 containerd[1595]: time="2026-03-12T01:38:22.341903293Z" level=info msg="StopPodSandbox for \"54a61a32678590b691211cd51c6ad1a906f74fbde00b221b6fbefe7ac37b5019\"" Mar 12 01:38:22.509421 containerd[1595]: 2026-03-12 01:38:22.418 [WARNING][5616] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="54a61a32678590b691211cd51c6ad1a906f74fbde00b221b6fbefe7ac37b5019" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--7hmdg-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e561d6e9-adb9-4958-8e4a-34467004f252", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 37, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0f13d3d656a9533814ba70e386f9f443fb452db70ff0e5ce74ee01a7b474c7fa", Pod:"coredns-674b8bbfcf-7hmdg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidf97804c3ce", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:38:22.509421 containerd[1595]: 2026-03-12 01:38:22.419 [INFO][5616] cni-plugin/k8s.go 652: Cleaning up netns 
ContainerID="54a61a32678590b691211cd51c6ad1a906f74fbde00b221b6fbefe7ac37b5019" Mar 12 01:38:22.509421 containerd[1595]: 2026-03-12 01:38:22.419 [INFO][5616] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="54a61a32678590b691211cd51c6ad1a906f74fbde00b221b6fbefe7ac37b5019" iface="eth0" netns="" Mar 12 01:38:22.509421 containerd[1595]: 2026-03-12 01:38:22.419 [INFO][5616] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="54a61a32678590b691211cd51c6ad1a906f74fbde00b221b6fbefe7ac37b5019" Mar 12 01:38:22.509421 containerd[1595]: 2026-03-12 01:38:22.419 [INFO][5616] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="54a61a32678590b691211cd51c6ad1a906f74fbde00b221b6fbefe7ac37b5019" Mar 12 01:38:22.509421 containerd[1595]: 2026-03-12 01:38:22.483 [INFO][5627] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="54a61a32678590b691211cd51c6ad1a906f74fbde00b221b6fbefe7ac37b5019" HandleID="k8s-pod-network.54a61a32678590b691211cd51c6ad1a906f74fbde00b221b6fbefe7ac37b5019" Workload="localhost-k8s-coredns--674b8bbfcf--7hmdg-eth0" Mar 12 01:38:22.509421 containerd[1595]: 2026-03-12 01:38:22.486 [INFO][5627] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:38:22.509421 containerd[1595]: 2026-03-12 01:38:22.486 [INFO][5627] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:38:22.509421 containerd[1595]: 2026-03-12 01:38:22.497 [WARNING][5627] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="54a61a32678590b691211cd51c6ad1a906f74fbde00b221b6fbefe7ac37b5019" HandleID="k8s-pod-network.54a61a32678590b691211cd51c6ad1a906f74fbde00b221b6fbefe7ac37b5019" Workload="localhost-k8s-coredns--674b8bbfcf--7hmdg-eth0" Mar 12 01:38:22.509421 containerd[1595]: 2026-03-12 01:38:22.497 [INFO][5627] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="54a61a32678590b691211cd51c6ad1a906f74fbde00b221b6fbefe7ac37b5019" HandleID="k8s-pod-network.54a61a32678590b691211cd51c6ad1a906f74fbde00b221b6fbefe7ac37b5019" Workload="localhost-k8s-coredns--674b8bbfcf--7hmdg-eth0" Mar 12 01:38:22.509421 containerd[1595]: 2026-03-12 01:38:22.498 [INFO][5627] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:38:22.509421 containerd[1595]: 2026-03-12 01:38:22.503 [INFO][5616] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="54a61a32678590b691211cd51c6ad1a906f74fbde00b221b6fbefe7ac37b5019" Mar 12 01:38:22.509421 containerd[1595]: time="2026-03-12T01:38:22.509410173Z" level=info msg="TearDown network for sandbox \"54a61a32678590b691211cd51c6ad1a906f74fbde00b221b6fbefe7ac37b5019\" successfully" Mar 12 01:38:22.515132 containerd[1595]: time="2026-03-12T01:38:22.515033654Z" level=info msg="StopPodSandbox for \"54a61a32678590b691211cd51c6ad1a906f74fbde00b221b6fbefe7ac37b5019\" returns successfully" Mar 12 01:38:22.554061 kubelet[2697]: I0312 01:38:22.553961 2697 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 01:38:22.555324 containerd[1595]: time="2026-03-12T01:38:22.555230646Z" level=info msg="RemovePodSandbox for \"54a61a32678590b691211cd51c6ad1a906f74fbde00b221b6fbefe7ac37b5019\"" Mar 12 01:38:22.559405 containerd[1595]: time="2026-03-12T01:38:22.559340516Z" level=info msg="Forcibly stopping sandbox \"54a61a32678590b691211cd51c6ad1a906f74fbde00b221b6fbefe7ac37b5019\"" Mar 12 01:38:22.662187 containerd[1595]: 2026-03-12 01:38:22.615 [WARNING][5644] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="54a61a32678590b691211cd51c6ad1a906f74fbde00b221b6fbefe7ac37b5019" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--7hmdg-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e561d6e9-adb9-4958-8e4a-34467004f252", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 37, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0f13d3d656a9533814ba70e386f9f443fb452db70ff0e5ce74ee01a7b474c7fa", Pod:"coredns-674b8bbfcf-7hmdg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidf97804c3ce", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:38:22.662187 containerd[1595]: 2026-03-12 01:38:22.616 [INFO][5644] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="54a61a32678590b691211cd51c6ad1a906f74fbde00b221b6fbefe7ac37b5019" Mar 12 01:38:22.662187 containerd[1595]: 2026-03-12 01:38:22.616 [INFO][5644] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="54a61a32678590b691211cd51c6ad1a906f74fbde00b221b6fbefe7ac37b5019" iface="eth0" netns="" Mar 12 01:38:22.662187 containerd[1595]: 2026-03-12 01:38:22.616 [INFO][5644] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="54a61a32678590b691211cd51c6ad1a906f74fbde00b221b6fbefe7ac37b5019" Mar 12 01:38:22.662187 containerd[1595]: 2026-03-12 01:38:22.616 [INFO][5644] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="54a61a32678590b691211cd51c6ad1a906f74fbde00b221b6fbefe7ac37b5019" Mar 12 01:38:22.662187 containerd[1595]: 2026-03-12 01:38:22.643 [INFO][5655] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="54a61a32678590b691211cd51c6ad1a906f74fbde00b221b6fbefe7ac37b5019" HandleID="k8s-pod-network.54a61a32678590b691211cd51c6ad1a906f74fbde00b221b6fbefe7ac37b5019" Workload="localhost-k8s-coredns--674b8bbfcf--7hmdg-eth0" Mar 12 01:38:22.662187 containerd[1595]: 2026-03-12 01:38:22.644 [INFO][5655] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:38:22.662187 containerd[1595]: 2026-03-12 01:38:22.644 [INFO][5655] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:38:22.662187 containerd[1595]: 2026-03-12 01:38:22.653 [WARNING][5655] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="54a61a32678590b691211cd51c6ad1a906f74fbde00b221b6fbefe7ac37b5019" HandleID="k8s-pod-network.54a61a32678590b691211cd51c6ad1a906f74fbde00b221b6fbefe7ac37b5019" Workload="localhost-k8s-coredns--674b8bbfcf--7hmdg-eth0" Mar 12 01:38:22.662187 containerd[1595]: 2026-03-12 01:38:22.653 [INFO][5655] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="54a61a32678590b691211cd51c6ad1a906f74fbde00b221b6fbefe7ac37b5019" HandleID="k8s-pod-network.54a61a32678590b691211cd51c6ad1a906f74fbde00b221b6fbefe7ac37b5019" Workload="localhost-k8s-coredns--674b8bbfcf--7hmdg-eth0" Mar 12 01:38:22.662187 containerd[1595]: 2026-03-12 01:38:22.655 [INFO][5655] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:38:22.662187 containerd[1595]: 2026-03-12 01:38:22.659 [INFO][5644] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="54a61a32678590b691211cd51c6ad1a906f74fbde00b221b6fbefe7ac37b5019" Mar 12 01:38:22.662187 containerd[1595]: time="2026-03-12T01:38:22.662099207Z" level=info msg="TearDown network for sandbox \"54a61a32678590b691211cd51c6ad1a906f74fbde00b221b6fbefe7ac37b5019\" successfully" Mar 12 01:38:22.683210 containerd[1595]: time="2026-03-12T01:38:22.683106541Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"54a61a32678590b691211cd51c6ad1a906f74fbde00b221b6fbefe7ac37b5019\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 12 01:38:22.683344 containerd[1595]: time="2026-03-12T01:38:22.683236754Z" level=info msg="RemovePodSandbox \"54a61a32678590b691211cd51c6ad1a906f74fbde00b221b6fbefe7ac37b5019\" returns successfully" Mar 12 01:38:22.690065 containerd[1595]: time="2026-03-12T01:38:22.690013426Z" level=info msg="StopPodSandbox for \"1502714fdfcba3fb41560cb466aa63722e3376d102f1b749c136a16d1badb708\"" Mar 12 01:38:22.714974 systemd[1]: Started sshd@10-10.0.0.145:22-10.0.0.1:54588.service - OpenSSH per-connection server daemon (10.0.0.1:54588). 
Mar 12 01:38:22.796830 sshd[5678]: Accepted publickey for core from 10.0.0.1 port 54588 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:38:22.799728 sshd[5678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:38:22.802107 containerd[1595]: 2026-03-12 01:38:22.738 [WARNING][5672] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1502714fdfcba3fb41560cb466aa63722e3376d102f1b749c136a16d1badb708" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--qhnmx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"0da1befa-e568-43f6-8333-a51d79629123", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 37, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"815738a3b7b7560cd9282f307fef08d03c238d2aa038d052fa20593fe843e637", Pod:"coredns-674b8bbfcf-qhnmx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia53e439d173", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:38:22.802107 containerd[1595]: 2026-03-12 01:38:22.738 [INFO][5672] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1502714fdfcba3fb41560cb466aa63722e3376d102f1b749c136a16d1badb708" Mar 12 01:38:22.802107 containerd[1595]: 2026-03-12 01:38:22.738 [INFO][5672] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="1502714fdfcba3fb41560cb466aa63722e3376d102f1b749c136a16d1badb708" iface="eth0" netns="" Mar 12 01:38:22.802107 containerd[1595]: 2026-03-12 01:38:22.738 [INFO][5672] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1502714fdfcba3fb41560cb466aa63722e3376d102f1b749c136a16d1badb708" Mar 12 01:38:22.802107 containerd[1595]: 2026-03-12 01:38:22.738 [INFO][5672] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1502714fdfcba3fb41560cb466aa63722e3376d102f1b749c136a16d1badb708" Mar 12 01:38:22.802107 containerd[1595]: 2026-03-12 01:38:22.781 [INFO][5683] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1502714fdfcba3fb41560cb466aa63722e3376d102f1b749c136a16d1badb708" HandleID="k8s-pod-network.1502714fdfcba3fb41560cb466aa63722e3376d102f1b749c136a16d1badb708" Workload="localhost-k8s-coredns--674b8bbfcf--qhnmx-eth0" Mar 12 01:38:22.802107 containerd[1595]: 2026-03-12 01:38:22.781 [INFO][5683] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:38:22.802107 containerd[1595]: 2026-03-12 01:38:22.781 [INFO][5683] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:38:22.802107 containerd[1595]: 2026-03-12 01:38:22.791 [WARNING][5683] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="1502714fdfcba3fb41560cb466aa63722e3376d102f1b749c136a16d1badb708" HandleID="k8s-pod-network.1502714fdfcba3fb41560cb466aa63722e3376d102f1b749c136a16d1badb708" Workload="localhost-k8s-coredns--674b8bbfcf--qhnmx-eth0" Mar 12 01:38:22.802107 containerd[1595]: 2026-03-12 01:38:22.791 [INFO][5683] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1502714fdfcba3fb41560cb466aa63722e3376d102f1b749c136a16d1badb708" HandleID="k8s-pod-network.1502714fdfcba3fb41560cb466aa63722e3376d102f1b749c136a16d1badb708" Workload="localhost-k8s-coredns--674b8bbfcf--qhnmx-eth0" Mar 12 01:38:22.802107 containerd[1595]: 2026-03-12 01:38:22.793 [INFO][5683] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:38:22.802107 containerd[1595]: 2026-03-12 01:38:22.798 [INFO][5672] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1502714fdfcba3fb41560cb466aa63722e3376d102f1b749c136a16d1badb708" Mar 12 01:38:22.802839 containerd[1595]: time="2026-03-12T01:38:22.802148154Z" level=info msg="TearDown network for sandbox \"1502714fdfcba3fb41560cb466aa63722e3376d102f1b749c136a16d1badb708\" successfully" Mar 12 01:38:22.802839 containerd[1595]: time="2026-03-12T01:38:22.802180654Z" level=info msg="StopPodSandbox for \"1502714fdfcba3fb41560cb466aa63722e3376d102f1b749c136a16d1badb708\" returns successfully" Mar 12 01:38:22.803351 containerd[1595]: time="2026-03-12T01:38:22.803241604Z" level=info msg="RemovePodSandbox for \"1502714fdfcba3fb41560cb466aa63722e3376d102f1b749c136a16d1badb708\"" Mar 12 01:38:22.803351 containerd[1595]: time="2026-03-12T01:38:22.803269656Z" level=info msg="Forcibly stopping sandbox \"1502714fdfcba3fb41560cb466aa63722e3376d102f1b749c136a16d1badb708\"" Mar 12 01:38:22.806084 systemd-logind[1574]: New session 11 of user core. Mar 12 01:38:22.811030 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 12 01:38:22.928818 containerd[1595]: 2026-03-12 01:38:22.870 [WARNING][5702] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1502714fdfcba3fb41560cb466aa63722e3376d102f1b749c136a16d1badb708" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--qhnmx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"0da1befa-e568-43f6-8333-a51d79629123", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 37, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"815738a3b7b7560cd9282f307fef08d03c238d2aa038d052fa20593fe843e637", Pod:"coredns-674b8bbfcf-qhnmx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia53e439d173", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:38:22.928818 containerd[1595]: 2026-03-12 01:38:22.870 [INFO][5702] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1502714fdfcba3fb41560cb466aa63722e3376d102f1b749c136a16d1badb708" Mar 12 01:38:22.928818 containerd[1595]: 2026-03-12 01:38:22.870 [INFO][5702] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1502714fdfcba3fb41560cb466aa63722e3376d102f1b749c136a16d1badb708" iface="eth0" netns="" Mar 12 01:38:22.928818 containerd[1595]: 2026-03-12 01:38:22.870 [INFO][5702] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1502714fdfcba3fb41560cb466aa63722e3376d102f1b749c136a16d1badb708" Mar 12 01:38:22.928818 containerd[1595]: 2026-03-12 01:38:22.870 [INFO][5702] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1502714fdfcba3fb41560cb466aa63722e3376d102f1b749c136a16d1badb708" Mar 12 01:38:22.928818 containerd[1595]: 2026-03-12 01:38:22.905 [INFO][5713] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1502714fdfcba3fb41560cb466aa63722e3376d102f1b749c136a16d1badb708" HandleID="k8s-pod-network.1502714fdfcba3fb41560cb466aa63722e3376d102f1b749c136a16d1badb708" Workload="localhost-k8s-coredns--674b8bbfcf--qhnmx-eth0" Mar 12 01:38:22.928818 containerd[1595]: 2026-03-12 01:38:22.905 [INFO][5713] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:38:22.928818 containerd[1595]: 2026-03-12 01:38:22.905 [INFO][5713] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 12 01:38:22.928818 containerd[1595]: 2026-03-12 01:38:22.916 [WARNING][5713] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="1502714fdfcba3fb41560cb466aa63722e3376d102f1b749c136a16d1badb708" HandleID="k8s-pod-network.1502714fdfcba3fb41560cb466aa63722e3376d102f1b749c136a16d1badb708" Workload="localhost-k8s-coredns--674b8bbfcf--qhnmx-eth0" Mar 12 01:38:22.928818 containerd[1595]: 2026-03-12 01:38:22.917 [INFO][5713] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1502714fdfcba3fb41560cb466aa63722e3376d102f1b749c136a16d1badb708" HandleID="k8s-pod-network.1502714fdfcba3fb41560cb466aa63722e3376d102f1b749c136a16d1badb708" Workload="localhost-k8s-coredns--674b8bbfcf--qhnmx-eth0" Mar 12 01:38:22.928818 containerd[1595]: 2026-03-12 01:38:22.919 [INFO][5713] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:38:22.928818 containerd[1595]: 2026-03-12 01:38:22.924 [INFO][5702] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1502714fdfcba3fb41560cb466aa63722e3376d102f1b749c136a16d1badb708" Mar 12 01:38:22.928818 containerd[1595]: time="2026-03-12T01:38:22.927798266Z" level=info msg="TearDown network for sandbox \"1502714fdfcba3fb41560cb466aa63722e3376d102f1b749c136a16d1badb708\" successfully" Mar 12 01:38:22.940782 containerd[1595]: time="2026-03-12T01:38:22.940598333Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1502714fdfcba3fb41560cb466aa63722e3376d102f1b749c136a16d1badb708\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 12 01:38:22.940863 containerd[1595]: time="2026-03-12T01:38:22.940829475Z" level=info msg="RemovePodSandbox \"1502714fdfcba3fb41560cb466aa63722e3376d102f1b749c136a16d1badb708\" returns successfully" Mar 12 01:38:22.941902 containerd[1595]: time="2026-03-12T01:38:22.941855870Z" level=info msg="StopPodSandbox for \"431271b3310308fde44be5244bde046b41e6f3c897f378d9dcf06072263a9a71\"" Mar 12 01:38:23.002977 sshd[5678]: pam_unix(sshd:session): session closed for user core Mar 12 01:38:23.016098 systemd[1]: Started sshd@11-10.0.0.145:22-10.0.0.1:54602.service - OpenSSH per-connection server daemon (10.0.0.1:54602). Mar 12 01:38:23.016968 systemd[1]: sshd@10-10.0.0.145:22-10.0.0.1:54588.service: Deactivated successfully. Mar 12 01:38:23.025044 systemd[1]: session-11.scope: Deactivated successfully. Mar 12 01:38:23.028223 systemd-logind[1574]: Session 11 logged out. Waiting for processes to exit. Mar 12 01:38:23.030145 systemd-logind[1574]: Removed session 11. Mar 12 01:38:23.043415 containerd[1595]: 2026-03-12 01:38:22.991 [WARNING][5740] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="431271b3310308fde44be5244bde046b41e6f3c897f378d9dcf06072263a9a71" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--cd8ml-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"a12164ec-a3c1-4b91-bb08-d78e4edbc1ad", ResourceVersion:"1085", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 37, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"afa5cd5eec551507a0709773f2a5e374adca62aee50d5616cbf8ff4611af448f", Pod:"goldmane-5b85766d88-cd8ml", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali64d5ee35c22", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:38:23.043415 containerd[1595]: 2026-03-12 01:38:22.992 [INFO][5740] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="431271b3310308fde44be5244bde046b41e6f3c897f378d9dcf06072263a9a71" Mar 12 01:38:23.043415 containerd[1595]: 2026-03-12 01:38:22.992 [INFO][5740] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="431271b3310308fde44be5244bde046b41e6f3c897f378d9dcf06072263a9a71" iface="eth0" netns="" Mar 12 01:38:23.043415 containerd[1595]: 2026-03-12 01:38:22.992 [INFO][5740] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="431271b3310308fde44be5244bde046b41e6f3c897f378d9dcf06072263a9a71" Mar 12 01:38:23.043415 containerd[1595]: 2026-03-12 01:38:22.992 [INFO][5740] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="431271b3310308fde44be5244bde046b41e6f3c897f378d9dcf06072263a9a71" Mar 12 01:38:23.043415 containerd[1595]: 2026-03-12 01:38:23.024 [INFO][5749] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="431271b3310308fde44be5244bde046b41e6f3c897f378d9dcf06072263a9a71" HandleID="k8s-pod-network.431271b3310308fde44be5244bde046b41e6f3c897f378d9dcf06072263a9a71" Workload="localhost-k8s-goldmane--5b85766d88--cd8ml-eth0" Mar 12 01:38:23.043415 containerd[1595]: 2026-03-12 01:38:23.024 [INFO][5749] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:38:23.043415 containerd[1595]: 2026-03-12 01:38:23.024 [INFO][5749] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:38:23.043415 containerd[1595]: 2026-03-12 01:38:23.035 [WARNING][5749] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="431271b3310308fde44be5244bde046b41e6f3c897f378d9dcf06072263a9a71" HandleID="k8s-pod-network.431271b3310308fde44be5244bde046b41e6f3c897f378d9dcf06072263a9a71" Workload="localhost-k8s-goldmane--5b85766d88--cd8ml-eth0" Mar 12 01:38:23.043415 containerd[1595]: 2026-03-12 01:38:23.035 [INFO][5749] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="431271b3310308fde44be5244bde046b41e6f3c897f378d9dcf06072263a9a71" HandleID="k8s-pod-network.431271b3310308fde44be5244bde046b41e6f3c897f378d9dcf06072263a9a71" Workload="localhost-k8s-goldmane--5b85766d88--cd8ml-eth0" Mar 12 01:38:23.043415 containerd[1595]: 2026-03-12 01:38:23.037 [INFO][5749] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:38:23.043415 containerd[1595]: 2026-03-12 01:38:23.040 [INFO][5740] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="431271b3310308fde44be5244bde046b41e6f3c897f378d9dcf06072263a9a71" Mar 12 01:38:23.044052 containerd[1595]: time="2026-03-12T01:38:23.043484873Z" level=info msg="TearDown network for sandbox \"431271b3310308fde44be5244bde046b41e6f3c897f378d9dcf06072263a9a71\" successfully" Mar 12 01:38:23.044052 containerd[1595]: time="2026-03-12T01:38:23.043508677Z" level=info msg="StopPodSandbox for \"431271b3310308fde44be5244bde046b41e6f3c897f378d9dcf06072263a9a71\" returns successfully" Mar 12 01:38:23.044439 containerd[1595]: time="2026-03-12T01:38:23.044198595Z" level=info msg="RemovePodSandbox for \"431271b3310308fde44be5244bde046b41e6f3c897f378d9dcf06072263a9a71\"" Mar 12 01:38:23.044439 containerd[1595]: time="2026-03-12T01:38:23.044247406Z" level=info msg="Forcibly stopping sandbox \"431271b3310308fde44be5244bde046b41e6f3c897f378d9dcf06072263a9a71\"" Mar 12 01:38:23.071389 sshd[5755]: Accepted publickey for core from 10.0.0.1 port 54602 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:38:23.073362 sshd[5755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:38:23.080823 systemd-logind[1574]: New session 12 of user core. Mar 12 01:38:23.089831 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 12 01:38:23.136706 containerd[1595]: 2026-03-12 01:38:23.089 [WARNING][5771] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="431271b3310308fde44be5244bde046b41e6f3c897f378d9dcf06072263a9a71" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--cd8ml-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"a12164ec-a3c1-4b91-bb08-d78e4edbc1ad", ResourceVersion:"1085", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 37, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"afa5cd5eec551507a0709773f2a5e374adca62aee50d5616cbf8ff4611af448f", Pod:"goldmane-5b85766d88-cd8ml", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali64d5ee35c22", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:38:23.136706 containerd[1595]: 2026-03-12 01:38:23.089 [INFO][5771] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="431271b3310308fde44be5244bde046b41e6f3c897f378d9dcf06072263a9a71" Mar 12 01:38:23.136706 containerd[1595]: 2026-03-12 01:38:23.089 [INFO][5771] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="431271b3310308fde44be5244bde046b41e6f3c897f378d9dcf06072263a9a71" iface="eth0" netns="" Mar 12 01:38:23.136706 containerd[1595]: 2026-03-12 01:38:23.089 [INFO][5771] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="431271b3310308fde44be5244bde046b41e6f3c897f378d9dcf06072263a9a71" Mar 12 01:38:23.136706 containerd[1595]: 2026-03-12 01:38:23.089 [INFO][5771] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="431271b3310308fde44be5244bde046b41e6f3c897f378d9dcf06072263a9a71" Mar 12 01:38:23.136706 containerd[1595]: 2026-03-12 01:38:23.120 [INFO][5780] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="431271b3310308fde44be5244bde046b41e6f3c897f378d9dcf06072263a9a71" HandleID="k8s-pod-network.431271b3310308fde44be5244bde046b41e6f3c897f378d9dcf06072263a9a71" Workload="localhost-k8s-goldmane--5b85766d88--cd8ml-eth0" Mar 12 01:38:23.136706 containerd[1595]: 2026-03-12 01:38:23.120 [INFO][5780] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:38:23.136706 containerd[1595]: 2026-03-12 01:38:23.120 [INFO][5780] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:38:23.136706 containerd[1595]: 2026-03-12 01:38:23.127 [WARNING][5780] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="431271b3310308fde44be5244bde046b41e6f3c897f378d9dcf06072263a9a71" HandleID="k8s-pod-network.431271b3310308fde44be5244bde046b41e6f3c897f378d9dcf06072263a9a71" Workload="localhost-k8s-goldmane--5b85766d88--cd8ml-eth0" Mar 12 01:38:23.136706 containerd[1595]: 2026-03-12 01:38:23.128 [INFO][5780] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="431271b3310308fde44be5244bde046b41e6f3c897f378d9dcf06072263a9a71" HandleID="k8s-pod-network.431271b3310308fde44be5244bde046b41e6f3c897f378d9dcf06072263a9a71" Workload="localhost-k8s-goldmane--5b85766d88--cd8ml-eth0" Mar 12 01:38:23.136706 containerd[1595]: 2026-03-12 01:38:23.130 [INFO][5780] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:38:23.136706 containerd[1595]: 2026-03-12 01:38:23.133 [INFO][5771] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="431271b3310308fde44be5244bde046b41e6f3c897f378d9dcf06072263a9a71" Mar 12 01:38:23.136706 containerd[1595]: time="2026-03-12T01:38:23.136082094Z" level=info msg="TearDown network for sandbox \"431271b3310308fde44be5244bde046b41e6f3c897f378d9dcf06072263a9a71\" successfully" Mar 12 01:38:23.140563 containerd[1595]: time="2026-03-12T01:38:23.140469769Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"431271b3310308fde44be5244bde046b41e6f3c897f378d9dcf06072263a9a71\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 12 01:38:23.140563 containerd[1595]: time="2026-03-12T01:38:23.140545980Z" level=info msg="RemovePodSandbox \"431271b3310308fde44be5244bde046b41e6f3c897f378d9dcf06072263a9a71\" returns successfully" Mar 12 01:38:23.141466 containerd[1595]: time="2026-03-12T01:38:23.141308329Z" level=info msg="StopPodSandbox for \"a25c08cc6562a9f6fdaf22116c7953b74bf527067f10f521f05b5d8cb2e3847e\"" Mar 12 01:38:23.245201 containerd[1595]: 2026-03-12 01:38:23.196 [WARNING][5803] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a25c08cc6562a9f6fdaf22116c7953b74bf527067f10f521f05b5d8cb2e3847e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d5bdf55d9--hgfnq-eth0", GenerateName:"calico-apiserver-d5bdf55d9-", Namespace:"calico-system", SelfLink:"", UID:"7120930c-3a55-44b0-911f-6bef14f82bc4", ResourceVersion:"1251", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 37, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d5bdf55d9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"03777379304f64d3259c36303fd1bb877de63b08af7833a07253ec06f3508ceb", Pod:"calico-apiserver-d5bdf55d9-hgfnq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib3caf480b1a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:38:23.245201 containerd[1595]: 2026-03-12 01:38:23.196 [INFO][5803] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a25c08cc6562a9f6fdaf22116c7953b74bf527067f10f521f05b5d8cb2e3847e" Mar 12 01:38:23.245201 containerd[1595]: 2026-03-12 01:38:23.196 [INFO][5803] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a25c08cc6562a9f6fdaf22116c7953b74bf527067f10f521f05b5d8cb2e3847e" iface="eth0" netns="" Mar 12 01:38:23.245201 containerd[1595]: 2026-03-12 01:38:23.196 [INFO][5803] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a25c08cc6562a9f6fdaf22116c7953b74bf527067f10f521f05b5d8cb2e3847e" Mar 12 01:38:23.245201 containerd[1595]: 2026-03-12 01:38:23.197 [INFO][5803] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a25c08cc6562a9f6fdaf22116c7953b74bf527067f10f521f05b5d8cb2e3847e" Mar 12 01:38:23.245201 containerd[1595]: 2026-03-12 01:38:23.229 [INFO][5812] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a25c08cc6562a9f6fdaf22116c7953b74bf527067f10f521f05b5d8cb2e3847e" HandleID="k8s-pod-network.a25c08cc6562a9f6fdaf22116c7953b74bf527067f10f521f05b5d8cb2e3847e" Workload="localhost-k8s-calico--apiserver--d5bdf55d9--hgfnq-eth0" Mar 12 01:38:23.245201 containerd[1595]: 2026-03-12 01:38:23.229 [INFO][5812] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:38:23.245201 containerd[1595]: 2026-03-12 01:38:23.229 [INFO][5812] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:38:23.245201 containerd[1595]: 2026-03-12 01:38:23.237 [WARNING][5812] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a25c08cc6562a9f6fdaf22116c7953b74bf527067f10f521f05b5d8cb2e3847e" HandleID="k8s-pod-network.a25c08cc6562a9f6fdaf22116c7953b74bf527067f10f521f05b5d8cb2e3847e" Workload="localhost-k8s-calico--apiserver--d5bdf55d9--hgfnq-eth0" Mar 12 01:38:23.245201 containerd[1595]: 2026-03-12 01:38:23.237 [INFO][5812] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a25c08cc6562a9f6fdaf22116c7953b74bf527067f10f521f05b5d8cb2e3847e" HandleID="k8s-pod-network.a25c08cc6562a9f6fdaf22116c7953b74bf527067f10f521f05b5d8cb2e3847e" Workload="localhost-k8s-calico--apiserver--d5bdf55d9--hgfnq-eth0" Mar 12 01:38:23.245201 containerd[1595]: 2026-03-12 01:38:23.239 [INFO][5812] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:38:23.245201 containerd[1595]: 2026-03-12 01:38:23.242 [INFO][5803] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a25c08cc6562a9f6fdaf22116c7953b74bf527067f10f521f05b5d8cb2e3847e" Mar 12 01:38:23.245201 containerd[1595]: time="2026-03-12T01:38:23.245177994Z" level=info msg="TearDown network for sandbox \"a25c08cc6562a9f6fdaf22116c7953b74bf527067f10f521f05b5d8cb2e3847e\" successfully" Mar 12 01:38:23.245201 containerd[1595]: time="2026-03-12T01:38:23.245200756Z" level=info msg="StopPodSandbox for \"a25c08cc6562a9f6fdaf22116c7953b74bf527067f10f521f05b5d8cb2e3847e\" returns successfully" Mar 12 01:38:23.246805 containerd[1595]: time="2026-03-12T01:38:23.245779105Z" level=info msg="RemovePodSandbox for \"a25c08cc6562a9f6fdaf22116c7953b74bf527067f10f521f05b5d8cb2e3847e\"" Mar 12 01:38:23.246805 containerd[1595]: time="2026-03-12T01:38:23.245803671Z" level=info msg="Forcibly stopping sandbox \"a25c08cc6562a9f6fdaf22116c7953b74bf527067f10f521f05b5d8cb2e3847e\"" Mar 12 01:38:23.326387 sshd[5755]: pam_unix(sshd:session): session closed for user core Mar 12 01:38:23.336169 systemd[1]: Started sshd@12-10.0.0.145:22-10.0.0.1:54606.service - OpenSSH per-connection server daemon (10.0.0.1:54606). Mar 12 01:38:23.337591 systemd[1]: sshd@11-10.0.0.145:22-10.0.0.1:54602.service: Deactivated successfully. Mar 12 01:38:23.353316 systemd[1]: session-12.scope: Deactivated successfully. Mar 12 01:38:23.359694 systemd-logind[1574]: Session 12 logged out. Waiting for processes to exit. Mar 12 01:38:23.369316 systemd-logind[1574]: Removed session 12. Mar 12 01:38:23.381771 containerd[1595]: 2026-03-12 01:38:23.297 [WARNING][5829] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a25c08cc6562a9f6fdaf22116c7953b74bf527067f10f521f05b5d8cb2e3847e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d5bdf55d9--hgfnq-eth0", GenerateName:"calico-apiserver-d5bdf55d9-", Namespace:"calico-system", SelfLink:"", UID:"7120930c-3a55-44b0-911f-6bef14f82bc4", ResourceVersion:"1251", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 37, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d5bdf55d9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"03777379304f64d3259c36303fd1bb877de63b08af7833a07253ec06f3508ceb", Pod:"calico-apiserver-d5bdf55d9-hgfnq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib3caf480b1a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:38:23.381771 containerd[1595]: 2026-03-12 01:38:23.297 [INFO][5829] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a25c08cc6562a9f6fdaf22116c7953b74bf527067f10f521f05b5d8cb2e3847e" Mar 12 01:38:23.381771 containerd[1595]: 2026-03-12 01:38:23.297 [INFO][5829] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a25c08cc6562a9f6fdaf22116c7953b74bf527067f10f521f05b5d8cb2e3847e" iface="eth0" netns="" Mar 12 01:38:23.381771 containerd[1595]: 2026-03-12 01:38:23.297 [INFO][5829] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a25c08cc6562a9f6fdaf22116c7953b74bf527067f10f521f05b5d8cb2e3847e" Mar 12 01:38:23.381771 containerd[1595]: 2026-03-12 01:38:23.297 [INFO][5829] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a25c08cc6562a9f6fdaf22116c7953b74bf527067f10f521f05b5d8cb2e3847e" Mar 12 01:38:23.381771 containerd[1595]: 2026-03-12 01:38:23.335 [INFO][5838] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a25c08cc6562a9f6fdaf22116c7953b74bf527067f10f521f05b5d8cb2e3847e" HandleID="k8s-pod-network.a25c08cc6562a9f6fdaf22116c7953b74bf527067f10f521f05b5d8cb2e3847e" Workload="localhost-k8s-calico--apiserver--d5bdf55d9--hgfnq-eth0" Mar 12 01:38:23.381771 containerd[1595]: 2026-03-12 01:38:23.335 [INFO][5838] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:38:23.381771 containerd[1595]: 2026-03-12 01:38:23.336 [INFO][5838] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:38:23.381771 containerd[1595]: 2026-03-12 01:38:23.358 [WARNING][5838] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a25c08cc6562a9f6fdaf22116c7953b74bf527067f10f521f05b5d8cb2e3847e" HandleID="k8s-pod-network.a25c08cc6562a9f6fdaf22116c7953b74bf527067f10f521f05b5d8cb2e3847e" Workload="localhost-k8s-calico--apiserver--d5bdf55d9--hgfnq-eth0" Mar 12 01:38:23.381771 containerd[1595]: 2026-03-12 01:38:23.358 [INFO][5838] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a25c08cc6562a9f6fdaf22116c7953b74bf527067f10f521f05b5d8cb2e3847e" HandleID="k8s-pod-network.a25c08cc6562a9f6fdaf22116c7953b74bf527067f10f521f05b5d8cb2e3847e" Workload="localhost-k8s-calico--apiserver--d5bdf55d9--hgfnq-eth0" Mar 12 01:38:23.381771 containerd[1595]: 2026-03-12 01:38:23.365 [INFO][5838] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:38:23.381771 containerd[1595]: 2026-03-12 01:38:23.377 [INFO][5829] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a25c08cc6562a9f6fdaf22116c7953b74bf527067f10f521f05b5d8cb2e3847e" Mar 12 01:38:23.382597 containerd[1595]: time="2026-03-12T01:38:23.381818526Z" level=info msg="TearDown network for sandbox \"a25c08cc6562a9f6fdaf22116c7953b74bf527067f10f521f05b5d8cb2e3847e\" successfully" Mar 12 01:38:23.389023 containerd[1595]: time="2026-03-12T01:38:23.388986638Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a25c08cc6562a9f6fdaf22116c7953b74bf527067f10f521f05b5d8cb2e3847e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 12 01:38:23.389230 containerd[1595]: time="2026-03-12T01:38:23.389165412Z" level=info msg="RemovePodSandbox \"a25c08cc6562a9f6fdaf22116c7953b74bf527067f10f521f05b5d8cb2e3847e\" returns successfully" Mar 12 01:38:23.390127 containerd[1595]: time="2026-03-12T01:38:23.390069729Z" level=info msg="StopPodSandbox for \"09cede24ff002c887b0e20d53c4f520ca60336503614997b34a18963583e3367\"" Mar 12 01:38:23.400830 sshd[5844]: Accepted publickey for core from 10.0.0.1 port 54606 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:38:23.403383 sshd[5844]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:38:23.412837 systemd-logind[1574]: New session 13 of user core. Mar 12 01:38:23.420098 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 12 01:38:23.513478 containerd[1595]: 2026-03-12 01:38:23.451 [WARNING][5860] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="09cede24ff002c887b0e20d53c4f520ca60336503614997b34a18963583e3367" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d5bdf55d9--zlq7p-eth0", GenerateName:"calico-apiserver-d5bdf55d9-", Namespace:"calico-system", SelfLink:"", UID:"8827ea6d-6039-4f86-96be-28f12dc97ece", ResourceVersion:"1196", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 37, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d5bdf55d9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f6881f4553b64b36d040d4ad8f6cc79c405b6396ff144e8013dce134e590f1dc", Pod:"calico-apiserver-d5bdf55d9-zlq7p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali0a03cf32e34", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:38:23.513478 containerd[1595]: 2026-03-12 01:38:23.451 [INFO][5860] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="09cede24ff002c887b0e20d53c4f520ca60336503614997b34a18963583e3367" Mar 12 01:38:23.513478 containerd[1595]: 2026-03-12 01:38:23.451 [INFO][5860] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="09cede24ff002c887b0e20d53c4f520ca60336503614997b34a18963583e3367" iface="eth0" netns="" Mar 12 01:38:23.513478 containerd[1595]: 2026-03-12 01:38:23.451 [INFO][5860] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="09cede24ff002c887b0e20d53c4f520ca60336503614997b34a18963583e3367" Mar 12 01:38:23.513478 containerd[1595]: 2026-03-12 01:38:23.451 [INFO][5860] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="09cede24ff002c887b0e20d53c4f520ca60336503614997b34a18963583e3367" Mar 12 01:38:23.513478 containerd[1595]: 2026-03-12 01:38:23.494 [INFO][5871] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="09cede24ff002c887b0e20d53c4f520ca60336503614997b34a18963583e3367" HandleID="k8s-pod-network.09cede24ff002c887b0e20d53c4f520ca60336503614997b34a18963583e3367" Workload="localhost-k8s-calico--apiserver--d5bdf55d9--zlq7p-eth0" Mar 12 01:38:23.513478 containerd[1595]: 2026-03-12 01:38:23.495 [INFO][5871] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:38:23.513478 containerd[1595]: 2026-03-12 01:38:23.495 [INFO][5871] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:38:23.513478 containerd[1595]: 2026-03-12 01:38:23.503 [WARNING][5871] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="09cede24ff002c887b0e20d53c4f520ca60336503614997b34a18963583e3367" HandleID="k8s-pod-network.09cede24ff002c887b0e20d53c4f520ca60336503614997b34a18963583e3367" Workload="localhost-k8s-calico--apiserver--d5bdf55d9--zlq7p-eth0" Mar 12 01:38:23.513478 containerd[1595]: 2026-03-12 01:38:23.503 [INFO][5871] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="09cede24ff002c887b0e20d53c4f520ca60336503614997b34a18963583e3367" HandleID="k8s-pod-network.09cede24ff002c887b0e20d53c4f520ca60336503614997b34a18963583e3367" Workload="localhost-k8s-calico--apiserver--d5bdf55d9--zlq7p-eth0" Mar 12 01:38:23.513478 containerd[1595]: 2026-03-12 01:38:23.506 [INFO][5871] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:38:23.513478 containerd[1595]: 2026-03-12 01:38:23.509 [INFO][5860] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="09cede24ff002c887b0e20d53c4f520ca60336503614997b34a18963583e3367" Mar 12 01:38:23.514061 containerd[1595]: time="2026-03-12T01:38:23.513580709Z" level=info msg="TearDown network for sandbox \"09cede24ff002c887b0e20d53c4f520ca60336503614997b34a18963583e3367\" successfully" Mar 12 01:38:23.514061 containerd[1595]: time="2026-03-12T01:38:23.513612568Z" level=info msg="StopPodSandbox for \"09cede24ff002c887b0e20d53c4f520ca60336503614997b34a18963583e3367\" returns successfully" Mar 12 01:38:23.514828 containerd[1595]: time="2026-03-12T01:38:23.514785569Z" level=info msg="RemovePodSandbox for \"09cede24ff002c887b0e20d53c4f520ca60336503614997b34a18963583e3367\"" Mar 12 01:38:23.514828 containerd[1595]: time="2026-03-12T01:38:23.514827648Z" level=info msg="Forcibly stopping sandbox \"09cede24ff002c887b0e20d53c4f520ca60336503614997b34a18963583e3367\"" Mar 12 01:38:23.586579 sshd[5844]: pam_unix(sshd:session): session closed for user core Mar 12 01:38:23.591251 systemd[1]: sshd@12-10.0.0.145:22-10.0.0.1:54606.service: Deactivated successfully. Mar 12 01:38:23.596563 systemd[1]: session-13.scope: Deactivated successfully. Mar 12 01:38:23.598354 systemd-logind[1574]: Session 13 logged out. Waiting for processes to exit. Mar 12 01:38:23.600375 systemd-logind[1574]: Removed session 13. Mar 12 01:38:23.617063 containerd[1595]: 2026-03-12 01:38:23.568 [WARNING][5896] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="09cede24ff002c887b0e20d53c4f520ca60336503614997b34a18963583e3367" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d5bdf55d9--zlq7p-eth0", GenerateName:"calico-apiserver-d5bdf55d9-", Namespace:"calico-system", SelfLink:"", UID:"8827ea6d-6039-4f86-96be-28f12dc97ece", ResourceVersion:"1196", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 37, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d5bdf55d9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f6881f4553b64b36d040d4ad8f6cc79c405b6396ff144e8013dce134e590f1dc", Pod:"calico-apiserver-d5bdf55d9-zlq7p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali0a03cf32e34", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:38:23.617063 containerd[1595]: 2026-03-12 01:38:23.568 [INFO][5896] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="09cede24ff002c887b0e20d53c4f520ca60336503614997b34a18963583e3367" Mar 12 01:38:23.617063 containerd[1595]: 2026-03-12 01:38:23.568 [INFO][5896] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="09cede24ff002c887b0e20d53c4f520ca60336503614997b34a18963583e3367" iface="eth0" netns="" Mar 12 01:38:23.617063 containerd[1595]: 2026-03-12 01:38:23.568 [INFO][5896] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="09cede24ff002c887b0e20d53c4f520ca60336503614997b34a18963583e3367" Mar 12 01:38:23.617063 containerd[1595]: 2026-03-12 01:38:23.568 [INFO][5896] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="09cede24ff002c887b0e20d53c4f520ca60336503614997b34a18963583e3367" Mar 12 01:38:23.617063 containerd[1595]: 2026-03-12 01:38:23.598 [INFO][5905] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="09cede24ff002c887b0e20d53c4f520ca60336503614997b34a18963583e3367" HandleID="k8s-pod-network.09cede24ff002c887b0e20d53c4f520ca60336503614997b34a18963583e3367" Workload="localhost-k8s-calico--apiserver--d5bdf55d9--zlq7p-eth0" Mar 12 01:38:23.617063 containerd[1595]: 2026-03-12 01:38:23.598 [INFO][5905] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:38:23.617063 containerd[1595]: 2026-03-12 01:38:23.598 [INFO][5905] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:38:23.617063 containerd[1595]: 2026-03-12 01:38:23.609 [WARNING][5905] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="09cede24ff002c887b0e20d53c4f520ca60336503614997b34a18963583e3367" HandleID="k8s-pod-network.09cede24ff002c887b0e20d53c4f520ca60336503614997b34a18963583e3367" Workload="localhost-k8s-calico--apiserver--d5bdf55d9--zlq7p-eth0" Mar 12 01:38:23.617063 containerd[1595]: 2026-03-12 01:38:23.609 [INFO][5905] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="09cede24ff002c887b0e20d53c4f520ca60336503614997b34a18963583e3367" HandleID="k8s-pod-network.09cede24ff002c887b0e20d53c4f520ca60336503614997b34a18963583e3367" Workload="localhost-k8s-calico--apiserver--d5bdf55d9--zlq7p-eth0" Mar 12 01:38:23.617063 containerd[1595]: 2026-03-12 01:38:23.611 [INFO][5905] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:38:23.617063 containerd[1595]: 2026-03-12 01:38:23.614 [INFO][5896] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="09cede24ff002c887b0e20d53c4f520ca60336503614997b34a18963583e3367" Mar 12 01:38:23.617464 containerd[1595]: time="2026-03-12T01:38:23.617090359Z" level=info msg="TearDown network for sandbox \"09cede24ff002c887b0e20d53c4f520ca60336503614997b34a18963583e3367\" successfully" Mar 12 01:38:23.623691 containerd[1595]: time="2026-03-12T01:38:23.623585106Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"09cede24ff002c887b0e20d53c4f520ca60336503614997b34a18963583e3367\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 12 01:38:23.623851 containerd[1595]: time="2026-03-12T01:38:23.623720268Z" level=info msg="RemovePodSandbox \"09cede24ff002c887b0e20d53c4f520ca60336503614997b34a18963583e3367\" returns successfully" Mar 12 01:38:23.624462 containerd[1595]: time="2026-03-12T01:38:23.624372023Z" level=info msg="StopPodSandbox for \"b9985ffed56d71d590dea0511c336d1f6935e84252655d55fb089f135f2e8ea3\"" Mar 12 01:38:23.733857 containerd[1595]: 2026-03-12 01:38:23.679 [WARNING][5926] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="b9985ffed56d71d590dea0511c336d1f6935e84252655d55fb089f135f2e8ea3" WorkloadEndpoint="localhost-k8s-whisker--75b746db9f--2kjgw-eth0" Mar 12 01:38:23.733857 containerd[1595]: 2026-03-12 01:38:23.679 [INFO][5926] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b9985ffed56d71d590dea0511c336d1f6935e84252655d55fb089f135f2e8ea3" Mar 12 01:38:23.733857 containerd[1595]: 2026-03-12 01:38:23.679 [INFO][5926] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b9985ffed56d71d590dea0511c336d1f6935e84252655d55fb089f135f2e8ea3" iface="eth0" netns="" Mar 12 01:38:23.733857 containerd[1595]: 2026-03-12 01:38:23.679 [INFO][5926] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b9985ffed56d71d590dea0511c336d1f6935e84252655d55fb089f135f2e8ea3" Mar 12 01:38:23.733857 containerd[1595]: 2026-03-12 01:38:23.679 [INFO][5926] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b9985ffed56d71d590dea0511c336d1f6935e84252655d55fb089f135f2e8ea3" Mar 12 01:38:23.733857 containerd[1595]: 2026-03-12 01:38:23.715 [INFO][5934] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b9985ffed56d71d590dea0511c336d1f6935e84252655d55fb089f135f2e8ea3" HandleID="k8s-pod-network.b9985ffed56d71d590dea0511c336d1f6935e84252655d55fb089f135f2e8ea3" Workload="localhost-k8s-whisker--75b746db9f--2kjgw-eth0" Mar 12 01:38:23.733857 containerd[1595]: 2026-03-12 01:38:23.715 [INFO][5934] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:38:23.733857 containerd[1595]: 2026-03-12 01:38:23.715 [INFO][5934] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:38:23.733857 containerd[1595]: 2026-03-12 01:38:23.724 [WARNING][5934] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="b9985ffed56d71d590dea0511c336d1f6935e84252655d55fb089f135f2e8ea3" HandleID="k8s-pod-network.b9985ffed56d71d590dea0511c336d1f6935e84252655d55fb089f135f2e8ea3" Workload="localhost-k8s-whisker--75b746db9f--2kjgw-eth0" Mar 12 01:38:23.733857 containerd[1595]: 2026-03-12 01:38:23.724 [INFO][5934] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b9985ffed56d71d590dea0511c336d1f6935e84252655d55fb089f135f2e8ea3" HandleID="k8s-pod-network.b9985ffed56d71d590dea0511c336d1f6935e84252655d55fb089f135f2e8ea3" Workload="localhost-k8s-whisker--75b746db9f--2kjgw-eth0" Mar 12 01:38:23.733857 containerd[1595]: 2026-03-12 01:38:23.726 [INFO][5934] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:38:23.733857 containerd[1595]: 2026-03-12 01:38:23.729 [INFO][5926] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="b9985ffed56d71d590dea0511c336d1f6935e84252655d55fb089f135f2e8ea3" Mar 12 01:38:23.733857 containerd[1595]: time="2026-03-12T01:38:23.733895763Z" level=info msg="TearDown network for sandbox \"b9985ffed56d71d590dea0511c336d1f6935e84252655d55fb089f135f2e8ea3\" successfully" Mar 12 01:38:23.733857 containerd[1595]: time="2026-03-12T01:38:23.733928293Z" level=info msg="StopPodSandbox for \"b9985ffed56d71d590dea0511c336d1f6935e84252655d55fb089f135f2e8ea3\" returns successfully" Mar 12 01:38:23.734810 containerd[1595]: time="2026-03-12T01:38:23.734758236Z" level=info msg="RemovePodSandbox for \"b9985ffed56d71d590dea0511c336d1f6935e84252655d55fb089f135f2e8ea3\"" Mar 12 01:38:23.734916 containerd[1595]: time="2026-03-12T01:38:23.734815603Z" level=info msg="Forcibly stopping sandbox \"b9985ffed56d71d590dea0511c336d1f6935e84252655d55fb089f135f2e8ea3\"" Mar 12 01:38:23.847336 containerd[1595]: 2026-03-12 01:38:23.788 [WARNING][5950] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="b9985ffed56d71d590dea0511c336d1f6935e84252655d55fb089f135f2e8ea3" WorkloadEndpoint="localhost-k8s-whisker--75b746db9f--2kjgw-eth0" Mar 12 01:38:23.847336 containerd[1595]: 2026-03-12 01:38:23.789 [INFO][5950] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b9985ffed56d71d590dea0511c336d1f6935e84252655d55fb089f135f2e8ea3" Mar 12 01:38:23.847336 containerd[1595]: 2026-03-12 01:38:23.789 [INFO][5950] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b9985ffed56d71d590dea0511c336d1f6935e84252655d55fb089f135f2e8ea3" iface="eth0" netns="" Mar 12 01:38:23.847336 containerd[1595]: 2026-03-12 01:38:23.789 [INFO][5950] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b9985ffed56d71d590dea0511c336d1f6935e84252655d55fb089f135f2e8ea3" Mar 12 01:38:23.847336 containerd[1595]: 2026-03-12 01:38:23.789 [INFO][5950] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b9985ffed56d71d590dea0511c336d1f6935e84252655d55fb089f135f2e8ea3" Mar 12 01:38:23.847336 containerd[1595]: 2026-03-12 01:38:23.827 [INFO][5959] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b9985ffed56d71d590dea0511c336d1f6935e84252655d55fb089f135f2e8ea3" HandleID="k8s-pod-network.b9985ffed56d71d590dea0511c336d1f6935e84252655d55fb089f135f2e8ea3" Workload="localhost-k8s-whisker--75b746db9f--2kjgw-eth0" Mar 12 01:38:23.847336 containerd[1595]: 2026-03-12 01:38:23.828 [INFO][5959] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:38:23.847336 containerd[1595]: 2026-03-12 01:38:23.828 [INFO][5959] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:38:23.847336 containerd[1595]: 2026-03-12 01:38:23.835 [WARNING][5959] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b9985ffed56d71d590dea0511c336d1f6935e84252655d55fb089f135f2e8ea3" HandleID="k8s-pod-network.b9985ffed56d71d590dea0511c336d1f6935e84252655d55fb089f135f2e8ea3" Workload="localhost-k8s-whisker--75b746db9f--2kjgw-eth0" Mar 12 01:38:23.847336 containerd[1595]: 2026-03-12 01:38:23.835 [INFO][5959] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b9985ffed56d71d590dea0511c336d1f6935e84252655d55fb089f135f2e8ea3" HandleID="k8s-pod-network.b9985ffed56d71d590dea0511c336d1f6935e84252655d55fb089f135f2e8ea3" Workload="localhost-k8s-whisker--75b746db9f--2kjgw-eth0" Mar 12 01:38:23.847336 containerd[1595]: 2026-03-12 01:38:23.837 [INFO][5959] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:38:23.847336 containerd[1595]: 2026-03-12 01:38:23.841 [INFO][5950] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b9985ffed56d71d590dea0511c336d1f6935e84252655d55fb089f135f2e8ea3" Mar 12 01:38:23.847336 containerd[1595]: time="2026-03-12T01:38:23.844349897Z" level=info msg="TearDown network for sandbox \"b9985ffed56d71d590dea0511c336d1f6935e84252655d55fb089f135f2e8ea3\" successfully" Mar 12 01:38:23.850371 containerd[1595]: time="2026-03-12T01:38:23.850297278Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b9985ffed56d71d590dea0511c336d1f6935e84252655d55fb089f135f2e8ea3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 12 01:38:23.850441 containerd[1595]: time="2026-03-12T01:38:23.850387726Z" level=info msg="RemovePodSandbox \"b9985ffed56d71d590dea0511c336d1f6935e84252655d55fb089f135f2e8ea3\" returns successfully" Mar 12 01:38:23.851354 containerd[1595]: time="2026-03-12T01:38:23.851127897Z" level=info msg="StopPodSandbox for \"15ee98230783031b7d6bbfdb3cd4943d9ba25747d016fac25aa8c52728845e97\"" Mar 12 01:38:23.949265 containerd[1595]: 2026-03-12 01:38:23.903 [WARNING][5976] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="15ee98230783031b7d6bbfdb3cd4943d9ba25747d016fac25aa8c52728845e97" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6c984f8d9--nrkt8-eth0", GenerateName:"calico-kube-controllers-6c984f8d9-", Namespace:"calico-system", SelfLink:"", UID:"5fb257b0-27b4-4ccb-bab4-86fe3218bc99", ResourceVersion:"1078", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 37, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c984f8d9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e8d638f2c78d492a1abc864504f7bd55e485063d146eeae251595847c926d407", Pod:"calico-kube-controllers-6c984f8d9-nrkt8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali410349c6c43", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:38:23.949265 containerd[1595]: 2026-03-12 01:38:23.903 [INFO][5976] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="15ee98230783031b7d6bbfdb3cd4943d9ba25747d016fac25aa8c52728845e97" Mar 12 01:38:23.949265 containerd[1595]: 2026-03-12 01:38:23.903 [INFO][5976] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="15ee98230783031b7d6bbfdb3cd4943d9ba25747d016fac25aa8c52728845e97" iface="eth0" netns="" Mar 12 01:38:23.949265 containerd[1595]: 2026-03-12 01:38:23.903 [INFO][5976] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="15ee98230783031b7d6bbfdb3cd4943d9ba25747d016fac25aa8c52728845e97" Mar 12 01:38:23.949265 containerd[1595]: 2026-03-12 01:38:23.903 [INFO][5976] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="15ee98230783031b7d6bbfdb3cd4943d9ba25747d016fac25aa8c52728845e97" Mar 12 01:38:23.949265 containerd[1595]: 2026-03-12 01:38:23.933 [INFO][5984] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="15ee98230783031b7d6bbfdb3cd4943d9ba25747d016fac25aa8c52728845e97" HandleID="k8s-pod-network.15ee98230783031b7d6bbfdb3cd4943d9ba25747d016fac25aa8c52728845e97" Workload="localhost-k8s-calico--kube--controllers--6c984f8d9--nrkt8-eth0" Mar 12 01:38:23.949265 containerd[1595]: 2026-03-12 01:38:23.933 [INFO][5984] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:38:23.949265 containerd[1595]: 2026-03-12 01:38:23.933 [INFO][5984] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:38:23.949265 containerd[1595]: 2026-03-12 01:38:23.940 [WARNING][5984] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="15ee98230783031b7d6bbfdb3cd4943d9ba25747d016fac25aa8c52728845e97" HandleID="k8s-pod-network.15ee98230783031b7d6bbfdb3cd4943d9ba25747d016fac25aa8c52728845e97" Workload="localhost-k8s-calico--kube--controllers--6c984f8d9--nrkt8-eth0" Mar 12 01:38:23.949265 containerd[1595]: 2026-03-12 01:38:23.940 [INFO][5984] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="15ee98230783031b7d6bbfdb3cd4943d9ba25747d016fac25aa8c52728845e97" HandleID="k8s-pod-network.15ee98230783031b7d6bbfdb3cd4943d9ba25747d016fac25aa8c52728845e97" Workload="localhost-k8s-calico--kube--controllers--6c984f8d9--nrkt8-eth0" Mar 12 01:38:23.949265 containerd[1595]: 2026-03-12 01:38:23.942 [INFO][5984] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:38:23.949265 containerd[1595]: 2026-03-12 01:38:23.945 [INFO][5976] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="15ee98230783031b7d6bbfdb3cd4943d9ba25747d016fac25aa8c52728845e97" Mar 12 01:38:23.950010 containerd[1595]: time="2026-03-12T01:38:23.949288007Z" level=info msg="TearDown network for sandbox \"15ee98230783031b7d6bbfdb3cd4943d9ba25747d016fac25aa8c52728845e97\" successfully" Mar 12 01:38:23.950010 containerd[1595]: time="2026-03-12T01:38:23.949318664Z" level=info msg="StopPodSandbox for \"15ee98230783031b7d6bbfdb3cd4943d9ba25747d016fac25aa8c52728845e97\" returns successfully" Mar 12 01:38:23.950599 containerd[1595]: time="2026-03-12T01:38:23.950174040Z" level=info msg="RemovePodSandbox for \"15ee98230783031b7d6bbfdb3cd4943d9ba25747d016fac25aa8c52728845e97\"" Mar 12 01:38:23.950599 containerd[1595]: time="2026-03-12T01:38:23.950208204Z" level=info msg="Forcibly stopping sandbox \"15ee98230783031b7d6bbfdb3cd4943d9ba25747d016fac25aa8c52728845e97\"" Mar 12 01:38:24.050933 containerd[1595]: 2026-03-12 01:38:24.004 [WARNING][6001] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="15ee98230783031b7d6bbfdb3cd4943d9ba25747d016fac25aa8c52728845e97" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6c984f8d9--nrkt8-eth0", GenerateName:"calico-kube-controllers-6c984f8d9-", Namespace:"calico-system", SelfLink:"", UID:"5fb257b0-27b4-4ccb-bab4-86fe3218bc99", ResourceVersion:"1078", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 37, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c984f8d9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e8d638f2c78d492a1abc864504f7bd55e485063d146eeae251595847c926d407", Pod:"calico-kube-controllers-6c984f8d9-nrkt8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali410349c6c43", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:38:24.050933 containerd[1595]: 2026-03-12 01:38:24.004 [INFO][6001] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="15ee98230783031b7d6bbfdb3cd4943d9ba25747d016fac25aa8c52728845e97" Mar 12 01:38:24.050933 containerd[1595]: 2026-03-12 01:38:24.004 [INFO][6001] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="15ee98230783031b7d6bbfdb3cd4943d9ba25747d016fac25aa8c52728845e97" iface="eth0" netns="" Mar 12 01:38:24.050933 containerd[1595]: 2026-03-12 01:38:24.004 [INFO][6001] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="15ee98230783031b7d6bbfdb3cd4943d9ba25747d016fac25aa8c52728845e97" Mar 12 01:38:24.050933 containerd[1595]: 2026-03-12 01:38:24.004 [INFO][6001] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="15ee98230783031b7d6bbfdb3cd4943d9ba25747d016fac25aa8c52728845e97" Mar 12 01:38:24.050933 containerd[1595]: 2026-03-12 01:38:24.035 [INFO][6009] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="15ee98230783031b7d6bbfdb3cd4943d9ba25747d016fac25aa8c52728845e97" HandleID="k8s-pod-network.15ee98230783031b7d6bbfdb3cd4943d9ba25747d016fac25aa8c52728845e97" Workload="localhost-k8s-calico--kube--controllers--6c984f8d9--nrkt8-eth0" Mar 12 01:38:24.050933 containerd[1595]: 2026-03-12 01:38:24.035 [INFO][6009] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:38:24.050933 containerd[1595]: 2026-03-12 01:38:24.035 [INFO][6009] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:38:24.050933 containerd[1595]: 2026-03-12 01:38:24.042 [WARNING][6009] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="15ee98230783031b7d6bbfdb3cd4943d9ba25747d016fac25aa8c52728845e97" HandleID="k8s-pod-network.15ee98230783031b7d6bbfdb3cd4943d9ba25747d016fac25aa8c52728845e97" Workload="localhost-k8s-calico--kube--controllers--6c984f8d9--nrkt8-eth0" Mar 12 01:38:24.050933 containerd[1595]: 2026-03-12 01:38:24.042 [INFO][6009] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="15ee98230783031b7d6bbfdb3cd4943d9ba25747d016fac25aa8c52728845e97" HandleID="k8s-pod-network.15ee98230783031b7d6bbfdb3cd4943d9ba25747d016fac25aa8c52728845e97" Workload="localhost-k8s-calico--kube--controllers--6c984f8d9--nrkt8-eth0" Mar 12 01:38:24.050933 containerd[1595]: 2026-03-12 01:38:24.044 [INFO][6009] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:38:24.050933 containerd[1595]: 2026-03-12 01:38:24.047 [INFO][6001] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="15ee98230783031b7d6bbfdb3cd4943d9ba25747d016fac25aa8c52728845e97" Mar 12 01:38:24.051473 containerd[1595]: time="2026-03-12T01:38:24.050925041Z" level=info msg="TearDown network for sandbox \"15ee98230783031b7d6bbfdb3cd4943d9ba25747d016fac25aa8c52728845e97\" successfully" Mar 12 01:38:24.056243 containerd[1595]: time="2026-03-12T01:38:24.056183491Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"15ee98230783031b7d6bbfdb3cd4943d9ba25747d016fac25aa8c52728845e97\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 12 01:38:24.056436 containerd[1595]: time="2026-03-12T01:38:24.056358658Z" level=info msg="RemovePodSandbox \"15ee98230783031b7d6bbfdb3cd4943d9ba25747d016fac25aa8c52728845e97\" returns successfully" Mar 12 01:38:28.608056 systemd[1]: Started sshd@13-10.0.0.145:22-10.0.0.1:54620.service - OpenSSH per-connection server daemon (10.0.0.1:54620). Mar 12 01:38:28.649915 sshd[6039]: Accepted publickey for core from 10.0.0.1 port 54620 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:38:28.652218 sshd[6039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:38:28.658317 systemd-logind[1574]: New session 14 of user core. Mar 12 01:38:28.668132 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 12 01:38:28.807000 sshd[6039]: pam_unix(sshd:session): session closed for user core Mar 12 01:38:28.818027 systemd[1]: Started sshd@14-10.0.0.145:22-10.0.0.1:54626.service - OpenSSH per-connection server daemon (10.0.0.1:54626). Mar 12 01:38:28.819311 systemd[1]: sshd@13-10.0.0.145:22-10.0.0.1:54620.service: Deactivated successfully. Mar 12 01:38:28.825328 systemd[1]: session-14.scope: Deactivated successfully. Mar 12 01:38:28.826523 systemd-logind[1574]: Session 14 logged out. Waiting for processes to exit. Mar 12 01:38:28.828304 systemd-logind[1574]: Removed session 14. Mar 12 01:38:28.883085 sshd[6052]: Accepted publickey for core from 10.0.0.1 port 54626 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:38:28.932127 sshd[6052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:38:28.965846 systemd-logind[1574]: New session 15 of user core. Mar 12 01:38:28.980682 systemd[1]: Started session-15.scope - Session 15 of User core. 
Mar 12 01:38:29.367444 kubelet[2697]: E0312 01:38:29.366538 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:29.481069 sshd[6052]: pam_unix(sshd:session): session closed for user core Mar 12 01:38:29.490084 systemd[1]: Started sshd@15-10.0.0.145:22-10.0.0.1:54632.service - OpenSSH per-connection server daemon (10.0.0.1:54632). Mar 12 01:38:29.490959 systemd[1]: sshd@14-10.0.0.145:22-10.0.0.1:54626.service: Deactivated successfully. Mar 12 01:38:29.493977 systemd-logind[1574]: Session 15 logged out. Waiting for processes to exit. Mar 12 01:38:29.498011 systemd[1]: session-15.scope: Deactivated successfully. Mar 12 01:38:29.500318 systemd-logind[1574]: Removed session 15. Mar 12 01:38:29.541541 sshd[6066]: Accepted publickey for core from 10.0.0.1 port 54632 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:38:29.543203 sshd[6066]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:38:29.549615 systemd-logind[1574]: New session 16 of user core. Mar 12 01:38:29.562197 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 12 01:38:30.218979 sshd[6066]: pam_unix(sshd:session): session closed for user core Mar 12 01:38:30.226525 systemd[1]: Started sshd@16-10.0.0.145:22-10.0.0.1:54634.service - OpenSSH per-connection server daemon (10.0.0.1:54634). Mar 12 01:38:30.241438 systemd[1]: sshd@15-10.0.0.145:22-10.0.0.1:54632.service: Deactivated successfully. Mar 12 01:38:30.246339 systemd[1]: session-16.scope: Deactivated successfully. Mar 12 01:38:30.249009 systemd-logind[1574]: Session 16 logged out. Waiting for processes to exit. Mar 12 01:38:30.250893 systemd-logind[1574]: Removed session 16. Mar 12 01:38:30.294376 sshd[6093]: Accepted publickey for core from 10.0.0.1 port 54634 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:38:30.297130 sshd[6093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:38:30.302700 systemd-logind[1574]: New session 17 of user core. Mar 12 01:38:30.309089 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 12 01:38:30.607910 sshd[6093]: pam_unix(sshd:session): session closed for user core Mar 12 01:38:30.622113 systemd[1]: Started sshd@17-10.0.0.145:22-10.0.0.1:54636.service - OpenSSH per-connection server daemon (10.0.0.1:54636). Mar 12 01:38:30.627058 systemd[1]: sshd@16-10.0.0.145:22-10.0.0.1:54634.service: Deactivated successfully. Mar 12 01:38:30.641296 systemd[1]: session-17.scope: Deactivated successfully. Mar 12 01:38:30.650042 systemd-logind[1574]: Session 17 logged out. Waiting for processes to exit. Mar 12 01:38:30.662773 systemd-logind[1574]: Removed session 17. Mar 12 01:38:30.700535 sshd[6110]: Accepted publickey for core from 10.0.0.1 port 54636 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8 Mar 12 01:38:30.702421 sshd[6110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:38:30.708215 systemd-logind[1574]: New session 18 of user core. Mar 12 01:38:30.715093 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 12 01:38:30.871957 sshd[6110]: pam_unix(sshd:session): session closed for user core Mar 12 01:38:30.876674 systemd[1]: sshd@17-10.0.0.145:22-10.0.0.1:54636.service: Deactivated successfully. Mar 12 01:38:30.879875 systemd-logind[1574]: Session 18 logged out. Waiting for processes to exit. 
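Editor's note on the recurring kubelet dns.go "Nameserver limits exceeded" warning above and below: when composing a pod's resolv.conf, kubelet honors at most three nameservers; any additional entries in the node's resolv.conf are omitted and the first three (here 1.1.1.1, 1.0.0.1 and 8.8.8.8) are applied, which is exactly what the warning reports. A minimal illustration of that trimming behavior follows; this is not kubelet's source, and the default path is the conventional one and may differ on this Flatcar host.

# Illustration only: keep at most `limit` nameservers, mirroring the
# "Nameserver limits exceeded" trimming logged by kubelet above.
def applied_nameservers(resolv_conf_path="/etc/resolv.conf", limit=3):
    """Return the nameserver IPs that would survive kubelet's three-server limit."""
    servers = []
    with open(resolv_conf_path) as f:
        for line in f:
            fields = line.split()
            if len(fields) >= 2 and fields[0] == "nameserver":
                servers.append(fields[1])
    return servers[:limit]

# e.g. applied_nameservers() -> ["1.1.1.1", "1.0.0.1", "8.8.8.8"] on this node,
# with any further nameserver entries dropped.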
Mar 12 01:38:30.880189 systemd[1]: session-18.scope: Deactivated successfully.
Mar 12 01:38:30.882395 systemd-logind[1574]: Removed session 18.
Mar 12 01:38:35.890184 systemd[1]: Started sshd@18-10.0.0.145:22-10.0.0.1:34304.service - OpenSSH per-connection server daemon (10.0.0.1:34304).
Mar 12 01:38:35.953062 sshd[6193]: Accepted publickey for core from 10.0.0.1 port 34304 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8
Mar 12 01:38:35.956559 sshd[6193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:38:35.962674 systemd-logind[1574]: New session 19 of user core.
Mar 12 01:38:35.972265 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 12 01:38:36.148171 sshd[6193]: pam_unix(sshd:session): session closed for user core
Mar 12 01:38:36.153755 systemd[1]: sshd@18-10.0.0.145:22-10.0.0.1:34304.service: Deactivated successfully.
Mar 12 01:38:36.156117 systemd-logind[1574]: Session 19 logged out. Waiting for processes to exit.
Mar 12 01:38:36.156173 systemd[1]: session-19.scope: Deactivated successfully.
Mar 12 01:38:36.157961 systemd-logind[1574]: Removed session 19.
Mar 12 01:38:36.369903 kubelet[2697]: E0312 01:38:36.369825 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:38:41.157948 systemd[1]: Started sshd@19-10.0.0.145:22-10.0.0.1:34320.service - OpenSSH per-connection server daemon (10.0.0.1:34320).
Mar 12 01:38:41.210700 sshd[6228]: Accepted publickey for core from 10.0.0.1 port 34320 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8
Mar 12 01:38:41.213126 sshd[6228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:38:41.218203 systemd-logind[1574]: New session 20 of user core.
Mar 12 01:38:41.232175 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 12 01:38:41.549371 sshd[6228]: pam_unix(sshd:session): session closed for user core
Mar 12 01:38:41.554103 systemd[1]: sshd@19-10.0.0.145:22-10.0.0.1:34320.service: Deactivated successfully.
Mar 12 01:38:41.557443 systemd-logind[1574]: Session 20 logged out. Waiting for processes to exit.
Mar 12 01:38:41.557453 systemd[1]: session-20.scope: Deactivated successfully.
Mar 12 01:38:41.559997 systemd-logind[1574]: Removed session 20.
Mar 12 01:38:45.366437 kubelet[2697]: E0312 01:38:45.366365 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:38:46.565900 systemd[1]: Started sshd@20-10.0.0.145:22-10.0.0.1:51446.service - OpenSSH per-connection server daemon (10.0.0.1:51446).
Mar 12 01:38:46.600880 sshd[6243]: Accepted publickey for core from 10.0.0.1 port 51446 ssh2: RSA SHA256:wDPRTEstPbTKCdTT2qUtCtXrMhNyTtgk56EQ+eNGih8
Mar 12 01:38:46.602539 sshd[6243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:38:46.607107 systemd-logind[1574]: New session 21 of user core.
Mar 12 01:38:46.614987 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 12 01:38:46.746210 sshd[6243]: pam_unix(sshd:session): session closed for user core
Mar 12 01:38:46.750330 systemd[1]: sshd@20-10.0.0.145:22-10.0.0.1:51446.service: Deactivated successfully.
Mar 12 01:38:46.753010 systemd[1]: session-21.scope: Deactivated successfully.
Mar 12 01:38:46.753085 systemd-logind[1574]: Session 21 logged out. Waiting for processes to exit.
Mar 12 01:38:46.754574 systemd-logind[1574]: Removed session 21.
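Every SSH connection in this journal follows the same lifecycle: sshd accepts the public key, pam_unix opens the session, systemd-logind allocates session N, systemd starts session-N.scope, and the same actors unwind it in reverse on logout. The Python sketch below is illustrative only (it is not part of systemd; the syslog-style timestamps carry no year, so 2026 is assumed from the boot banner). It pairs the logind "New session" / "Removed session" lines from a dump like this and reports how long each session lasted:

# Illustrative: compute SSH session durations from systemd-logind journal lines.
import re
import sys
from datetime import datetime

NEW = re.compile(r'^(\w{3} +\d+ [\d:.]+) .*systemd-logind\[\d+\]: New session (\d+) of user (\S+)\.')
REMOVED = re.compile(r'^(\w{3} +\d+ [\d:.]+) .*systemd-logind\[\d+\]: Removed session (\d+)\.')

def parse_ts(stamp, year=2026):
    # Syslog timestamps omit the year; 2026 is assumed to match this boot.
    return datetime.strptime(f"{year} {stamp}", "%Y %b %d %H:%M:%S.%f")

def session_durations(lines):
    opened = {}
    for line in lines:
        if m := NEW.match(line):
            opened[m.group(2)] = (parse_ts(m.group(1)), m.group(3))
        elif (m := REMOVED.match(line)) and m.group(2) in opened:
            start, user = opened.pop(m.group(2))
            yield m.group(2), user, (parse_ts(m.group(1)) - start).total_seconds()

if __name__ == "__main__":
    for sid, user, secs in session_durations(sys.stdin):
        print(f"session {sid} ({user}): {secs:.1f}s")

Applied to sessions 14 through 21 above, each session for user core lasts well under a second, consistent with short scripted logins rather than interactive shells.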