Apr 16 01:19:04.821141 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Apr 15 22:45:03 -00 2026
Apr 16 01:19:04.821164 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=27643dbc59f658eac8bb37add3a8b4ed010a3c31134319f01549aa493a1f070c
Apr 16 01:19:04.821174 kernel: BIOS-provided physical RAM map:
Apr 16 01:19:04.821179 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 16 01:19:04.821185 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Apr 16 01:19:04.821190 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Apr 16 01:19:04.821196 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Apr 16 01:19:04.821202 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Apr 16 01:19:04.821207 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Apr 16 01:19:04.821212 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Apr 16 01:19:04.821219 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Apr 16 01:19:04.821223 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Apr 16 01:19:04.821330 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Apr 16 01:19:04.821335 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Apr 16 01:19:04.821436 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Apr 16 01:19:04.821443 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Apr 16 01:19:04.821449 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Apr 16 01:19:04.821454 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Apr 16 01:19:04.821459 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Apr 16 01:19:04.821463 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 16 01:19:04.821468 kernel: NX (Execute Disable) protection: active
Apr 16 01:19:04.821472 kernel: APIC: Static calls initialized
Apr 16 01:19:04.821477 kernel: efi: EFI v2.7 by EDK II
Apr 16 01:19:04.821482 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198
Apr 16 01:19:04.821487 kernel: SMBIOS 2.8 present.
Apr 16 01:19:04.821491 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Apr 16 01:19:04.821496 kernel: Hypervisor detected: KVM
Apr 16 01:19:04.821502 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 16 01:19:04.821507 kernel: kvm-clock: using sched offset of 31857656328 cycles
Apr 16 01:19:04.821512 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 16 01:19:04.821517 kernel: tsc: Detected 2793.438 MHz processor
Apr 16 01:19:04.821522 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 16 01:19:04.821527 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 16 01:19:04.821532 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x10000000000
Apr 16 01:19:04.821537 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 16 01:19:04.821542 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 16 01:19:04.821549 kernel: Using GB pages for direct mapping
Apr 16 01:19:04.821554 kernel: Secure boot disabled
Apr 16 01:19:04.821558 kernel: ACPI: Early table checksum verification disabled
Apr 16 01:19:04.821564 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Apr 16 01:19:04.821571 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Apr 16 01:19:04.821576 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 01:19:04.821581 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 01:19:04.821588 kernel: ACPI: FACS 0x000000009CBDD000 000040
Apr 16 01:19:04.821875 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 01:19:04.821881 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 01:19:04.821886 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 01:19:04.821891 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 01:19:04.821896 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 16 01:19:04.821901 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Apr 16 01:19:04.821909 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Apr 16 01:19:04.821914 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Apr 16 01:19:04.821919 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Apr 16 01:19:04.821924 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Apr 16 01:19:04.821929 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Apr 16 01:19:04.821934 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Apr 16 01:19:04.821939 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Apr 16 01:19:04.821944 kernel: No NUMA configuration found
Apr 16 01:19:04.822141 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Apr 16 01:19:04.822150 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Apr 16 01:19:04.822155 kernel: Zone ranges:
Apr 16 01:19:04.822160 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 16 01:19:04.822165 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Apr 16 01:19:04.822170 kernel: Normal empty
Apr 16 01:19:04.822175 kernel: Movable zone start for each node
Apr 16 01:19:04.822180 kernel: Early memory node ranges
Apr 16 01:19:04.822185 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 16 01:19:04.822190 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Apr 16 01:19:04.822196 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Apr 16 01:19:04.822201 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Apr 16 01:19:04.822206 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Apr 16 01:19:04.822211 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Apr 16 01:19:04.822312 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Apr 16 01:19:04.822318 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 16 01:19:04.822323 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 16 01:19:04.822328 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Apr 16 01:19:04.822333 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 16 01:19:04.822338 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Apr 16 01:19:04.822345 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Apr 16 01:19:04.822350 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Apr 16 01:19:04.822355 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 16 01:19:04.822360 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 16 01:19:04.822365 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 16 01:19:04.822370 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 16 01:19:04.822375 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 16 01:19:04.822381 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 16 01:19:04.822386 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 16 01:19:04.822392 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 16 01:19:04.822397 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 16 01:19:04.822402 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 16 01:19:04.822407 kernel: TSC deadline timer available
Apr 16 01:19:04.822412 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Apr 16 01:19:04.822417 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 16 01:19:04.822422 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 16 01:19:04.822427 kernel: kvm-guest: setup PV sched yield
Apr 16 01:19:04.822432 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Apr 16 01:19:04.822439 kernel: Booting paravirtualized kernel on KVM
Apr 16 01:19:04.822444 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 16 01:19:04.822449 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 16 01:19:04.822454 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Apr 16 01:19:04.822460 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Apr 16 01:19:04.822464 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 16 01:19:04.822469 kernel: kvm-guest: PV spinlocks enabled
Apr 16 01:19:04.822474 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 16 01:19:04.822480 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=27643dbc59f658eac8bb37add3a8b4ed010a3c31134319f01549aa493a1f070c
Apr 16 01:19:04.822581 kernel: random: crng init done
Apr 16 01:19:04.822587 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 16 01:19:04.822592 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 16 01:19:04.822597 kernel: Fallback order for Node 0: 0
Apr 16 01:19:04.822603 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Apr 16 01:19:04.822608 kernel: Policy zone: DMA32
Apr 16 01:19:04.822613 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 16 01:19:04.822618 kernel: Memory: 2399660K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 167136K reserved, 0K cma-reserved)
Apr 16 01:19:04.822626 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 16 01:19:04.822631 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 16 01:19:04.822636 kernel: ftrace: allocated 149 pages with 4 groups
Apr 16 01:19:04.822641 kernel: Dynamic Preempt: voluntary
Apr 16 01:19:04.822646 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 16 01:19:04.822658 kernel: rcu: RCU event tracing is enabled.
Apr 16 01:19:04.822665 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 16 01:19:04.822862 kernel: Trampoline variant of Tasks RCU enabled.
Apr 16 01:19:04.822868 kernel: Rude variant of Tasks RCU enabled.
Apr 16 01:19:04.822873 kernel: Tracing variant of Tasks RCU enabled.
Apr 16 01:19:04.822879 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 16 01:19:04.822885 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 16 01:19:04.822893 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 16 01:19:04.822898 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 16 01:19:04.822904 kernel: Console: colour dummy device 80x25
Apr 16 01:19:04.822910 kernel: printk: console [ttyS0] enabled
Apr 16 01:19:04.823009 kernel: ACPI: Core revision 20230628
Apr 16 01:19:04.823018 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 16 01:19:04.823123 kernel: APIC: Switch to symmetric I/O mode setup
Apr 16 01:19:04.823129 kernel: x2apic enabled
Apr 16 01:19:04.823134 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 16 01:19:04.823140 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 16 01:19:04.823146 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 16 01:19:04.823151 kernel: kvm-guest: setup PV IPIs
Apr 16 01:19:04.823157 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 16 01:19:04.823162 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 16 01:19:04.823170 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 16 01:19:04.823176 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 16 01:19:04.823181 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 16 01:19:04.823187 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 16 01:19:04.823195 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 16 01:19:04.823201 kernel: Spectre V2 : Mitigation: Retpolines
Apr 16 01:19:04.823207 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 16 01:19:04.823212 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 16 01:19:04.823219 kernel: RETBleed: Vulnerable
Apr 16 01:19:04.823225 kernel: Speculative Store Bypass: Vulnerable
Apr 16 01:19:04.823231 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 16 01:19:04.823236 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 16 01:19:04.823337 kernel: active return thunk: its_return_thunk
Apr 16 01:19:04.823343 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 16 01:19:04.823349 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 16 01:19:04.823355 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 16 01:19:04.823360 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 16 01:19:04.823368 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 16 01:19:04.823373 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 16 01:19:04.823379 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 16 01:19:04.823384 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 16 01:19:04.823390 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 16 01:19:04.823395 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 16 01:19:04.823401 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 16 01:19:04.823406 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 16 01:19:04.823412 kernel: Freeing SMP alternatives memory: 32K
Apr 16 01:19:04.823419 kernel: pid_max: default: 32768 minimum: 301
Apr 16 01:19:04.823425 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 16 01:19:04.823430 kernel: landlock: Up and running.
Apr 16 01:19:04.823436 kernel: SELinux: Initializing.
Apr 16 01:19:04.823441 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 16 01:19:04.823447 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 16 01:19:04.823453 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 16 01:19:04.823458 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 16 01:19:04.823464 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 16 01:19:04.823471 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 16 01:19:04.823477 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 16 01:19:04.823482 kernel: signal: max sigframe size: 3632
Apr 16 01:19:04.823488 kernel: rcu: Hierarchical SRCU implementation.
Apr 16 01:19:04.823493 kernel: rcu: Max phase no-delay instances is 400.
Apr 16 01:19:04.823499 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 16 01:19:04.823504 kernel: smp: Bringing up secondary CPUs ...
Apr 16 01:19:04.823510 kernel: smpboot: x86: Booting SMP configuration:
Apr 16 01:19:04.823516 kernel: .... node #0, CPUs: #1 #2 #3
Apr 16 01:19:04.823523 kernel: smp: Brought up 1 node, 4 CPUs
Apr 16 01:19:04.823529 kernel: smpboot: Max logical packages: 1
Apr 16 01:19:04.823534 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 16 01:19:04.823540 kernel: devtmpfs: initialized
Apr 16 01:19:04.823545 kernel: x86/mm: Memory block size: 128MB
Apr 16 01:19:04.823551 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Apr 16 01:19:04.823556 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Apr 16 01:19:04.823562 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Apr 16 01:19:04.823567 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Apr 16 01:19:04.823575 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Apr 16 01:19:04.823580 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 16 01:19:04.823586 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 16 01:19:04.823591 kernel: pinctrl core: initialized pinctrl subsystem
Apr 16 01:19:04.823597 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 16 01:19:04.823602 kernel: audit: initializing netlink subsys (disabled)
Apr 16 01:19:04.823608 kernel: audit: type=2000 audit(1776302327.331:1): state=initialized audit_enabled=0 res=1
Apr 16 01:19:04.823613 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 16 01:19:04.823619 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 16 01:19:04.823626 kernel: cpuidle: using governor menu
Apr 16 01:19:04.823631 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 16 01:19:04.823637 kernel: dca service started, version 1.12.1
Apr 16 01:19:04.823642 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Apr 16 01:19:04.823648 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 16 01:19:04.823654 kernel: PCI: Using configuration type 1 for base access
Apr 16 01:19:04.823659 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 16 01:19:04.823665 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 16 01:19:04.823860 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 16 01:19:04.823868 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 16 01:19:04.823874 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 16 01:19:04.823879 kernel: ACPI: Added _OSI(Module Device)
Apr 16 01:19:04.823885 kernel: ACPI: Added _OSI(Processor Device)
Apr 16 01:19:04.823890 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 16 01:19:04.823896 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 16 01:19:04.823901 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 16 01:19:04.823907 kernel: ACPI: Interpreter enabled
Apr 16 01:19:04.823912 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 16 01:19:04.823919 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 16 01:19:04.823925 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 16 01:19:04.823930 kernel: PCI: Using E820 reservations for host bridge windows
Apr 16 01:19:04.823936 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 16 01:19:04.823942 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 16 01:19:04.825196 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 16 01:19:04.825270 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 16 01:19:04.825330 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 16 01:19:04.825338 kernel: PCI host bridge to bus 0000:00
Apr 16 01:19:04.825906 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 16 01:19:04.825965 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 16 01:19:04.826146 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 16 01:19:04.826204 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Apr 16 01:19:04.826254 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 16 01:19:04.826308 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Apr 16 01:19:04.826357 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 16 01:19:04.826929 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 16 01:19:04.827398 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Apr 16 01:19:04.827461 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Apr 16 01:19:04.827517 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Apr 16 01:19:04.827573 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Apr 16 01:19:04.827633 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Apr 16 01:19:04.827911 kernel: pci 0000:00:01.0: efifb_fixup_resources+0x0/0x140 took 10742 usecs
Apr 16 01:19:04.828380 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 16 01:19:04.828510 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x110 took 17578 usecs
Apr 16 01:19:04.829202 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Apr 16 01:19:04.829267 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Apr 16 01:19:04.829328 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Apr 16 01:19:04.829389 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Apr 16 01:19:04.830144 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Apr 16 01:19:04.830208 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Apr 16 01:19:04.830264 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Apr 16 01:19:04.830319 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Apr 16 01:19:04.830882 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Apr 16 01:19:04.830950 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Apr 16 01:19:04.831223 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Apr 16 01:19:04.831289 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Apr 16 01:19:04.831346 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Apr 16 01:19:04.831522 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 16 01:19:04.831580 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 16 01:19:04.831634 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0x180 took 19531 usecs
Apr 16 01:19:04.832414 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 16 01:19:04.832477 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Apr 16 01:19:04.832532 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Apr 16 01:19:04.833412 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 16 01:19:04.833473 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Apr 16 01:19:04.833481 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 16 01:19:04.833486 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 16 01:19:04.833492 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 16 01:19:04.833501 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 16 01:19:04.833507 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 16 01:19:04.833512 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 16 01:19:04.833518 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 16 01:19:04.833524 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 16 01:19:04.833530 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 16 01:19:04.833536 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 16 01:19:04.833541 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 16 01:19:04.833547 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 16 01:19:04.833555 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 16 01:19:04.833560 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 16 01:19:04.833566 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 16 01:19:04.833571 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 16 01:19:04.833577 kernel: iommu: Default domain type: Translated
Apr 16 01:19:04.833583 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 16 01:19:04.833589 kernel: efivars: Registered efivars operations
Apr 16 01:19:04.833595 kernel: PCI: Using ACPI for IRQ routing
Apr 16 01:19:04.833600 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 16 01:19:04.833608 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Apr 16 01:19:04.833614 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Apr 16 01:19:04.833619 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Apr 16 01:19:04.833625 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Apr 16 01:19:04.833983 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 16 01:19:04.834260 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 16 01:19:04.834325 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 16 01:19:04.834332 kernel: vgaarb: loaded
Apr 16 01:19:04.834338 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 16 01:19:04.834347 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 16 01:19:04.834353 kernel: clocksource: Switched to clocksource kvm-clock
Apr 16 01:19:04.834358 kernel: VFS: Disk quotas dquot_6.6.0
Apr 16 01:19:04.834364 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 16 01:19:04.834370 kernel: pnp: PnP ACPI init
Apr 16 01:19:04.835397 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 16 01:19:04.835409 kernel: pnp: PnP ACPI: found 6 devices
Apr 16 01:19:04.835415 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 16 01:19:04.835426 kernel: NET: Registered PF_INET protocol family
Apr 16 01:19:04.835433 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 16 01:19:04.835440 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 16 01:19:04.835446 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 16 01:19:04.835453 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 16 01:19:04.835460 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 16 01:19:04.835467 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 16 01:19:04.835473 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 16 01:19:04.835481 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 16 01:19:04.835488 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 16 01:19:04.835494 kernel: NET: Registered PF_XDP protocol family
Apr 16 01:19:04.835556 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Apr 16 01:19:04.835613 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Apr 16 01:19:04.835668 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 16 01:19:04.835942 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 16 01:19:04.835992 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 16 01:19:04.836158 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Apr 16 01:19:04.836213 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 16 01:19:04.836263 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Apr 16 01:19:04.836271 kernel: PCI: CLS 0 bytes, default 64
Apr 16 01:19:04.836277 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 16 01:19:04.836283 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 16 01:19:04.836289 kernel: Initialise system trusted keyrings
Apr 16 01:19:04.836294 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 16 01:19:04.836300 kernel: Key type asymmetric registered
Apr 16 01:19:04.836308 kernel: Asymmetric key parser 'x509' registered
Apr 16 01:19:04.836314 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 16 01:19:04.836320 kernel: io scheduler mq-deadline registered
Apr 16 01:19:04.836326 kernel: io scheduler kyber registered
Apr 16 01:19:04.836331 kernel: io scheduler bfq registered
Apr 16 01:19:04.836337 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 16 01:19:04.836343 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 16 01:19:04.836349 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 16 01:19:04.836355 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 16 01:19:04.836362 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 16 01:19:04.836368 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 16 01:19:04.836374 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 16 01:19:04.836379 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 16 01:19:04.836385 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 16 01:19:04.837141 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 16 01:19:04.837152 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 16 01:19:04.837209 kernel: rtc_cmos 00:04: registered as rtc0
Apr 16 01:19:04.837266 kernel: rtc_cmos 00:04: setting system clock to 2026-04-16T01:19:02 UTC (1776302342)
Apr 16 01:19:04.837318 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Apr 16 01:19:04.837325 kernel: intel_pstate: CPU model not supported
Apr 16 01:19:04.837331 kernel: efifb: probing for efifb
Apr 16 01:19:04.837337 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Apr 16 01:19:04.837342 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Apr 16 01:19:04.837348 kernel: efifb: scrolling: redraw
Apr 16 01:19:04.837366 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Apr 16 01:19:04.837373 kernel: Console: switching to colour frame buffer device 100x37
Apr 16 01:19:04.837380 kernel: fb0: EFI VGA frame buffer device
Apr 16 01:19:04.837386 kernel: pstore: Using crash dump compression: deflate
Apr 16 01:19:04.837392 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 16 01:19:04.837398 kernel: NET: Registered PF_INET6 protocol family
Apr 16 01:19:04.837403 kernel: Segment Routing with IPv6
Apr 16 01:19:04.837409 kernel: In-situ OAM (IOAM) with IPv6
Apr 16 01:19:04.837415 kernel: NET: Registered PF_PACKET protocol family
Apr 16 01:19:04.837421 kernel: Key type dns_resolver registered
Apr 16 01:19:04.837427 kernel: IPI shorthand broadcast: enabled
Apr 16 01:19:04.837434 kernel: sched_clock: Marking stable (13092114779, 3318034826)->(18573335671, -2163186066)
Apr 16 01:19:04.837440 kernel: registered taskstats version 1
Apr 16 01:19:04.837445 kernel: Loading compiled-in X.509 certificates
Apr 16 01:19:04.837451 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 6e6d886174c86dc730e1b14e46a1dab518d9b090'
Apr 16 01:19:04.837457 kernel: Key type .fscrypt registered
Apr 16 01:19:04.837463 kernel: Key type fscrypt-provisioning registered
Apr 16 01:19:04.837469 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 16 01:19:04.837474 kernel: ima: Allocated hash algorithm: sha1
Apr 16 01:19:04.837480 kernel: ima: No architecture policies found
Apr 16 01:19:04.837488 kernel: clk: Disabling unused clocks
Apr 16 01:19:04.837494 kernel: Freeing unused kernel image (initmem) memory: 42896K
Apr 16 01:19:04.837500 kernel: Write protecting the kernel read-only data: 36864k
Apr 16 01:19:04.837506 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 16 01:19:04.837513 kernel: Run /init as init process
Apr 16 01:19:04.837519 kernel: with arguments:
Apr 16 01:19:04.837526 kernel: /init
Apr 16 01:19:04.837532 kernel: with environment:
Apr 16 01:19:04.837538 kernel: HOME=/
Apr 16 01:19:04.837544 kernel: TERM=linux
Apr 16 01:19:04.837553 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 16 01:19:04.837561 systemd[1]: Detected virtualization kvm.
Apr 16 01:19:04.837567 systemd[1]: Detected architecture x86-64.
Apr 16 01:19:04.837575 systemd[1]: Running in initrd.
Apr 16 01:19:04.837581 systemd[1]: No hostname configured, using default hostname.
Apr 16 01:19:04.837587 systemd[1]: Hostname set to .
Apr 16 01:19:04.837593 systemd[1]: Initializing machine ID from VM UUID.
Apr 16 01:19:04.837599 systemd[1]: Queued start job for default target initrd.target.
Apr 16 01:19:04.837605 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 16 01:19:04.837611 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 16 01:19:04.837618 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 16 01:19:04.837626 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 16 01:19:04.837632 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 16 01:19:04.837639 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 16 01:19:04.837646 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 16 01:19:04.837652 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 16 01:19:04.837658 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 16 01:19:04.837665 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 16 01:19:04.837894 systemd[1]: Reached target paths.target - Path Units.
Apr 16 01:19:04.837901 systemd[1]: Reached target slices.target - Slice Units.
Apr 16 01:19:04.837907 systemd[1]: Reached target swap.target - Swaps.
Apr 16 01:19:04.837913 systemd[1]: Reached target timers.target - Timer Units.
Apr 16 01:19:04.837919 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 16 01:19:04.837925 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 16 01:19:04.837932 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 16 01:19:04.837938 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 16 01:19:04.837944 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 16 01:19:04.837952 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 16 01:19:04.837959 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 16 01:19:04.837965 systemd[1]: Reached target sockets.target - Socket Units.
Apr 16 01:19:04.837971 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 16 01:19:04.837977 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 16 01:19:04.837983 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 16 01:19:04.837989 systemd[1]: Starting systemd-fsck-usr.service... Apr 16 01:19:04.837995 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 16 01:19:04.838003 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 16 01:19:04.838129 systemd-journald[194]: Collecting audit messages is disabled. Apr 16 01:19:04.838147 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 16 01:19:04.838155 systemd-journald[194]: Journal started Apr 16 01:19:04.838173 systemd-journald[194]: Runtime Journal (/run/log/journal/46cf63f10f8a4ca4855712de89d8736b) is 6.0M, max 48.3M, 42.2M free. Apr 16 01:19:04.878622 systemd[1]: Started systemd-journald.service - Journal Service. Apr 16 01:19:04.894195 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 16 01:19:04.931022 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 16 01:19:04.934329 systemd-modules-load[195]: Inserted module 'overlay' Apr 16 01:19:04.968001 systemd[1]: Finished systemd-fsck-usr.service. Apr 16 01:19:05.010020 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 16 01:19:05.037573 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 16 01:19:05.081607 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 16 01:19:05.187969 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Apr 16 01:19:05.190168 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 16 01:19:05.207583 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 16 01:19:05.208246 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 16 01:19:05.234205 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 16 01:19:05.299991 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 16 01:19:05.336400 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 16 01:19:05.366518 kernel: Bridge firewalling registered Apr 16 01:19:05.367587 systemd-modules-load[195]: Inserted module 'br_netfilter' Apr 16 01:19:05.381324 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 16 01:19:05.399163 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 16 01:19:05.446650 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 16 01:19:05.478165 dracut-cmdline[223]: dracut-dracut-053 Apr 16 01:19:05.478165 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=27643dbc59f658eac8bb37add3a8b4ed010a3c31134319f01549aa493a1f070c Apr 16 01:19:05.510128 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 16 01:19:05.589964 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 16 01:19:05.654264 kernel: SCSI subsystem initialized Apr 16 01:19:05.696950 kernel: Loading iSCSI transport class v2.0-870. 
Apr 16 01:19:05.700637 systemd-resolved[294]: Positive Trust Anchors: Apr 16 01:19:05.701419 systemd-resolved[294]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 16 01:19:05.701448 systemd-resolved[294]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 16 01:19:05.706437 systemd-resolved[294]: Defaulting to hostname 'linux'. Apr 16 01:19:05.807214 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 16 01:19:05.824372 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 16 01:19:05.907206 kernel: iscsi: registered transport (tcp) Apr 16 01:19:05.958186 kernel: iscsi: registered transport (qla4xxx) Apr 16 01:19:05.958562 kernel: QLogic iSCSI HBA Driver Apr 16 01:19:06.088598 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 16 01:19:06.125453 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 16 01:19:06.222522 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Apr 16 01:19:06.223015 kernel: device-mapper: uevent: version 1.0.3 Apr 16 01:19:06.238663 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 16 01:19:06.351022 kernel: raid6: avx512x4 gen() 30742 MB/s Apr 16 01:19:06.375237 kernel: raid6: avx512x2 gen() 27123 MB/s Apr 16 01:19:06.399243 kernel: raid6: avx512x1 gen() 23825 MB/s Apr 16 01:19:06.423229 kernel: raid6: avx2x4 gen() 24632 MB/s Apr 16 01:19:06.447247 kernel: raid6: avx2x2 gen() 23389 MB/s Apr 16 01:19:06.480205 kernel: raid6: avx2x1 gen() 16636 MB/s Apr 16 01:19:06.480484 kernel: raid6: using algorithm avx512x4 gen() 30742 MB/s Apr 16 01:19:06.514166 kernel: raid6: .... xor() 6558 MB/s, rmw enabled Apr 16 01:19:06.514429 kernel: raid6: using avx512x2 recovery algorithm Apr 16 01:19:06.559346 kernel: xor: automatically using best checksumming function avx Apr 16 01:19:07.042434 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 16 01:19:07.075635 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 16 01:19:07.111233 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 16 01:19:07.137947 systemd-udevd[415]: Using default interface naming scheme 'v255'. Apr 16 01:19:07.155577 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 16 01:19:07.202275 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 16 01:19:07.275148 dracut-pre-trigger[428]: rd.md=0: removing MD RAID activation Apr 16 01:19:07.360551 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 16 01:19:07.396530 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 16 01:19:07.540270 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 16 01:19:07.579575 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Apr 16 01:19:07.612609 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 16 01:19:07.627319 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 16 01:19:07.674427 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 16 01:19:07.686949 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 16 01:19:07.740538 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 16 01:19:07.797246 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 16 01:19:07.838255 kernel: cryptd: max_cpu_qlen set to 1000 Apr 16 01:19:07.838276 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Apr 16 01:19:07.854454 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 16 01:19:07.854916 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 16 01:19:07.939608 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Apr 16 01:19:07.940186 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 16 01:19:07.940196 kernel: GPT:9289727 != 19775487 Apr 16 01:19:07.940204 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 16 01:19:07.940218 kernel: GPT:9289727 != 19775487 Apr 16 01:19:07.940225 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 16 01:19:07.940232 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 16 01:19:07.990428 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 16 01:19:07.997167 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 16 01:19:07.997503 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 16 01:19:08.021009 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 16 01:19:08.158257 kernel: AVX2 version of gcm_enc/dec engaged. 
Apr 16 01:19:08.158311 kernel: libata version 3.00 loaded. Apr 16 01:19:08.168247 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 16 01:19:08.216424 kernel: AES CTR mode by8 optimization enabled Apr 16 01:19:08.218965 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 16 01:19:08.221911 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 16 01:19:08.267301 kernel: ahci 0000:00:1f.2: version 3.0 Apr 16 01:19:08.267511 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 16 01:19:08.281393 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 16 01:19:08.314988 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Apr 16 01:19:08.315453 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 16 01:19:08.340002 kernel: scsi host0: ahci Apr 16 01:19:08.355180 kernel: scsi host1: ahci Apr 16 01:19:08.355374 kernel: scsi host2: ahci Apr 16 01:19:08.394037 kernel: scsi host3: ahci Apr 16 01:19:08.394356 kernel: BTRFS: device fsid 936fcbd8-a8ab-4e87-b115-d77c7a08e984 devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (471) Apr 16 01:19:08.394367 kernel: scsi host4: ahci Apr 16 01:19:08.393188 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. 
Apr 16 01:19:08.534587 kernel: scsi host5: ahci Apr 16 01:19:08.536208 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (467) Apr 16 01:19:08.536247 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Apr 16 01:19:08.536257 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Apr 16 01:19:08.536265 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Apr 16 01:19:08.536276 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Apr 16 01:19:08.536289 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Apr 16 01:19:08.536303 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Apr 16 01:19:08.401965 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Apr 16 01:19:08.536456 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 16 01:19:08.560184 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 16 01:19:08.593390 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Apr 16 01:19:08.624322 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Apr 16 01:19:08.662905 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 16 01:19:08.713611 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 16 01:19:08.777587 kernel: ata1: SATA link down (SStatus 0 SControl 300) Apr 16 01:19:08.788666 disk-uuid[568]: Primary Header is updated. Apr 16 01:19:08.788666 disk-uuid[568]: Secondary Entries is updated. Apr 16 01:19:08.788666 disk-uuid[568]: Secondary Header is updated. 
Apr 16 01:19:08.955895 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 16 01:19:08.955947 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 16 01:19:08.955956 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 16 01:19:08.955963 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 16 01:19:08.955970 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 16 01:19:08.955978 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 16 01:19:08.955992 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Apr 16 01:19:08.955999 kernel: ata3.00: applying bridge limits Apr 16 01:19:08.956006 kernel: ata3.00: configured for UDMA/100 Apr 16 01:19:08.956013 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 16 01:19:08.956020 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 16 01:19:08.956208 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 16 01:19:08.828234 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 16 01:19:09.105306 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Apr 16 01:19:09.105528 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 16 01:19:09.135367 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Apr 16 01:19:09.883838 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 16 01:19:09.885359 disk-uuid[571]: The operation has completed successfully. Apr 16 01:19:09.934195 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 16 01:19:09.934426 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 16 01:19:09.967519 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 16 01:19:10.002233 sh[602]: Success Apr 16 01:19:10.029934 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Apr 16 01:19:10.105240 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. 
Apr 16 01:19:10.134116 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 16 01:19:10.138605 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 16 01:19:10.172872 kernel: BTRFS info (device dm-0): first mount of filesystem 936fcbd8-a8ab-4e87-b115-d77c7a08e984 Apr 16 01:19:10.173032 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 16 01:19:10.173046 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 16 01:19:10.178203 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 16 01:19:10.185152 kernel: BTRFS info (device dm-0): using free space tree Apr 16 01:19:10.199886 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 16 01:19:10.202141 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 16 01:19:10.239018 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 16 01:19:10.244353 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 16 01:19:10.296112 kernel: BTRFS info (device vda6): first mount of filesystem 90718864-f2fc-45a7-9234-85fc9574bf9c Apr 16 01:19:10.296288 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 16 01:19:10.296301 kernel: BTRFS info (device vda6): using free space tree Apr 16 01:19:10.312819 kernel: BTRFS info (device vda6): auto enabling async discard Apr 16 01:19:10.331546 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 16 01:19:10.341641 kernel: BTRFS info (device vda6): last unmount of filesystem 90718864-f2fc-45a7-9234-85fc9574bf9c Apr 16 01:19:10.351110 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 16 01:19:10.368022 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Apr 16 01:19:10.555011 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 16 01:19:10.576601 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 16 01:19:10.578244 ignition[714]: Ignition 2.19.0 Apr 16 01:19:10.578249 ignition[714]: Stage: fetch-offline Apr 16 01:19:10.578487 ignition[714]: no configs at "/usr/lib/ignition/base.d" Apr 16 01:19:10.578494 ignition[714]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 16 01:19:10.578809 ignition[714]: parsed url from cmdline: "" Apr 16 01:19:10.578812 ignition[714]: no config URL provided Apr 16 01:19:10.578816 ignition[714]: reading system config file "/usr/lib/ignition/user.ign" Apr 16 01:19:10.578821 ignition[714]: no config at "/usr/lib/ignition/user.ign" Apr 16 01:19:10.609401 systemd-networkd[789]: lo: Link UP Apr 16 01:19:10.578902 ignition[714]: op(1): [started] loading QEMU firmware config module Apr 16 01:19:10.609404 systemd-networkd[789]: lo: Gained carrier Apr 16 01:19:10.578906 ignition[714]: op(1): executing: "modprobe" "qemu_fw_cfg" Apr 16 01:19:10.611497 systemd-networkd[789]: Enumeration completed Apr 16 01:19:10.623436 ignition[714]: op(1): [finished] loading QEMU firmware config module Apr 16 01:19:10.611791 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 16 01:19:10.613987 systemd-networkd[789]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 16 01:19:10.613990 systemd-networkd[789]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 16 01:19:10.620645 systemd-networkd[789]: eth0: Link UP Apr 16 01:19:10.620649 systemd-networkd[789]: eth0: Gained carrier Apr 16 01:19:10.620655 systemd-networkd[789]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Apr 16 01:19:10.626034 systemd[1]: Reached target network.target - Network. Apr 16 01:19:10.674167 systemd-networkd[789]: eth0: DHCPv4 address 10.0.0.84/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 16 01:19:11.167839 ignition[714]: parsing config with SHA512: 8622fcc09593da1c933578a5de64dc853d410c836f0f056989b0f740186700aed267331ede56fa1efb1bd36f56f33761cebf641f9c883fb1a7be8e8fd7acbf8e Apr 16 01:19:11.185999 unknown[714]: fetched base config from "system" Apr 16 01:19:11.186115 unknown[714]: fetched user config from "qemu" Apr 16 01:19:11.187046 ignition[714]: fetch-offline: fetch-offline passed Apr 16 01:19:11.189107 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 16 01:19:11.187244 ignition[714]: Ignition finished successfully Apr 16 01:19:11.198352 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Apr 16 01:19:11.215972 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 16 01:19:11.288883 ignition[794]: Ignition 2.19.0 Apr 16 01:19:11.288948 ignition[794]: Stage: kargs Apr 16 01:19:11.289218 ignition[794]: no configs at "/usr/lib/ignition/base.d" Apr 16 01:19:11.289227 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 16 01:19:11.296926 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 16 01:19:11.290411 ignition[794]: kargs: kargs passed Apr 16 01:19:11.290460 ignition[794]: Ignition finished successfully Apr 16 01:19:11.334296 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Apr 16 01:19:11.379902 ignition[802]: Ignition 2.19.0 Apr 16 01:19:11.379953 ignition[802]: Stage: disks Apr 16 01:19:11.380265 ignition[802]: no configs at "/usr/lib/ignition/base.d" Apr 16 01:19:11.380273 ignition[802]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 16 01:19:11.381407 ignition[802]: disks: disks passed Apr 16 01:19:11.381441 ignition[802]: Ignition finished successfully Apr 16 01:19:11.406602 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 16 01:19:11.419181 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 16 01:19:11.423808 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 16 01:19:11.434860 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 16 01:19:11.454402 systemd[1]: Reached target sysinit.target - System Initialization. Apr 16 01:19:11.458777 systemd[1]: Reached target basic.target - Basic System. Apr 16 01:19:11.485946 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 16 01:19:11.519163 systemd-fsck[812]: ROOT: clean, 14/553520 files, 52654/553472 blocks Apr 16 01:19:11.525929 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 16 01:19:11.548939 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 16 01:19:11.743863 kernel: EXT4-fs (vda9): mounted filesystem 9ac74074-8829-477f-a4c4-5563740ec49b r/w with ordered data mode. Quota mode: none. Apr 16 01:19:11.744903 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 16 01:19:11.749464 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 16 01:19:11.774878 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 16 01:19:11.781525 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 16 01:19:11.794530 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Apr 16 01:19:11.794617 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 16 01:19:11.794635 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 16 01:19:11.805405 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 16 01:19:11.817309 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 16 01:19:11.855923 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (820) Apr 16 01:19:11.866604 kernel: BTRFS info (device vda6): first mount of filesystem 90718864-f2fc-45a7-9234-85fc9574bf9c Apr 16 01:19:11.866920 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 16 01:19:11.866933 kernel: BTRFS info (device vda6): using free space tree Apr 16 01:19:11.876615 initrd-setup-root[841]: cut: /sysroot/etc/passwd: No such file or directory Apr 16 01:19:11.887428 kernel: BTRFS info (device vda6): auto enabling async discard Apr 16 01:19:11.890416 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 16 01:19:11.900054 initrd-setup-root[851]: cut: /sysroot/etc/group: No such file or directory Apr 16 01:19:11.907602 initrd-setup-root[858]: cut: /sysroot/etc/shadow: No such file or directory Apr 16 01:19:11.922173 initrd-setup-root[865]: cut: /sysroot/etc/gshadow: No such file or directory Apr 16 01:19:12.103610 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 16 01:19:12.130874 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 16 01:19:12.142864 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 16 01:19:12.161121 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Apr 16 01:19:12.177575 kernel: BTRFS info (device vda6): last unmount of filesystem 90718864-f2fc-45a7-9234-85fc9574bf9c Apr 16 01:19:12.169039 systemd-networkd[789]: eth0: Gained IPv6LL Apr 16 01:19:12.198853 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 16 01:19:12.274121 ignition[936]: INFO : Ignition 2.19.0 Apr 16 01:19:12.274121 ignition[936]: INFO : Stage: mount Apr 16 01:19:12.289363 ignition[936]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 16 01:19:12.289363 ignition[936]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 16 01:19:12.289363 ignition[936]: INFO : mount: mount passed Apr 16 01:19:12.289363 ignition[936]: INFO : Ignition finished successfully Apr 16 01:19:12.276914 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 16 01:19:12.300164 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 16 01:19:12.756051 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 16 01:19:12.783972 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (947) Apr 16 01:19:12.784019 kernel: BTRFS info (device vda6): first mount of filesystem 90718864-f2fc-45a7-9234-85fc9574bf9c Apr 16 01:19:12.795143 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 16 01:19:12.795434 kernel: BTRFS info (device vda6): using free space tree Apr 16 01:19:12.815896 kernel: BTRFS info (device vda6): auto enabling async discard Apr 16 01:19:12.818373 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 16 01:19:12.871169 ignition[964]: INFO : Ignition 2.19.0 Apr 16 01:19:12.871169 ignition[964]: INFO : Stage: files Apr 16 01:19:12.871169 ignition[964]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 16 01:19:12.871169 ignition[964]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 16 01:19:12.891869 ignition[964]: DEBUG : files: compiled without relabeling support, skipping Apr 16 01:19:12.900016 ignition[964]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 16 01:19:12.900016 ignition[964]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 16 01:19:12.915536 ignition[964]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 16 01:19:12.915536 ignition[964]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 16 01:19:12.915536 ignition[964]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 16 01:19:12.915536 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 16 01:19:12.915536 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Apr 16 01:19:12.908286 unknown[964]: wrote ssh authorized keys file for user: core Apr 16 01:19:13.057563 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 16 01:19:13.210310 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 16 01:19:13.210310 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 16 01:19:13.230909 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 16 01:19:13.230909 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 16 01:19:13.230909 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 16 01:19:13.230909 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 16 01:19:13.230909 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 16 01:19:13.230909 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 16 01:19:13.230909 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 16 01:19:13.230909 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 16 01:19:13.230909 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 16 01:19:13.230909 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Apr 16 01:19:13.230909 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Apr 16 01:19:13.230909 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Apr 16 01:19:13.230909 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1
Apr 16 01:19:13.501606 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 16 01:19:13.923419 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Apr 16 01:19:13.923419 ignition[964]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 16 01:19:13.949884 ignition[964]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 16 01:19:13.962884 ignition[964]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 16 01:19:13.962884 ignition[964]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 16 01:19:13.962884 ignition[964]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Apr 16 01:19:13.962884 ignition[964]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 16 01:19:13.962884 ignition[964]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 16 01:19:13.962884 ignition[964]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Apr 16 01:19:13.962884 ignition[964]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Apr 16 01:19:14.032032 ignition[964]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Apr 16 01:19:14.040953 ignition[964]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Apr 16 01:19:14.049265 ignition[964]: INFO : files: op(f): [finished] setting preset to disabled
for "coreos-metadata.service" Apr 16 01:19:14.049265 ignition[964]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Apr 16 01:19:14.049265 ignition[964]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Apr 16 01:19:14.049265 ignition[964]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 16 01:19:14.049265 ignition[964]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 16 01:19:14.049265 ignition[964]: INFO : files: files passed Apr 16 01:19:14.049265 ignition[964]: INFO : Ignition finished successfully Apr 16 01:19:14.093147 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 16 01:19:14.113992 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 16 01:19:14.119409 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 16 01:19:14.144230 initrd-setup-root-after-ignition[991]: grep: /sysroot/oem/oem-release: No such file or directory Apr 16 01:19:14.151388 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 16 01:19:14.151388 initrd-setup-root-after-ignition[993]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 16 01:19:14.168586 initrd-setup-root-after-ignition[997]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 16 01:19:14.156531 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 16 01:19:14.165397 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 16 01:19:14.181805 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 16 01:19:14.204277 systemd[1]: ignition-quench.service: Deactivated successfully. 
Apr 16 01:19:14.204365 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 16 01:19:14.231147 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 16 01:19:14.236576 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 16 01:19:14.253667 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 16 01:19:14.264323 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 16 01:19:14.275242 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 16 01:19:14.292951 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 16 01:19:14.313275 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 16 01:19:14.319297 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 16 01:19:14.345417 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 16 01:19:14.357410 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 16 01:19:14.359944 systemd[1]: Stopped target timers.target - Timer Units. Apr 16 01:19:14.371147 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 16 01:19:14.371875 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 16 01:19:14.384379 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 16 01:19:14.393588 systemd[1]: Stopped target basic.target - Basic System. Apr 16 01:19:14.404565 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 16 01:19:14.414418 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 16 01:19:14.424481 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 16 01:19:14.435477 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
Apr 16 01:19:14.445497 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 16 01:19:14.459610 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 16 01:19:14.466786 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 16 01:19:14.485250 systemd[1]: Stopped target swap.target - Swaps. Apr 16 01:19:14.495407 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 16 01:19:14.495562 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 16 01:19:14.509588 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 16 01:19:14.512855 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 16 01:19:14.523834 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 16 01:19:14.524067 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 16 01:19:14.537438 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 16 01:19:14.537614 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 16 01:19:14.555906 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 16 01:19:14.556133 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 16 01:19:14.564862 systemd[1]: Stopped target paths.target - Path Units. Apr 16 01:19:14.577596 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 16 01:19:14.577876 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 16 01:19:14.587497 systemd[1]: Stopped target slices.target - Slice Units. Apr 16 01:19:14.599865 systemd[1]: Stopped target sockets.target - Socket Units. Apr 16 01:19:14.607219 systemd[1]: iscsid.socket: Deactivated successfully. Apr 16 01:19:14.607342 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. 
Apr 16 01:19:14.619022 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 16 01:19:14.619183 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 16 01:19:14.628224 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 16 01:19:14.628408 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 16 01:19:14.640921 systemd[1]: ignition-files.service: Deactivated successfully. Apr 16 01:19:14.641060 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 16 01:19:14.709156 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 16 01:19:14.716936 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 16 01:19:14.723525 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 16 01:19:14.723666 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 16 01:19:14.736239 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 16 01:19:14.755628 ignition[1018]: INFO : Ignition 2.19.0 Apr 16 01:19:14.755628 ignition[1018]: INFO : Stage: umount Apr 16 01:19:14.755628 ignition[1018]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 16 01:19:14.755628 ignition[1018]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 16 01:19:14.755628 ignition[1018]: INFO : umount: umount passed Apr 16 01:19:14.755628 ignition[1018]: INFO : Ignition finished successfully Apr 16 01:19:14.736336 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 16 01:19:14.754573 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 16 01:19:14.757793 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 16 01:19:14.757934 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 16 01:19:14.764021 systemd[1]: Stopped target network.target - Network. Apr 16 01:19:14.772661 systemd[1]: ignition-disks.service: Deactivated successfully. 
Apr 16 01:19:14.772795 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 16 01:19:14.784423 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 16 01:19:14.784485 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 16 01:19:14.794354 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 16 01:19:14.794400 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 16 01:19:14.804945 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 16 01:19:14.804980 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 16 01:19:14.809154 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 16 01:19:14.817889 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 16 01:19:14.829627 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 16 01:19:14.829864 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 16 01:19:14.842482 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 16 01:19:14.842625 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 16 01:19:14.848518 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 16 01:19:14.848661 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 16 01:19:14.848841 systemd-networkd[789]: eth0: DHCPv6 lease lost Apr 16 01:19:14.861444 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 16 01:19:14.861481 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 16 01:19:14.872134 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 16 01:19:14.872172 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 16 01:19:14.881399 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 16 01:19:14.881544 systemd[1]: Stopped systemd-networkd.service - Network Configuration. 
Apr 16 01:19:14.893469 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 16 01:19:14.893510 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 16 01:19:14.974023 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 16 01:19:14.981950 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 16 01:19:14.982018 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 16 01:19:14.990052 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 16 01:19:14.990159 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 16 01:19:15.000856 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 16 01:19:15.000891 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 16 01:19:15.012932 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 16 01:19:15.054293 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 16 01:19:15.054463 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 16 01:19:15.092556 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 16 01:19:15.092666 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 16 01:19:15.100403 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 16 01:19:15.100445 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 16 01:19:15.108795 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 16 01:19:15.108819 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 16 01:19:15.122536 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 16 01:19:15.122575 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 16 01:19:15.140823 systemd[1]: dracut-cmdline.service: Deactivated successfully. 
Apr 16 01:19:15.140859 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 16 01:19:15.152862 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 16 01:19:15.152898 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 16 01:19:15.166055 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 16 01:19:15.171231 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 16 01:19:15.171274 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 16 01:19:15.184399 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Apr 16 01:19:15.184430 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 16 01:19:15.194148 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 16 01:19:15.194179 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 16 01:19:15.207149 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 16 01:19:15.207182 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 16 01:19:15.216226 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 16 01:19:15.216352 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 16 01:19:15.226961 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 16 01:19:15.238147 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 16 01:19:15.251940 systemd[1]: Switching root. Apr 16 01:19:15.339028 systemd-journald[194]: Journal stopped Apr 16 01:19:16.818336 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). 
Apr 16 01:19:16.818390 kernel: SELinux: policy capability network_peer_controls=1 Apr 16 01:19:16.818404 kernel: SELinux: policy capability open_perms=1 Apr 16 01:19:16.818412 kernel: SELinux: policy capability extended_socket_class=1 Apr 16 01:19:16.818422 kernel: SELinux: policy capability always_check_network=0 Apr 16 01:19:16.818430 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 16 01:19:16.818438 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 16 01:19:16.818445 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 16 01:19:16.818456 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 16 01:19:16.818464 kernel: audit: type=1403 audit(1776302355.519:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 16 01:19:16.818473 systemd[1]: Successfully loaded SELinux policy in 63.745ms. Apr 16 01:19:16.818492 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.867ms. Apr 16 01:19:16.818502 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 16 01:19:16.818510 systemd[1]: Detected virtualization kvm. Apr 16 01:19:16.818518 systemd[1]: Detected architecture x86-64. Apr 16 01:19:16.818527 systemd[1]: Detected first boot. Apr 16 01:19:16.818536 systemd[1]: Initializing machine ID from VM UUID. Apr 16 01:19:16.818545 zram_generator::config[1063]: No configuration found. Apr 16 01:19:16.818555 systemd[1]: Populated /etc with preset unit settings. Apr 16 01:19:16.818563 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 16 01:19:16.818571 systemd[1]: Stopped initrd-switch-root.service - Switch Root. 
Apr 16 01:19:16.818579 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 16 01:19:16.818587 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 16 01:19:16.818596 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 16 01:19:16.818605 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 16 01:19:16.818612 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 16 01:19:16.818620 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 16 01:19:16.818629 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 16 01:19:16.818636 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 16 01:19:16.818644 systemd[1]: Created slice user.slice - User and Session Slice. Apr 16 01:19:16.818652 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 16 01:19:16.818660 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 16 01:19:16.818764 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 16 01:19:16.818776 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 16 01:19:16.818785 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 16 01:19:16.818793 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 16 01:19:16.818800 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Apr 16 01:19:16.818810 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 16 01:19:16.818817 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. 
Apr 16 01:19:16.818825 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 16 01:19:16.818833 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 16 01:19:16.818843 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 16 01:19:16.818851 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 16 01:19:16.818859 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 16 01:19:16.818867 systemd[1]: Reached target slices.target - Slice Units. Apr 16 01:19:16.818875 systemd[1]: Reached target swap.target - Swaps. Apr 16 01:19:16.818883 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 16 01:19:16.818890 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 16 01:19:16.818898 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 16 01:19:16.818907 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 16 01:19:16.818915 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 16 01:19:16.818922 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 16 01:19:16.818930 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 16 01:19:16.818938 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 16 01:19:16.818946 systemd[1]: Mounting media.mount - External Media Directory... Apr 16 01:19:16.818954 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 16 01:19:16.818962 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 16 01:19:16.818970 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 16 01:19:16.818980 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Apr 16 01:19:16.818988 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 16 01:19:16.818995 systemd[1]: Reached target machines.target - Containers. Apr 16 01:19:16.819003 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 16 01:19:16.819011 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 16 01:19:16.819018 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 16 01:19:16.819026 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 16 01:19:16.819033 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 16 01:19:16.819043 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 16 01:19:16.819050 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 16 01:19:16.819058 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 16 01:19:16.819066 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 16 01:19:16.819073 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 16 01:19:16.819081 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 16 01:19:16.819149 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 16 01:19:16.819158 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 16 01:19:16.819166 systemd[1]: Stopped systemd-fsck-usr.service. Apr 16 01:19:16.819176 kernel: fuse: init (API version 7.39) Apr 16 01:19:16.819183 kernel: ACPI: bus type drm_connector registered Apr 16 01:19:16.819190 systemd[1]: Starting systemd-journald.service - Journal Service... 
Apr 16 01:19:16.819197 kernel: loop: module loaded Apr 16 01:19:16.819205 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 16 01:19:16.819231 systemd-journald[1147]: Collecting audit messages is disabled. Apr 16 01:19:16.819249 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 16 01:19:16.819258 systemd-journald[1147]: Journal started Apr 16 01:19:16.819277 systemd-journald[1147]: Runtime Journal (/run/log/journal/46cf63f10f8a4ca4855712de89d8736b) is 6.0M, max 48.3M, 42.2M free. Apr 16 01:19:16.049601 systemd[1]: Queued start job for default target multi-user.target. Apr 16 01:19:16.072842 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Apr 16 01:19:16.073399 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 16 01:19:16.073854 systemd[1]: systemd-journald.service: Consumed 3.087s CPU time. Apr 16 01:19:16.839626 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 16 01:19:16.852592 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 16 01:19:16.865353 systemd[1]: verity-setup.service: Deactivated successfully. Apr 16 01:19:16.865402 systemd[1]: Stopped verity-setup.service. Apr 16 01:19:16.881842 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 16 01:19:16.888339 systemd[1]: Started systemd-journald.service - Journal Service. Apr 16 01:19:16.893585 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 16 01:19:16.899446 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 16 01:19:16.906216 systemd[1]: Mounted media.mount - External Media Directory. Apr 16 01:19:16.911838 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Apr 16 01:19:16.918025 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 16 01:19:16.925364 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 16 01:19:16.931069 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 16 01:19:16.939144 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 16 01:19:16.946378 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 16 01:19:16.946657 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 16 01:19:16.954812 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 16 01:19:16.955077 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 16 01:19:16.963535 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 16 01:19:16.963881 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 16 01:19:16.970053 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 16 01:19:16.970330 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 16 01:19:16.977252 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 16 01:19:16.977476 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 16 01:19:16.984053 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 16 01:19:16.984309 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 16 01:19:16.991345 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 16 01:19:16.997846 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 16 01:19:17.005080 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 16 01:19:17.012257 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Apr 16 01:19:17.029529 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 16 01:19:17.046310 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 16 01:19:17.055394 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 16 01:19:17.061441 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 16 01:19:17.061526 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 16 01:19:17.068037 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 16 01:19:17.076585 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 16 01:19:17.084861 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 16 01:19:17.090430 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 16 01:19:17.092805 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 16 01:19:17.100417 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 16 01:19:17.106656 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 16 01:19:17.108205 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 16 01:19:17.114041 systemd-journald[1147]: Time spent on flushing to /var/log/journal/46cf63f10f8a4ca4855712de89d8736b is 19.860ms for 998 entries. Apr 16 01:19:17.114041 systemd-journald[1147]: System Journal (/var/log/journal/46cf63f10f8a4ca4855712de89d8736b) is 8.0M, max 195.6M, 187.6M free. Apr 16 01:19:17.145331 systemd-journald[1147]: Received client request to flush runtime journal. 
Apr 16 01:19:17.114076 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 16 01:19:17.120855 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 16 01:19:17.139899 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 16 01:19:17.155301 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 16 01:19:17.158926 kernel: loop0: detected capacity change from 0 to 140768 Apr 16 01:19:17.167877 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 16 01:19:17.178086 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 16 01:19:17.185447 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 16 01:19:17.192905 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 16 01:19:17.200949 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 16 01:19:17.207603 systemd-tmpfiles[1181]: ACLs are not supported, ignoring. Apr 16 01:19:17.207612 systemd-tmpfiles[1181]: ACLs are not supported, ignoring. Apr 16 01:19:17.215995 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 16 01:19:17.221187 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 16 01:19:17.228470 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 16 01:19:17.234773 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 16 01:19:17.251823 kernel: loop1: detected capacity change from 0 to 217752 Apr 16 01:19:17.255067 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 16 01:19:17.270018 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... 
Apr 16 01:19:17.278442 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 16 01:19:17.284065 udevadm[1184]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Apr 16 01:19:17.296531 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 16 01:19:17.297272 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Apr 16 01:19:17.320233 kernel: loop2: detected capacity change from 0 to 142488 Apr 16 01:19:17.321394 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 16 01:19:17.338922 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 16 01:19:17.364321 systemd-tmpfiles[1204]: ACLs are not supported, ignoring. Apr 16 01:19:17.364335 systemd-tmpfiles[1204]: ACLs are not supported, ignoring. Apr 16 01:19:17.369945 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 16 01:19:17.381495 kernel: loop3: detected capacity change from 0 to 140768 Apr 16 01:19:17.408915 kernel: loop4: detected capacity change from 0 to 217752 Apr 16 01:19:17.433805 kernel: loop5: detected capacity change from 0 to 142488 Apr 16 01:19:17.463380 (sd-merge)[1207]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Apr 16 01:19:17.463862 (sd-merge)[1207]: Merged extensions into '/usr'. Apr 16 01:19:17.470471 systemd[1]: Reloading requested from client PID 1178 ('systemd-sysext') (unit systemd-sysext.service)... Apr 16 01:19:17.470531 systemd[1]: Reloading... Apr 16 01:19:17.537909 zram_generator::config[1235]: No configuration found. Apr 16 01:19:17.662955 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Apr 16 01:19:17.675957 ldconfig[1173]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 16 01:19:17.702582 systemd[1]: Reloading finished in 231 ms.
Apr 16 01:19:17.744435 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 16 01:19:17.751154 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 16 01:19:17.758190 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 16 01:19:17.791148 systemd[1]: Starting ensure-sysext.service...
Apr 16 01:19:17.797186 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 16 01:19:17.805543 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 16 01:19:17.814849 systemd[1]: Reloading requested from client PID 1272 ('systemctl') (unit ensure-sysext.service)...
Apr 16 01:19:17.814859 systemd[1]: Reloading...
Apr 16 01:19:17.825258 systemd-tmpfiles[1273]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 16 01:19:17.825593 systemd-tmpfiles[1273]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 16 01:19:17.826495 systemd-tmpfiles[1273]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 16 01:19:17.826834 systemd-tmpfiles[1273]: ACLs are not supported, ignoring.
Apr 16 01:19:17.826871 systemd-tmpfiles[1273]: ACLs are not supported, ignoring.
Apr 16 01:19:17.829354 systemd-tmpfiles[1273]: Detected autofs mount point /boot during canonicalization of boot.
Apr 16 01:19:17.829413 systemd-tmpfiles[1273]: Skipping /boot
Apr 16 01:19:17.835600 systemd-tmpfiles[1273]: Detected autofs mount point /boot during canonicalization of boot.
Apr 16 01:19:17.835822 systemd-tmpfiles[1273]: Skipping /boot
Apr 16 01:19:17.850185 systemd-udevd[1274]: Using default interface naming scheme 'v255'.
Apr 16 01:19:17.882994 zram_generator::config[1303]: No configuration found.
Apr 16 01:19:17.951933 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1314)
Apr 16 01:19:17.975807 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Apr 16 01:19:17.987806 kernel: ACPI: button: Power Button [PWRF]
Apr 16 01:19:18.013510 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 16 01:19:18.042944 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Apr 16 01:19:18.049966 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Apr 16 01:19:18.050351 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 16 01:19:18.066776 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Apr 16 01:19:18.066948 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 16 01:19:18.067022 kernel: mousedev: PS/2 mouse device common for all mice
Apr 16 01:19:18.061933 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 16 01:19:18.075002 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 16 01:19:18.075286 systemd[1]: Reloading finished in 260 ms.
Apr 16 01:19:18.089172 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 16 01:19:18.105495 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 16 01:19:18.144281 systemd[1]: Finished ensure-sysext.service.
Apr 16 01:19:18.219644 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 16 01:19:18.315165 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 16 01:19:18.323525 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 16 01:19:18.331072 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 16 01:19:18.332803 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 16 01:19:18.341893 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 16 01:19:18.350200 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 16 01:19:18.362822 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 16 01:19:18.372967 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 16 01:19:18.380545 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 16 01:19:18.389064 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 16 01:19:18.399433 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 16 01:19:18.407841 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 16 01:19:18.415592 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 16 01:19:18.423065 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 16 01:19:18.433882 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 01:19:18.441615 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 16 01:19:18.448566 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 16 01:19:18.449655 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 16 01:19:18.456272 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 16 01:19:18.456613 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 16 01:19:18.464038 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 16 01:19:18.464563 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 16 01:19:18.473590 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 16 01:19:18.475657 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 16 01:19:18.483544 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 16 01:19:18.535951 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 16 01:19:18.547569 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 16 01:19:18.550048 augenrules[1400]: No rules
Apr 16 01:19:18.563623 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 16 01:19:18.604478 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 16 01:19:18.606463 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 16 01:19:18.764451 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 16 01:19:18.771263 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 16 01:19:18.775180 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 16 01:19:18.776573 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 16 01:19:18.780236 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 16 01:19:18.791497 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 01:19:18.815266 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 16 01:19:18.823773 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 16 01:19:18.845251 lvm[1422]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 16 01:19:18.854152 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 16 01:19:18.881535 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 16 01:19:18.889660 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 16 01:19:18.903926 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 16 01:19:18.911202 lvm[1431]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 16 01:19:18.928319 systemd-networkd[1385]: lo: Link UP
Apr 16 01:19:18.928372 systemd-networkd[1385]: lo: Gained carrier
Apr 16 01:19:18.929332 systemd-networkd[1385]: Enumeration completed
Apr 16 01:19:18.929514 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 16 01:19:18.931017 systemd-networkd[1385]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 01:19:18.931072 systemd-networkd[1385]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 16 01:19:18.932768 systemd-networkd[1385]: eth0: Link UP
Apr 16 01:19:18.932817 systemd-networkd[1385]: eth0: Gained carrier
Apr 16 01:19:18.932829 systemd-networkd[1385]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 01:19:18.934579 systemd-resolved[1388]: Positive Trust Anchors:
Apr 16 01:19:18.934885 systemd-resolved[1388]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 16 01:19:18.934942 systemd-resolved[1388]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 16 01:19:18.936323 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 16 01:19:18.939816 systemd-resolved[1388]: Defaulting to hostname 'linux'.
Apr 16 01:19:18.942584 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 16 01:19:18.948357 systemd[1]: Reached target network.target - Network.
Apr 16 01:19:18.953149 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 16 01:19:18.960798 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 16 01:19:18.967538 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 16 01:19:18.973995 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 16 01:19:18.978998 systemd-networkd[1385]: eth0: DHCPv4 address 10.0.0.84/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 16 01:19:18.979643 systemd-timesyncd[1390]: Network configuration changed, trying to establish connection.
Apr 16 01:19:18.980620 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 16 01:19:18.980794 systemd-timesyncd[1390]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Apr 16 01:19:18.980821 systemd-timesyncd[1390]: Initial clock synchronization to Thu 2026-04-16 01:19:19.091109 UTC.
Apr 16 01:19:18.987324 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 16 01:19:18.987401 systemd[1]: Reached target paths.target - Path Units.
Apr 16 01:19:18.992345 systemd[1]: Reached target time-set.target - System Time Set.
Apr 16 01:19:18.997946 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 16 01:19:19.005058 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 16 01:19:19.011940 systemd[1]: Reached target timers.target - Timer Units.
Apr 16 01:19:19.018058 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 16 01:19:19.026363 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 16 01:19:19.039901 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 16 01:19:19.046865 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 16 01:19:19.053632 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 16 01:19:19.060627 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 16 01:19:19.066860 systemd[1]: Reached target sockets.target - Socket Units.
Apr 16 01:19:19.072046 systemd[1]: Reached target basic.target - Basic System.
Apr 16 01:19:19.077191 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 16 01:19:19.077216 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 16 01:19:19.093168 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 16 01:19:19.100631 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 16 01:19:19.107393 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 16 01:19:19.114082 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 16 01:19:19.119282 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 16 01:19:19.120573 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 16 01:19:19.126935 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 16 01:19:19.135340 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 16 01:19:19.140824 extend-filesystems[1441]: Found loop3
Apr 16 01:19:19.146246 extend-filesystems[1441]: Found loop4
Apr 16 01:19:19.146246 extend-filesystems[1441]: Found loop5
Apr 16 01:19:19.146246 extend-filesystems[1441]: Found sr0
Apr 16 01:19:19.146246 extend-filesystems[1441]: Found vda
Apr 16 01:19:19.146246 extend-filesystems[1441]: Found vda1
Apr 16 01:19:19.146246 extend-filesystems[1441]: Found vda2
Apr 16 01:19:19.146246 extend-filesystems[1441]: Found vda3
Apr 16 01:19:19.146246 extend-filesystems[1441]: Found usr
Apr 16 01:19:19.146246 extend-filesystems[1441]: Found vda4
Apr 16 01:19:19.146246 extend-filesystems[1441]: Found vda6
Apr 16 01:19:19.146246 extend-filesystems[1441]: Found vda7
Apr 16 01:19:19.146246 extend-filesystems[1441]: Found vda9
Apr 16 01:19:19.146246 extend-filesystems[1441]: Checking size of /dev/vda9
Apr 16 01:19:19.242587 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1322)
Apr 16 01:19:19.261270 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Apr 16 01:19:19.261298 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Apr 16 01:19:19.141943 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 16 01:19:19.262504 extend-filesystems[1441]: Resized partition /dev/vda9
Apr 16 01:19:19.163903 dbus-daemon[1439]: [system] SELinux support is enabled
Apr 16 01:19:19.270297 jq[1440]: false
Apr 16 01:19:19.164966 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 16 01:19:19.270394 extend-filesystems[1458]: resize2fs 1.47.1 (20-May-2024)
Apr 16 01:19:19.270394 extend-filesystems[1458]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Apr 16 01:19:19.270394 extend-filesystems[1458]: old_desc_blocks = 1, new_desc_blocks = 1
Apr 16 01:19:19.270394 extend-filesystems[1458]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Apr 16 01:19:19.180243 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 16 01:19:19.311415 extend-filesystems[1441]: Resized filesystem in /dev/vda9
Apr 16 01:19:19.206066 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 16 01:19:19.208453 systemd[1]: Starting update-engine.service - Update Engine...
Apr 16 01:19:19.231390 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 16 01:19:19.329175 jq[1463]: true
Apr 16 01:19:19.244229 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 16 01:19:19.329890 update_engine[1462]: I20260416 01:19:19.327832 1462 main.cc:92] Flatcar Update Engine starting
Apr 16 01:19:19.253011 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 16 01:19:19.330103 jq[1467]: true
Apr 16 01:19:19.253206 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 16 01:19:19.253385 systemd[1]: motdgen.service: Deactivated successfully.
Apr 16 01:19:19.253663 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 16 01:19:19.261479 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 16 01:19:19.262503 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 16 01:19:19.267640 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 16 01:19:19.267971 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 16 01:19:19.296995 systemd-logind[1454]: Watching system buttons on /dev/input/event1 (Power Button)
Apr 16 01:19:19.297007 systemd-logind[1454]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 16 01:19:19.297381 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 16 01:19:19.297400 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 16 01:19:19.300478 systemd-logind[1454]: New seat seat0.
Apr 16 01:19:19.305221 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 16 01:19:19.305236 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 16 01:19:19.318004 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 16 01:19:19.336184 tar[1465]: linux-amd64/LICENSE
Apr 16 01:19:19.336184 tar[1465]: linux-amd64/helm
Apr 16 01:19:19.337677 update_engine[1462]: I20260416 01:19:19.337517 1462 update_check_scheduler.cc:74] Next update check in 3m28s
Apr 16 01:19:19.337859 systemd[1]: Started update-engine.service - Update Engine.
Apr 16 01:19:19.345231 (ntainerd)[1479]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 16 01:19:19.349655 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 16 01:19:19.379379 sshd_keygen[1460]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 16 01:19:19.397990 bash[1494]: Updated "/home/core/.ssh/authorized_keys"
Apr 16 01:19:19.399607 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 16 01:19:19.409063 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Apr 16 01:19:19.413437 locksmithd[1484]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 16 01:19:19.429002 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 16 01:19:19.450162 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 16 01:19:19.468078 systemd[1]: issuegen.service: Deactivated successfully.
Apr 16 01:19:19.468209 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 16 01:19:19.481063 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 16 01:19:19.509418 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 16 01:19:19.526338 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 16 01:19:19.536359 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 16 01:19:19.543344 systemd[1]: Reached target getty.target - Login Prompts.
Apr 16 01:19:19.605488 containerd[1479]: time="2026-04-16T01:19:19.605285541Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 16 01:19:19.628853 containerd[1479]: time="2026-04-16T01:19:19.628574579Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 16 01:19:19.632262 containerd[1479]: time="2026-04-16T01:19:19.632078684Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 16 01:19:19.632262 containerd[1479]: time="2026-04-16T01:19:19.632156360Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 16 01:19:19.632262 containerd[1479]: time="2026-04-16T01:19:19.632169204Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 16 01:19:19.632366 containerd[1479]: time="2026-04-16T01:19:19.632286015Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 16 01:19:19.632366 containerd[1479]: time="2026-04-16T01:19:19.632296910Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 16 01:19:19.632366 containerd[1479]: time="2026-04-16T01:19:19.632339645Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 16 01:19:19.632366 containerd[1479]: time="2026-04-16T01:19:19.632354051Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 16 01:19:19.632620 containerd[1479]: time="2026-04-16T01:19:19.632472699Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 16 01:19:19.632620 containerd[1479]: time="2026-04-16T01:19:19.632551036Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 16 01:19:19.632620 containerd[1479]: time="2026-04-16T01:19:19.632562022Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 16 01:19:19.632620 containerd[1479]: time="2026-04-16T01:19:19.632569989Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 16 01:19:19.632620 containerd[1479]: time="2026-04-16T01:19:19.632620903Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 16 01:19:19.633074 containerd[1479]: time="2026-04-16T01:19:19.633027762Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 16 01:19:19.633230 containerd[1479]: time="2026-04-16T01:19:19.633173732Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 16 01:19:19.633230 containerd[1479]: time="2026-04-16T01:19:19.633185626Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 16 01:19:19.633260 containerd[1479]: time="2026-04-16T01:19:19.633244447Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 16 01:19:19.633300 containerd[1479]: time="2026-04-16T01:19:19.633276372Z" level=info msg="metadata content store policy set" policy=shared
Apr 16 01:19:19.640235 containerd[1479]: time="2026-04-16T01:19:19.639942848Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 16 01:19:19.640235 containerd[1479]: time="2026-04-16T01:19:19.640066144Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 16 01:19:19.640235 containerd[1479]: time="2026-04-16T01:19:19.640078898Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 16 01:19:19.640235 containerd[1479]: time="2026-04-16T01:19:19.640089917Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 16 01:19:19.640235 containerd[1479]: time="2026-04-16T01:19:19.640105174Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 16 01:19:19.640235 containerd[1479]: time="2026-04-16T01:19:19.640263572Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 16 01:19:19.640656 containerd[1479]: time="2026-04-16T01:19:19.640487096Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 16 01:19:19.640656 containerd[1479]: time="2026-04-16T01:19:19.640556861Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 16 01:19:19.640656 containerd[1479]: time="2026-04-16T01:19:19.640566813Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 16 01:19:19.640656 containerd[1479]: time="2026-04-16T01:19:19.640576094Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 16 01:19:19.640656 containerd[1479]: time="2026-04-16T01:19:19.640585576Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 16 01:19:19.640656 containerd[1479]: time="2026-04-16T01:19:19.640595565Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 16 01:19:19.640656 containerd[1479]: time="2026-04-16T01:19:19.640605084Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 16 01:19:19.640656 containerd[1479]: time="2026-04-16T01:19:19.640614869Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 16 01:19:19.640656 containerd[1479]: time="2026-04-16T01:19:19.640625054Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 16 01:19:19.640656 containerd[1479]: time="2026-04-16T01:19:19.640634383Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 16 01:19:19.640656 containerd[1479]: time="2026-04-16T01:19:19.640643133Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 16 01:19:19.640656 containerd[1479]: time="2026-04-16T01:19:19.640651718Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 16 01:19:19.640969 containerd[1479]: time="2026-04-16T01:19:19.640667826Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 16 01:19:19.640969 containerd[1479]: time="2026-04-16T01:19:19.640677148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 16 01:19:19.640969 containerd[1479]: time="2026-04-16T01:19:19.640883859Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 16 01:19:19.640969 containerd[1479]: time="2026-04-16T01:19:19.640895622Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 16 01:19:19.640969 containerd[1479]: time="2026-04-16T01:19:19.640904519Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 16 01:19:19.640969 containerd[1479]: time="2026-04-16T01:19:19.640914143Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 16 01:19:19.640969 containerd[1479]: time="2026-04-16T01:19:19.640931829Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 16 01:19:19.640969 containerd[1479]: time="2026-04-16T01:19:19.640942295Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 16 01:19:19.640969 containerd[1479]: time="2026-04-16T01:19:19.640951115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 16 01:19:19.640969 containerd[1479]: time="2026-04-16T01:19:19.640965133Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 16 01:19:19.641098 containerd[1479]: time="2026-04-16T01:19:19.640975103Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 16 01:19:19.641098 containerd[1479]: time="2026-04-16T01:19:19.640983240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 16 01:19:19.641098 containerd[1479]: time="2026-04-16T01:19:19.640992298Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 16 01:19:19.641098 containerd[1479]: time="2026-04-16T01:19:19.641002552Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 16 01:19:19.641098 containerd[1479]: time="2026-04-16T01:19:19.641017347Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 16 01:19:19.641098 containerd[1479]: time="2026-04-16T01:19:19.641026464Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 16 01:19:19.641098 containerd[1479]: time="2026-04-16T01:19:19.641041231Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 16 01:19:19.641183 containerd[1479]: time="2026-04-16T01:19:19.641126031Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 16 01:19:19.641183 containerd[1479]: time="2026-04-16T01:19:19.641139603Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 16 01:19:19.641183 containerd[1479]: time="2026-04-16T01:19:19.641147669Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 16 01:19:19.641183 containerd[1479]: time="2026-04-16T01:19:19.641156676Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 16 01:19:19.641183 containerd[1479]: time="2026-04-16T01:19:19.641163757Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 16 01:19:19.641183 containerd[1479]: time="2026-04-16T01:19:19.641171848Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 16 01:19:19.641183 containerd[1479]: time="2026-04-16T01:19:19.641178759Z" level=info msg="NRI interface is disabled by configuration."
Apr 16 01:19:19.641272 containerd[1479]: time="2026-04-16T01:19:19.641185917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 16 01:19:19.642274 containerd[1479]: time="2026-04-16T01:19:19.641391907Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 16 01:19:19.642274 containerd[1479]: time="2026-04-16T01:19:19.641435748Z" level=info msg="Connect containerd service" Apr 16 01:19:19.642274 containerd[1479]: time="2026-04-16T01:19:19.641460437Z" level=info msg="using legacy CRI server" Apr 16 01:19:19.642274 containerd[1479]: time="2026-04-16T01:19:19.641465043Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 16 01:19:19.642274 containerd[1479]: time="2026-04-16T01:19:19.641534491Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 16 01:19:19.645334 containerd[1479]: time="2026-04-16T01:19:19.643070735Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 16 01:19:19.645334 containerd[1479]: time="2026-04-16T01:19:19.643401750Z" level=info msg="Start subscribing containerd event" Apr 16 01:19:19.645334 containerd[1479]: time="2026-04-16T01:19:19.643505843Z" level=info msg="Start recovering state" Apr 16 01:19:19.645810 containerd[1479]: time="2026-04-16T01:19:19.645453905Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Apr 16 01:19:19.645810 containerd[1479]: time="2026-04-16T01:19:19.645499094Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 16 01:19:19.646189 containerd[1479]: time="2026-04-16T01:19:19.645861153Z" level=info msg="Start event monitor" Apr 16 01:19:19.646189 containerd[1479]: time="2026-04-16T01:19:19.645954905Z" level=info msg="Start snapshots syncer" Apr 16 01:19:19.646189 containerd[1479]: time="2026-04-16T01:19:19.645966927Z" level=info msg="Start cni network conf syncer for default" Apr 16 01:19:19.646189 containerd[1479]: time="2026-04-16T01:19:19.645974500Z" level=info msg="Start streaming server" Apr 16 01:19:19.646189 containerd[1479]: time="2026-04-16T01:19:19.646027603Z" level=info msg="containerd successfully booted in 0.041860s" Apr 16 01:19:19.656228 systemd[1]: Started containerd.service - containerd container runtime. Apr 16 01:19:19.664869 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 16 01:19:19.682053 systemd[1]: Started sshd@0-10.0.0.84:22-10.0.0.1:59332.service - OpenSSH per-connection server daemon (10.0.0.1:59332). Apr 16 01:19:19.734669 sshd[1527]: Accepted publickey for core from 10.0.0.1 port 59332 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:19:19.737194 sshd[1527]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:19:19.748072 systemd-logind[1454]: New session 1 of user core. Apr 16 01:19:19.749235 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 16 01:19:19.770215 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 16 01:19:19.785335 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 16 01:19:19.804230 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Apr 16 01:19:19.812478 (systemd)[1531]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 16 01:19:19.861021 tar[1465]: linux-amd64/README.md Apr 16 01:19:19.878155 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 16 01:19:19.911231 systemd[1531]: Queued start job for default target default.target. Apr 16 01:19:19.920906 systemd[1531]: Created slice app.slice - User Application Slice. Apr 16 01:19:19.920983 systemd[1531]: Reached target paths.target - Paths. Apr 16 01:19:19.920993 systemd[1531]: Reached target timers.target - Timers. Apr 16 01:19:19.924131 systemd[1531]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 16 01:19:19.939809 systemd[1531]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 16 01:19:19.940003 systemd[1531]: Reached target sockets.target - Sockets. Apr 16 01:19:19.940069 systemd[1531]: Reached target basic.target - Basic System. Apr 16 01:19:19.940097 systemd[1531]: Reached target default.target - Main User Target. Apr 16 01:19:19.940117 systemd[1531]: Startup finished in 119ms. Apr 16 01:19:19.940327 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 16 01:19:19.947927 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 16 01:19:20.023297 systemd[1]: Started sshd@1-10.0.0.84:22-10.0.0.1:53578.service - OpenSSH per-connection server daemon (10.0.0.1:53578). Apr 16 01:19:20.097196 sshd[1545]: Accepted publickey for core from 10.0.0.1 port 53578 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:19:20.099443 sshd[1545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:19:20.105387 systemd-logind[1454]: New session 2 of user core. Apr 16 01:19:20.116813 systemd[1]: Started session-2.scope - Session 2 of User core. 
Apr 16 01:19:20.184145 sshd[1545]: pam_unix(sshd:session): session closed for user core Apr 16 01:19:20.206272 systemd[1]: sshd@1-10.0.0.84:22-10.0.0.1:53578.service: Deactivated successfully. Apr 16 01:19:20.207660 systemd[1]: session-2.scope: Deactivated successfully. Apr 16 01:19:20.211318 systemd-logind[1454]: Session 2 logged out. Waiting for processes to exit. Apr 16 01:19:20.221077 systemd[1]: Started sshd@2-10.0.0.84:22-10.0.0.1:53588.service - OpenSSH per-connection server daemon (10.0.0.1:53588). Apr 16 01:19:20.229293 systemd-logind[1454]: Removed session 2. Apr 16 01:19:20.262620 sshd[1552]: Accepted publickey for core from 10.0.0.1 port 53588 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:19:20.264431 sshd[1552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:19:20.269867 systemd-logind[1454]: New session 3 of user core. Apr 16 01:19:20.280005 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 16 01:19:20.352348 sshd[1552]: pam_unix(sshd:session): session closed for user core Apr 16 01:19:20.355606 systemd[1]: sshd@2-10.0.0.84:22-10.0.0.1:53588.service: Deactivated successfully. Apr 16 01:19:20.357372 systemd[1]: session-3.scope: Deactivated successfully. Apr 16 01:19:20.359221 systemd-logind[1454]: Session 3 logged out. Waiting for processes to exit. Apr 16 01:19:20.360952 systemd-logind[1454]: Removed session 3. Apr 16 01:19:20.872294 systemd-networkd[1385]: eth0: Gained IPv6LL Apr 16 01:19:20.876156 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 16 01:19:20.883605 systemd[1]: Reached target network-online.target - Network is Online. Apr 16 01:19:20.900160 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 16 01:19:20.907917 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 01:19:20.915048 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Apr 16 01:19:20.947407 systemd[1]: coreos-metadata.service: Deactivated successfully. Apr 16 01:19:20.947612 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Apr 16 01:19:20.954529 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 16 01:19:20.958355 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 16 01:19:21.933404 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 01:19:21.940278 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 16 01:19:21.946102 systemd[1]: Startup finished in 13.608s (kernel) + 11.940s (initrd) + 6.488s (userspace) = 32.038s. Apr 16 01:19:21.951514 (kubelet)[1580]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 01:19:22.621415 kubelet[1580]: E0416 01:19:22.621150 1580 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 01:19:22.624540 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 01:19:22.625012 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 01:19:22.625502 systemd[1]: kubelet.service: Consumed 1.084s CPU time. Apr 16 01:19:30.436597 systemd[1]: Started sshd@3-10.0.0.84:22-10.0.0.1:50644.service - OpenSSH per-connection server daemon (10.0.0.1:50644). 
Apr 16 01:19:30.482558 sshd[1593]: Accepted publickey for core from 10.0.0.1 port 50644 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:19:30.484399 sshd[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:19:30.490655 systemd-logind[1454]: New session 4 of user core. Apr 16 01:19:30.499956 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 16 01:19:30.569435 sshd[1593]: pam_unix(sshd:session): session closed for user core Apr 16 01:19:30.576121 systemd[1]: sshd@3-10.0.0.84:22-10.0.0.1:50644.service: Deactivated successfully. Apr 16 01:19:30.577966 systemd[1]: session-4.scope: Deactivated successfully. Apr 16 01:19:30.579974 systemd-logind[1454]: Session 4 logged out. Waiting for processes to exit. Apr 16 01:19:30.593144 systemd[1]: Started sshd@4-10.0.0.84:22-10.0.0.1:50650.service - OpenSSH per-connection server daemon (10.0.0.1:50650). Apr 16 01:19:30.594590 systemd-logind[1454]: Removed session 4. Apr 16 01:19:30.638026 sshd[1600]: Accepted publickey for core from 10.0.0.1 port 50650 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:19:30.639861 sshd[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:19:30.648232 systemd-logind[1454]: New session 5 of user core. Apr 16 01:19:30.663457 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 16 01:19:30.718120 sshd[1600]: pam_unix(sshd:session): session closed for user core Apr 16 01:19:30.726101 systemd[1]: sshd@4-10.0.0.84:22-10.0.0.1:50650.service: Deactivated successfully. Apr 16 01:19:30.727659 systemd[1]: session-5.scope: Deactivated successfully. Apr 16 01:19:30.729616 systemd-logind[1454]: Session 5 logged out. Waiting for processes to exit. Apr 16 01:19:30.731516 systemd[1]: Started sshd@5-10.0.0.84:22-10.0.0.1:50656.service - OpenSSH per-connection server daemon (10.0.0.1:50656). Apr 16 01:19:30.734351 systemd-logind[1454]: Removed session 5. 
Apr 16 01:19:30.802657 sshd[1607]: Accepted publickey for core from 10.0.0.1 port 50656 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:19:30.804350 sshd[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:19:30.814500 systemd-logind[1454]: New session 6 of user core. Apr 16 01:19:30.824059 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 16 01:19:30.889008 sshd[1607]: pam_unix(sshd:session): session closed for user core Apr 16 01:19:30.899473 systemd[1]: sshd@5-10.0.0.84:22-10.0.0.1:50656.service: Deactivated successfully. Apr 16 01:19:30.901264 systemd[1]: session-6.scope: Deactivated successfully. Apr 16 01:19:30.903604 systemd-logind[1454]: Session 6 logged out. Waiting for processes to exit. Apr 16 01:19:30.913134 systemd[1]: Started sshd@6-10.0.0.84:22-10.0.0.1:50662.service - OpenSSH per-connection server daemon (10.0.0.1:50662). Apr 16 01:19:30.914090 systemd-logind[1454]: Removed session 6. Apr 16 01:19:30.956144 sshd[1614]: Accepted publickey for core from 10.0.0.1 port 50662 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:19:30.957874 sshd[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:19:30.964120 systemd-logind[1454]: New session 7 of user core. Apr 16 01:19:30.973988 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 16 01:19:31.055127 sudo[1617]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 16 01:19:31.055412 sudo[1617]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 16 01:19:31.076368 sudo[1617]: pam_unix(sudo:session): session closed for user root Apr 16 01:19:31.080812 sshd[1614]: pam_unix(sshd:session): session closed for user core Apr 16 01:19:31.090511 systemd[1]: sshd@6-10.0.0.84:22-10.0.0.1:50662.service: Deactivated successfully. Apr 16 01:19:31.092133 systemd[1]: session-7.scope: Deactivated successfully. 
Apr 16 01:19:31.093860 systemd-logind[1454]: Session 7 logged out. Waiting for processes to exit. Apr 16 01:19:31.103490 systemd[1]: Started sshd@7-10.0.0.84:22-10.0.0.1:50676.service - OpenSSH per-connection server daemon (10.0.0.1:50676). Apr 16 01:19:31.105307 systemd-logind[1454]: Removed session 7. Apr 16 01:19:31.153308 sshd[1622]: Accepted publickey for core from 10.0.0.1 port 50676 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:19:31.154667 sshd[1622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:19:31.162432 systemd-logind[1454]: New session 8 of user core. Apr 16 01:19:31.176000 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 16 01:19:31.238144 sudo[1626]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 16 01:19:31.238512 sudo[1626]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 16 01:19:31.245475 sudo[1626]: pam_unix(sudo:session): session closed for user root Apr 16 01:19:31.252929 sudo[1625]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 16 01:19:31.253190 sudo[1625]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 16 01:19:31.274134 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 16 01:19:31.278089 auditctl[1629]: No rules Apr 16 01:19:31.278352 systemd[1]: audit-rules.service: Deactivated successfully. Apr 16 01:19:31.278791 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 16 01:19:31.281786 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 16 01:19:31.347884 augenrules[1647]: No rules Apr 16 01:19:31.350411 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Apr 16 01:19:31.352501 sudo[1625]: pam_unix(sudo:session): session closed for user root Apr 16 01:19:31.354818 sshd[1622]: pam_unix(sshd:session): session closed for user core Apr 16 01:19:31.366076 systemd[1]: sshd@7-10.0.0.84:22-10.0.0.1:50676.service: Deactivated successfully. Apr 16 01:19:31.367598 systemd[1]: session-8.scope: Deactivated successfully. Apr 16 01:19:31.370312 systemd-logind[1454]: Session 8 logged out. Waiting for processes to exit. Apr 16 01:19:31.384088 systemd[1]: Started sshd@8-10.0.0.84:22-10.0.0.1:50680.service - OpenSSH per-connection server daemon (10.0.0.1:50680). Apr 16 01:19:31.385607 systemd-logind[1454]: Removed session 8. Apr 16 01:19:31.423475 sshd[1655]: Accepted publickey for core from 10.0.0.1 port 50680 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:19:31.425349 sshd[1655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:19:31.431639 systemd-logind[1454]: New session 9 of user core. Apr 16 01:19:31.443091 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 16 01:19:31.503493 sudo[1658]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 16 01:19:31.503895 sudo[1658]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 16 01:19:31.886353 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 16 01:19:31.886395 (dockerd)[1676]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 16 01:19:32.263855 dockerd[1676]: time="2026-04-16T01:19:32.263421624Z" level=info msg="Starting up" Apr 16 01:19:32.431160 dockerd[1676]: time="2026-04-16T01:19:32.430932743Z" level=info msg="Loading containers: start." Apr 16 01:19:32.695916 kernel: Initializing XFRM netlink socket Apr 16 01:19:32.744473 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Apr 16 01:19:32.754249 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 01:19:32.887523 systemd-networkd[1385]: docker0: Link UP Apr 16 01:19:32.938137 kernel: hrtimer: interrupt took 3053114 ns Apr 16 01:19:32.999636 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 01:19:33.003359 (kubelet)[1787]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 01:19:33.008451 dockerd[1676]: time="2026-04-16T01:19:33.008349485Z" level=info msg="Loading containers: done." Apr 16 01:19:33.094621 dockerd[1676]: time="2026-04-16T01:19:33.093570641Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 16 01:19:33.094621 dockerd[1676]: time="2026-04-16T01:19:33.094326914Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 16 01:19:33.094621 dockerd[1676]: time="2026-04-16T01:19:33.094568955Z" level=info msg="Daemon has completed initialization" Apr 16 01:19:33.285897 kubelet[1787]: E0416 01:19:33.260009 1787 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 01:19:33.286183 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 01:19:33.286301 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 16 01:19:33.341353 dockerd[1676]: time="2026-04-16T01:19:33.340474597Z" level=info msg="API listen on /run/docker.sock" Apr 16 01:19:33.342456 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 16 01:19:36.235219 containerd[1479]: time="2026-04-16T01:19:36.233254944Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\"" Apr 16 01:19:37.030184 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1482167533.mount: Deactivated successfully. Apr 16 01:19:38.806068 containerd[1479]: time="2026-04-16T01:19:38.805907072Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:19:38.807435 containerd[1479]: time="2026-04-16T01:19:38.807335168Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.4: active requests=0, bytes read=27578861" Apr 16 01:19:38.808893 containerd[1479]: time="2026-04-16T01:19:38.808601685Z" level=info msg="ImageCreate event name:\"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:19:38.812425 containerd[1479]: time="2026-04-16T01:19:38.812343449Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:19:38.812928 containerd[1479]: time="2026-04-16T01:19:38.812899438Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.4\" with image id \"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\", size \"27576022\" in 2.579386217s" Apr 16 01:19:38.813234 containerd[1479]: time="2026-04-16T01:19:38.813019697Z" level=info 
msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\" returns image reference \"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\"" Apr 16 01:19:38.814488 containerd[1479]: time="2026-04-16T01:19:38.814367427Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\"" Apr 16 01:19:40.263031 containerd[1479]: time="2026-04-16T01:19:40.262633881Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:19:40.264119 containerd[1479]: time="2026-04-16T01:19:40.264004939Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.4: active requests=0, bytes read=21451591" Apr 16 01:19:40.265637 containerd[1479]: time="2026-04-16T01:19:40.265481275Z" level=info msg="ImageCreate event name:\"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:19:40.269090 containerd[1479]: time="2026-04-16T01:19:40.269052616Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:19:40.270558 containerd[1479]: time="2026-04-16T01:19:40.270143564Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.4\" with image id \"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\", size \"23018006\" in 1.455680686s" Apr 16 01:19:40.270558 containerd[1479]: time="2026-04-16T01:19:40.270227843Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\" returns image reference 
\"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\"" Apr 16 01:19:40.271429 containerd[1479]: time="2026-04-16T01:19:40.271139950Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\"" Apr 16 01:19:41.379785 containerd[1479]: time="2026-04-16T01:19:41.379277270Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:19:41.381290 containerd[1479]: time="2026-04-16T01:19:41.381211978Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.4: active requests=0, bytes read=15555222" Apr 16 01:19:41.382908 containerd[1479]: time="2026-04-16T01:19:41.382615077Z" level=info msg="ImageCreate event name:\"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:19:41.385513 containerd[1479]: time="2026-04-16T01:19:41.385307096Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:19:41.389036 containerd[1479]: time="2026-04-16T01:19:41.388866394Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.4\" with image id \"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\", size \"17121655\" in 1.117536059s" Apr 16 01:19:41.389036 containerd[1479]: time="2026-04-16T01:19:41.388953520Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\" returns image reference \"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\"" Apr 16 01:19:41.390930 containerd[1479]: time="2026-04-16T01:19:41.390852300Z" level=info msg="PullImage 
\"registry.k8s.io/kube-proxy:v1.35.4\"" Apr 16 01:19:42.477241 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1165542824.mount: Deactivated successfully. Apr 16 01:19:43.055315 containerd[1479]: time="2026-04-16T01:19:43.054800739Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:19:43.055315 containerd[1479]: time="2026-04-16T01:19:43.055220568Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.4: active requests=0, bytes read=25699819" Apr 16 01:19:43.057263 containerd[1479]: time="2026-04-16T01:19:43.057016488Z" level=info msg="ImageCreate event name:\"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:19:43.060995 containerd[1479]: time="2026-04-16T01:19:43.060864882Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:19:43.062084 containerd[1479]: time="2026-04-16T01:19:43.061986029Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.4\" with image id \"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\", repo tag \"registry.k8s.io/kube-proxy:v1.35.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\", size \"25698944\" in 1.670696695s" Apr 16 01:19:43.062084 containerd[1479]: time="2026-04-16T01:19:43.062068781Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\" returns image reference \"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\"" Apr 16 01:19:43.063610 containerd[1479]: time="2026-04-16T01:19:43.063244929Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\"" Apr 16 01:19:43.342444 systemd[1]: kubelet.service: 
Scheduled restart job, restart counter is at 2. Apr 16 01:19:43.373566 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 01:19:43.620021 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3749059174.mount: Deactivated successfully. Apr 16 01:19:43.627466 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 01:19:43.634540 (kubelet)[1928]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 01:19:43.750231 kubelet[1928]: E0416 01:19:43.750110 1928 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 01:19:43.759184 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 01:19:43.759361 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 16 01:19:45.413067 containerd[1479]: time="2026-04-16T01:19:45.412476262Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:19:45.415051 containerd[1479]: time="2026-04-16T01:19:45.414811605Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23555980" Apr 16 01:19:45.418110 containerd[1479]: time="2026-04-16T01:19:45.417464936Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:19:45.422920 containerd[1479]: time="2026-04-16T01:19:45.422553782Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:19:45.423648 containerd[1479]: time="2026-04-16T01:19:45.423501955Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 2.360003661s" Apr 16 01:19:45.423648 containerd[1479]: time="2026-04-16T01:19:45.423587646Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\"" Apr 16 01:19:45.425399 containerd[1479]: time="2026-04-16T01:19:45.425308126Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Apr 16 01:19:45.868016 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1543560794.mount: Deactivated successfully. 
Apr 16 01:19:45.882613 containerd[1479]: time="2026-04-16T01:19:45.882311119Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:19:45.884371 containerd[1479]: time="2026-04-16T01:19:45.884193366Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321150" Apr 16 01:19:45.885824 containerd[1479]: time="2026-04-16T01:19:45.885495662Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:19:45.888915 containerd[1479]: time="2026-04-16T01:19:45.888776320Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:19:45.889664 containerd[1479]: time="2026-04-16T01:19:45.889574283Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 464.174971ms" Apr 16 01:19:45.889664 containerd[1479]: time="2026-04-16T01:19:45.889649784Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Apr 16 01:19:45.891140 containerd[1479]: time="2026-04-16T01:19:45.891048407Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\"" Apr 16 01:19:46.390980 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1038306816.mount: Deactivated successfully. 
Apr 16 01:19:48.087284 containerd[1479]: time="2026-04-16T01:19:48.086974943Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:19:48.088654 containerd[1479]: time="2026-04-16T01:19:48.088512806Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23643979" Apr 16 01:19:48.089850 containerd[1479]: time="2026-04-16T01:19:48.089590547Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:19:48.096008 containerd[1479]: time="2026-04-16T01:19:48.095606401Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:19:48.096605 containerd[1479]: time="2026-04-16T01:19:48.096456312Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 2.205323403s" Apr 16 01:19:48.096605 containerd[1479]: time="2026-04-16T01:19:48.096483530Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\"" Apr 16 01:19:49.684343 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 01:19:49.698289 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 01:19:49.749351 systemd[1]: Reloading requested from client PID 2086 ('systemctl') (unit session-9.scope)... Apr 16 01:19:49.749521 systemd[1]: Reloading... 
Apr 16 01:19:49.896185 zram_generator::config[2122]: No configuration found. Apr 16 01:19:50.073455 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 16 01:19:50.175583 systemd[1]: Reloading finished in 424 ms. Apr 16 01:19:50.303510 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 01:19:50.305886 systemd[1]: kubelet.service: Deactivated successfully. Apr 16 01:19:50.306276 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 01:19:50.309627 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 01:19:50.560502 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 01:19:50.576152 (kubelet)[2175]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 16 01:19:50.700522 kubelet[2175]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 16 01:19:50.858321 kubelet[2175]: I0416 01:19:50.858029 2175 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Apr 16 01:19:50.858321 kubelet[2175]: I0416 01:19:50.858150 2175 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 16 01:19:50.858321 kubelet[2175]: I0416 01:19:50.858170 2175 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 16 01:19:50.858321 kubelet[2175]: I0416 01:19:50.858176 2175 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 16 01:19:50.858886 kubelet[2175]: I0416 01:19:50.858444 2175 server.go:951] "Client rotation is on, will bootstrap in background" Apr 16 01:19:50.995133 kubelet[2175]: E0416 01:19:50.994902 2175 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.84:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.84:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 16 01:19:50.997120 kubelet[2175]: I0416 01:19:50.996982 2175 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 16 01:19:51.008152 kubelet[2175]: E0416 01:19:51.008062 2175 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 16 01:19:51.008259 kubelet[2175]: I0416 01:19:51.008172 2175 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 16 01:19:51.021479 kubelet[2175]: I0416 01:19:51.020864 2175 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 16 01:19:51.024838 kubelet[2175]: I0416 01:19:51.024207 2175 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 16 01:19:51.025131 kubelet[2175]: I0416 01:19:51.024664 2175 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 16 01:19:51.025131 kubelet[2175]: I0416 01:19:51.024958 2175 topology_manager.go:143] "Creating topology manager with none policy" Apr 16 01:19:51.025131 
kubelet[2175]: I0416 01:19:51.024965 2175 container_manager_linux.go:308] "Creating device plugin manager" Apr 16 01:19:51.025131 kubelet[2175]: I0416 01:19:51.025053 2175 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Apr 16 01:19:51.030268 kubelet[2175]: I0416 01:19:51.030091 2175 state_mem.go:41] "Initialized" logger="CPUManager state memory" Apr 16 01:19:51.031179 kubelet[2175]: I0416 01:19:51.030994 2175 kubelet.go:482] "Attempting to sync node with API server" Apr 16 01:19:51.031179 kubelet[2175]: I0416 01:19:51.031069 2175 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 16 01:19:51.031179 kubelet[2175]: I0416 01:19:51.031089 2175 kubelet.go:394] "Adding apiserver pod source" Apr 16 01:19:51.031179 kubelet[2175]: I0416 01:19:51.031097 2175 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 16 01:19:51.040166 kubelet[2175]: I0416 01:19:51.039384 2175 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 16 01:19:51.051272 kubelet[2175]: I0416 01:19:51.048230 2175 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 16 01:19:51.051272 kubelet[2175]: I0416 01:19:51.048265 2175 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 16 01:19:51.051272 kubelet[2175]: W0416 01:19:51.048378 2175 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Apr 16 01:19:51.061077 kubelet[2175]: I0416 01:19:51.060976 2175 server.go:1257] "Started kubelet" Apr 16 01:19:51.062970 kubelet[2175]: I0416 01:19:51.062907 2175 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Apr 16 01:19:51.065360 kubelet[2175]: I0416 01:19:51.064628 2175 server.go:317] "Adding debug handlers to kubelet server" Apr 16 01:19:51.071392 kubelet[2175]: I0416 01:19:51.069897 2175 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 16 01:19:51.071392 kubelet[2175]: I0416 01:19:51.069961 2175 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 16 01:19:51.071392 kubelet[2175]: I0416 01:19:51.070448 2175 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 16 01:19:51.074141 kubelet[2175]: I0416 01:19:51.073425 2175 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Apr 16 01:19:51.074378 kubelet[2175]: I0416 01:19:51.074360 2175 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 16 01:19:51.079237 kubelet[2175]: I0416 01:19:51.079044 2175 volume_manager.go:311] "Starting Kubelet Volume Manager" Apr 16 01:19:51.079237 kubelet[2175]: E0416 01:19:51.079231 2175 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:19:51.079431 kubelet[2175]: I0416 01:19:51.079414 2175 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 16 01:19:51.079462 kubelet[2175]: I0416 01:19:51.079441 2175 reconciler.go:29] "Reconciler: start to sync state" Apr 16 01:19:51.080133 kubelet[2175]: E0416 01:19:51.078395 2175 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.84:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.84:6443: connect: connection refused" 
event="&Event{ObjectMeta:{localhost.18a6b19d08dc1536 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 01:19:51.060862262 +0000 UTC m=+0.475062586,LastTimestamp:2026-04-16 01:19:51.060862262 +0000 UTC m=+0.475062586,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 01:19:51.080337 kubelet[2175]: E0416 01:19:51.080173 2175 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.84:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.84:6443: connect: connection refused" interval="200ms" Apr 16 01:19:51.081957 kubelet[2175]: I0416 01:19:51.081603 2175 factory.go:223] Registration of the systemd container factory successfully Apr 16 01:19:51.081957 kubelet[2175]: I0416 01:19:51.081937 2175 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 16 01:19:51.082639 kubelet[2175]: E0416 01:19:51.082592 2175 kubelet.go:1656] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 16 01:19:51.084113 kubelet[2175]: I0416 01:19:51.084068 2175 factory.go:223] Registration of the containerd container factory successfully Apr 16 01:19:51.124154 kubelet[2175]: I0416 01:19:51.121107 2175 cpu_manager.go:225] "Starting" policy="none" Apr 16 01:19:51.124154 kubelet[2175]: I0416 01:19:51.121118 2175 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Apr 16 01:19:51.124154 kubelet[2175]: I0416 01:19:51.121131 2175 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Apr 16 01:19:51.127053 kubelet[2175]: I0416 01:19:51.126957 2175 policy_none.go:50] "Start" Apr 16 01:19:51.127053 kubelet[2175]: I0416 01:19:51.127048 2175 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 16 01:19:51.127053 kubelet[2175]: I0416 01:19:51.127061 2175 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 16 01:19:51.130255 kubelet[2175]: I0416 01:19:51.130151 2175 policy_none.go:44] "Start" Apr 16 01:19:51.145445 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 16 01:19:51.173960 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 16 01:19:51.180376 kubelet[2175]: E0416 01:19:51.180073 2175 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 01:19:51.189963 kubelet[2175]: I0416 01:19:51.189826 2175 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 16 01:19:51.192282 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 16 01:19:51.193968 kubelet[2175]: I0416 01:19:51.193940 2175 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Apr 16 01:19:51.193968 kubelet[2175]: I0416 01:19:51.193967 2175 status_manager.go:249] "Starting to sync pod status with apiserver" Apr 16 01:19:51.194192 kubelet[2175]: I0416 01:19:51.193994 2175 kubelet.go:2501] "Starting kubelet main sync loop" Apr 16 01:19:51.194192 kubelet[2175]: E0416 01:19:51.194047 2175 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 16 01:19:51.197815 kubelet[2175]: E0416 01:19:51.195392 2175 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 16 01:19:51.197815 kubelet[2175]: I0416 01:19:51.195821 2175 eviction_manager.go:194] "Eviction manager: starting control loop" Apr 16 01:19:51.197815 kubelet[2175]: I0416 01:19:51.195832 2175 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 16 01:19:51.197815 kubelet[2175]: I0416 01:19:51.196066 2175 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Apr 16 01:19:51.198925 kubelet[2175]: E0416 01:19:51.198438 2175 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 16 01:19:51.198925 kubelet[2175]: E0416 01:19:51.198640 2175 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 01:19:51.281930 kubelet[2175]: E0416 01:19:51.281480 2175 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.84:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.84:6443: connect: connection refused" interval="400ms" Apr 16 01:19:51.299660 kubelet[2175]: I0416 01:19:51.299232 2175 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 16 01:19:51.299660 kubelet[2175]: E0416 01:19:51.299806 2175 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.84:6443/api/v1/nodes\": dial tcp 10.0.0.84:6443: connect: connection refused" node="localhost" Apr 16 01:19:51.322917 systemd[1]: Created slice kubepods-burstable-pod28614f0b25f8a1b92edf831048d5d707.slice - libcontainer container kubepods-burstable-pod28614f0b25f8a1b92edf831048d5d707.slice. Apr 16 01:19:51.353641 kubelet[2175]: E0416 01:19:51.351518 2175 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 01:19:51.359151 systemd[1]: Created slice kubepods-burstable-pod14bc29ec35edba17af38052ec24275f2.slice - libcontainer container kubepods-burstable-pod14bc29ec35edba17af38052ec24275f2.slice. Apr 16 01:19:51.362341 kubelet[2175]: E0416 01:19:51.362248 2175 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 01:19:51.366914 systemd[1]: Created slice kubepods-burstable-podf7c88b30fc803a3ec6b6c138191bdaca.slice - libcontainer container kubepods-burstable-podf7c88b30fc803a3ec6b6c138191bdaca.slice. 
Apr 16 01:19:51.370924 kubelet[2175]: E0416 01:19:51.370376 2175 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 01:19:51.383195 kubelet[2175]: I0416 01:19:51.380804 2175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 01:19:51.383195 kubelet[2175]: I0416 01:19:51.381101 2175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 01:19:51.383195 kubelet[2175]: I0416 01:19:51.381117 2175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 01:19:51.383195 kubelet[2175]: I0416 01:19:51.381130 2175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 01:19:51.383195 kubelet[2175]: I0416 01:19:51.381142 2175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" 
(UniqueName: \"kubernetes.io/host-path/f7c88b30fc803a3ec6b6c138191bdaca-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f7c88b30fc803a3ec6b6c138191bdaca\") " pod="kube-system/kube-scheduler-localhost" Apr 16 01:19:51.383437 kubelet[2175]: I0416 01:19:51.381154 2175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/28614f0b25f8a1b92edf831048d5d707-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"28614f0b25f8a1b92edf831048d5d707\") " pod="kube-system/kube-apiserver-localhost" Apr 16 01:19:51.383437 kubelet[2175]: I0416 01:19:51.381166 2175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/28614f0b25f8a1b92edf831048d5d707-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"28614f0b25f8a1b92edf831048d5d707\") " pod="kube-system/kube-apiserver-localhost" Apr 16 01:19:51.383437 kubelet[2175]: I0416 01:19:51.381178 2175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 01:19:51.383437 kubelet[2175]: I0416 01:19:51.381190 2175 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/28614f0b25f8a1b92edf831048d5d707-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"28614f0b25f8a1b92edf831048d5d707\") " pod="kube-system/kube-apiserver-localhost" Apr 16 01:19:51.505921 kubelet[2175]: I0416 01:19:51.505286 2175 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 16 01:19:51.506947 kubelet[2175]: E0416 
01:19:51.506840 2175 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.84:6443/api/v1/nodes\": dial tcp 10.0.0.84:6443: connect: connection refused" node="localhost" Apr 16 01:19:51.666239 kubelet[2175]: E0416 01:19:51.663948 2175 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:19:51.666429 containerd[1479]: time="2026-04-16T01:19:51.665613636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:28614f0b25f8a1b92edf831048d5d707,Namespace:kube-system,Attempt:0,}" Apr 16 01:19:51.672181 kubelet[2175]: E0416 01:19:51.671482 2175 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:19:51.673392 containerd[1479]: time="2026-04-16T01:19:51.672482165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:14bc29ec35edba17af38052ec24275f2,Namespace:kube-system,Attempt:0,}" Apr 16 01:19:51.679141 kubelet[2175]: E0416 01:19:51.678975 2175 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:19:51.681223 containerd[1479]: time="2026-04-16T01:19:51.681048744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f7c88b30fc803a3ec6b6c138191bdaca,Namespace:kube-system,Attempt:0,}" Apr 16 01:19:51.683245 kubelet[2175]: E0416 01:19:51.683104 2175 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.84:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.84:6443: connect: connection refused" interval="800ms" Apr 16 01:19:51.909397 kubelet[2175]: 
I0416 01:19:51.908987 2175 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 16 01:19:51.910092 kubelet[2175]: E0416 01:19:51.909593 2175 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.84:6443/api/v1/nodes\": dial tcp 10.0.0.84:6443: connect: connection refused" node="localhost" Apr 16 01:19:52.118591 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3089143826.mount: Deactivated successfully. Apr 16 01:19:52.136284 containerd[1479]: time="2026-04-16T01:19:52.134239307Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 01:19:52.141363 containerd[1479]: time="2026-04-16T01:19:52.141329810Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=311988" Apr 16 01:19:52.144453 containerd[1479]: time="2026-04-16T01:19:52.143977529Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 01:19:52.146045 containerd[1479]: time="2026-04-16T01:19:52.145953188Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 16 01:19:52.149308 containerd[1479]: time="2026-04-16T01:19:52.148957547Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 01:19:52.150952 containerd[1479]: time="2026-04-16T01:19:52.150431892Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 01:19:52.153517 containerd[1479]: 
time="2026-04-16T01:19:52.153300450Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 16 01:19:52.155172 containerd[1479]: time="2026-04-16T01:19:52.155125870Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 01:19:52.156345 containerd[1479]: time="2026-04-16T01:19:52.156125171Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 490.185884ms" Apr 16 01:19:52.158243 containerd[1479]: time="2026-04-16T01:19:52.158106714Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 477.001064ms" Apr 16 01:19:52.163162 containerd[1479]: time="2026-04-16T01:19:52.162556661Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 489.953182ms" Apr 16 01:19:52.345895 containerd[1479]: time="2026-04-16T01:19:52.344970033Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 01:19:52.345895 containerd[1479]: time="2026-04-16T01:19:52.345022709Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 01:19:52.345895 containerd[1479]: time="2026-04-16T01:19:52.345036219Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:19:52.345895 containerd[1479]: time="2026-04-16T01:19:52.345094071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:19:52.350531 containerd[1479]: time="2026-04-16T01:19:52.349372175Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 01:19:52.350531 containerd[1479]: time="2026-04-16T01:19:52.349411958Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 01:19:52.350531 containerd[1479]: time="2026-04-16T01:19:52.349420399Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:19:52.350531 containerd[1479]: time="2026-04-16T01:19:52.349466533Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:19:52.362150 containerd[1479]: time="2026-04-16T01:19:52.357501939Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 01:19:52.362150 containerd[1479]: time="2026-04-16T01:19:52.357581929Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 01:19:52.362150 containerd[1479]: time="2026-04-16T01:19:52.357602071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:19:52.362150 containerd[1479]: time="2026-04-16T01:19:52.358170962Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:19:52.400200 systemd[1]: Started cri-containerd-e6915caf01ae99729df3adc3a9cf26cd63ed7a83265c04087bd4b183e3b3eed0.scope - libcontainer container e6915caf01ae99729df3adc3a9cf26cd63ed7a83265c04087bd4b183e3b3eed0. Apr 16 01:19:52.420597 systemd[1]: Started cri-containerd-fec95dd751cc03570d866024482dbb2772ec59bc4ebb003faaa7bf76191d1a6c.scope - libcontainer container fec95dd751cc03570d866024482dbb2772ec59bc4ebb003faaa7bf76191d1a6c. Apr 16 01:19:52.427402 systemd[1]: Started cri-containerd-c1997f648f7e8000204bb6d59fb5e261a3835d176dc072e82009b141449e0c0d.scope - libcontainer container c1997f648f7e8000204bb6d59fb5e261a3835d176dc072e82009b141449e0c0d. 
Apr 16 01:19:52.484161 kubelet[2175]: E0416 01:19:52.483995 2175 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.84:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.84:6443: connect: connection refused" interval="1.6s" Apr 16 01:19:52.515403 containerd[1479]: time="2026-04-16T01:19:52.511535626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:28614f0b25f8a1b92edf831048d5d707,Namespace:kube-system,Attempt:0,} returns sandbox id \"e6915caf01ae99729df3adc3a9cf26cd63ed7a83265c04087bd4b183e3b3eed0\"" Apr 16 01:19:52.521762 kubelet[2175]: E0416 01:19:52.519079 2175 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:19:52.530915 containerd[1479]: time="2026-04-16T01:19:52.530489186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f7c88b30fc803a3ec6b6c138191bdaca,Namespace:kube-system,Attempt:0,} returns sandbox id \"c1997f648f7e8000204bb6d59fb5e261a3835d176dc072e82009b141449e0c0d\"" Apr 16 01:19:52.532628 containerd[1479]: time="2026-04-16T01:19:52.532601048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:14bc29ec35edba17af38052ec24275f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"fec95dd751cc03570d866024482dbb2772ec59bc4ebb003faaa7bf76191d1a6c\"" Apr 16 01:19:52.534432 kubelet[2175]: E0416 01:19:52.534300 2175 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:19:52.534503 kubelet[2175]: E0416 01:19:52.534466 2175 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Apr 16 01:19:52.536485 containerd[1479]: time="2026-04-16T01:19:52.536316303Z" level=info msg="CreateContainer within sandbox \"e6915caf01ae99729df3adc3a9cf26cd63ed7a83265c04087bd4b183e3b3eed0\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 16 01:19:52.542626 containerd[1479]: time="2026-04-16T01:19:52.542597973Z" level=info msg="CreateContainer within sandbox \"c1997f648f7e8000204bb6d59fb5e261a3835d176dc072e82009b141449e0c0d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 16 01:19:52.546588 containerd[1479]: time="2026-04-16T01:19:52.545982040Z" level=info msg="CreateContainer within sandbox \"fec95dd751cc03570d866024482dbb2772ec59bc4ebb003faaa7bf76191d1a6c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 16 01:19:52.579389 containerd[1479]: time="2026-04-16T01:19:52.579059168Z" level=info msg="CreateContainer within sandbox \"e6915caf01ae99729df3adc3a9cf26cd63ed7a83265c04087bd4b183e3b3eed0\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2abdbc0fe2640b0e92c295227a005c80184a6359305d25fab77fe1cec1478cf8\"" Apr 16 01:19:52.581162 containerd[1479]: time="2026-04-16T01:19:52.580432505Z" level=info msg="StartContainer for \"2abdbc0fe2640b0e92c295227a005c80184a6359305d25fab77fe1cec1478cf8\"" Apr 16 01:19:52.590339 containerd[1479]: time="2026-04-16T01:19:52.588630783Z" level=info msg="CreateContainer within sandbox \"c1997f648f7e8000204bb6d59fb5e261a3835d176dc072e82009b141449e0c0d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"68f9d4966f93b31e7910143ab0c47cd1da512fb6b1d4a3e57737379d77ba57cb\"" Apr 16 01:19:52.590339 containerd[1479]: time="2026-04-16T01:19:52.589657259Z" level=info msg="StartContainer for \"68f9d4966f93b31e7910143ab0c47cd1da512fb6b1d4a3e57737379d77ba57cb\"" Apr 16 01:19:52.600630 containerd[1479]: time="2026-04-16T01:19:52.599983602Z" level=info msg="CreateContainer within sandbox 
\"fec95dd751cc03570d866024482dbb2772ec59bc4ebb003faaa7bf76191d1a6c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a60df944a1bbca0cb0f3531e3d49d862be74e27c5fcb1fef7b409bcd97e5a5ca\"" Apr 16 01:19:52.600630 containerd[1479]: time="2026-04-16T01:19:52.600648550Z" level=info msg="StartContainer for \"a60df944a1bbca0cb0f3531e3d49d862be74e27c5fcb1fef7b409bcd97e5a5ca\"" Apr 16 01:19:52.649883 systemd[1]: Started cri-containerd-2abdbc0fe2640b0e92c295227a005c80184a6359305d25fab77fe1cec1478cf8.scope - libcontainer container 2abdbc0fe2640b0e92c295227a005c80184a6359305d25fab77fe1cec1478cf8. Apr 16 01:19:52.681534 systemd[1]: Started cri-containerd-68f9d4966f93b31e7910143ab0c47cd1da512fb6b1d4a3e57737379d77ba57cb.scope - libcontainer container 68f9d4966f93b31e7910143ab0c47cd1da512fb6b1d4a3e57737379d77ba57cb. Apr 16 01:19:52.713046 kubelet[2175]: I0416 01:19:52.712337 2175 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 16 01:19:52.713046 kubelet[2175]: E0416 01:19:52.712666 2175 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.84:6443/api/v1/nodes\": dial tcp 10.0.0.84:6443: connect: connection refused" node="localhost" Apr 16 01:19:52.712617 systemd[1]: Started cri-containerd-a60df944a1bbca0cb0f3531e3d49d862be74e27c5fcb1fef7b409bcd97e5a5ca.scope - libcontainer container a60df944a1bbca0cb0f3531e3d49d862be74e27c5fcb1fef7b409bcd97e5a5ca. 
Apr 16 01:19:52.752047 containerd[1479]: time="2026-04-16T01:19:52.751995999Z" level=info msg="StartContainer for \"2abdbc0fe2640b0e92c295227a005c80184a6359305d25fab77fe1cec1478cf8\" returns successfully" Apr 16 01:19:52.825057 containerd[1479]: time="2026-04-16T01:19:52.821150303Z" level=info msg="StartContainer for \"68f9d4966f93b31e7910143ab0c47cd1da512fb6b1d4a3e57737379d77ba57cb\" returns successfully" Apr 16 01:19:52.840554 containerd[1479]: time="2026-04-16T01:19:52.840290047Z" level=info msg="StartContainer for \"a60df944a1bbca0cb0f3531e3d49d862be74e27c5fcb1fef7b409bcd97e5a5ca\" returns successfully" Apr 16 01:19:53.216151 kubelet[2175]: E0416 01:19:53.215635 2175 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 01:19:53.216151 kubelet[2175]: E0416 01:19:53.216241 2175 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:19:53.224516 kubelet[2175]: E0416 01:19:53.223168 2175 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 01:19:53.224516 kubelet[2175]: E0416 01:19:53.223357 2175 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:19:53.237843 kubelet[2175]: E0416 01:19:53.236110 2175 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 01:19:53.237843 kubelet[2175]: E0416 01:19:53.236200 2175 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:19:54.235361 
kubelet[2175]: E0416 01:19:54.235199 2175 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 01:19:54.235361 kubelet[2175]: E0416 01:19:54.235354 2175 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:19:54.236433 kubelet[2175]: E0416 01:19:54.236293 2175 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 01:19:54.236433 kubelet[2175]: E0416 01:19:54.236358 2175 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:19:54.321932 kubelet[2175]: I0416 01:19:54.317579 2175 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 16 01:19:54.919416 kubelet[2175]: E0416 01:19:54.918836 2175 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Apr 16 01:19:55.017517 kubelet[2175]: I0416 01:19:55.016900 2175 kubelet_node_status.go:77] "Successfully registered node" node="localhost" Apr 16 01:19:55.037593 kubelet[2175]: I0416 01:19:55.036365 2175 apiserver.go:52] "Watching apiserver" Apr 16 01:19:55.080865 kubelet[2175]: I0416 01:19:55.080399 2175 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 16 01:19:55.080865 kubelet[2175]: I0416 01:19:55.080505 2175 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 16 01:19:55.092648 kubelet[2175]: E0416 01:19:55.091441 2175 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name 
system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Apr 16 01:19:55.092648 kubelet[2175]: I0416 01:19:55.091576 2175 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 16 01:19:55.096912 kubelet[2175]: E0416 01:19:55.096884 2175 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Apr 16 01:19:55.097309 kubelet[2175]: I0416 01:19:55.097017 2175 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 16 01:19:55.100640 kubelet[2175]: E0416 01:19:55.100373 2175 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Apr 16 01:19:57.563082 systemd[1]: Reloading requested from client PID 2464 ('systemctl') (unit session-9.scope)... Apr 16 01:19:57.563168 systemd[1]: Reloading... Apr 16 01:19:57.670861 zram_generator::config[2503]: No configuration found. Apr 16 01:19:57.845234 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 16 01:19:57.940049 systemd[1]: Reloading finished in 376 ms. Apr 16 01:19:58.024076 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 01:19:58.042993 systemd[1]: kubelet.service: Deactivated successfully. Apr 16 01:19:58.043319 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 01:19:58.043520 systemd[1]: kubelet.service: Consumed 1.460s CPU time, 129.0M memory peak, 0B memory swap peak. Apr 16 01:19:58.055149 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 16 01:19:58.279888 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 01:19:58.294299 (kubelet)[2548]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 16 01:19:58.462343 kubelet[2548]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 16 01:19:58.482574 kubelet[2548]: I0416 01:19:58.482293 2548 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Apr 16 01:19:58.482966 kubelet[2548]: I0416 01:19:58.482937 2548 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 16 01:19:58.482966 kubelet[2548]: I0416 01:19:58.482954 2548 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 16 01:19:58.482966 kubelet[2548]: I0416 01:19:58.482958 2548 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 16 01:19:58.483253 kubelet[2548]: I0416 01:19:58.483137 2548 server.go:951] "Client rotation is on, will bootstrap in background" Apr 16 01:19:58.484912 kubelet[2548]: I0416 01:19:58.484821 2548 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 16 01:19:58.488994 kubelet[2548]: I0416 01:19:58.488902 2548 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 16 01:19:58.500041 kubelet[2548]: E0416 01:19:58.497657 2548 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 16 01:19:58.500041 kubelet[2548]: I0416 01:19:58.497961 2548 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. 
Falling back to using cgroupDriver from kubelet config." Apr 16 01:19:58.521550 kubelet[2548]: I0416 01:19:58.520404 2548 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Apr 16 01:19:58.522065 kubelet[2548]: I0416 01:19:58.521975 2548 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 16 01:19:58.524929 kubelet[2548]: I0416 01:19:58.522059 2548 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyMana
gerPolicyOptions":null,"CgroupVersion":2} Apr 16 01:19:58.526791 kubelet[2548]: I0416 01:19:58.526194 2548 topology_manager.go:143] "Creating topology manager with none policy" Apr 16 01:19:58.526791 kubelet[2548]: I0416 01:19:58.526294 2548 container_manager_linux.go:308] "Creating device plugin manager" Apr 16 01:19:58.526791 kubelet[2548]: I0416 01:19:58.526325 2548 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Apr 16 01:19:58.536159 kubelet[2548]: I0416 01:19:58.533079 2548 state_mem.go:41] "Initialized" logger="CPUManager state memory" Apr 16 01:19:58.538410 kubelet[2548]: I0416 01:19:58.538106 2548 kubelet.go:482] "Attempting to sync node with API server" Apr 16 01:19:58.538410 kubelet[2548]: I0416 01:19:58.538123 2548 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 16 01:19:58.538410 kubelet[2548]: I0416 01:19:58.538141 2548 kubelet.go:394] "Adding apiserver pod source" Apr 16 01:19:58.538410 kubelet[2548]: I0416 01:19:58.538178 2548 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 16 01:19:58.558012 kubelet[2548]: I0416 01:19:58.557991 2548 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 16 01:19:58.564205 kubelet[2548]: I0416 01:19:58.561091 2548 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 16 01:19:58.564205 kubelet[2548]: I0416 01:19:58.561117 2548 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 16 01:19:58.588328 kubelet[2548]: I0416 01:19:58.588288 2548 server.go:1257] "Started kubelet" Apr 16 01:19:58.598877 kubelet[2548]: I0416 01:19:58.598366 2548 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Apr 16 01:19:58.603613 kubelet[2548]: 
I0416 01:19:58.601450 2548 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Apr 16 01:19:58.603613 kubelet[2548]: I0416 01:19:58.597988 2548 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 16 01:19:58.603613 kubelet[2548]: I0416 01:19:58.602323 2548 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 16 01:19:58.608371 kubelet[2548]: I0416 01:19:58.608143 2548 server.go:317] "Adding debug handlers to kubelet server" Apr 16 01:19:58.609331 kubelet[2548]: I0416 01:19:58.609181 2548 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 16 01:19:58.611175 kubelet[2548]: E0416 01:19:58.611162 2548 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 16 01:19:58.612253 kubelet[2548]: I0416 01:19:58.612242 2548 volume_manager.go:311] "Starting Kubelet Volume Manager" Apr 16 01:19:58.612373 kubelet[2548]: I0416 01:19:58.612367 2548 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 16 01:19:58.612654 kubelet[2548]: I0416 01:19:58.612647 2548 reconciler.go:29] "Reconciler: start to sync state" Apr 16 01:19:58.616964 kubelet[2548]: I0416 01:19:58.616953 2548 factory.go:223] Registration of the containerd container factory successfully Apr 16 01:19:58.617023 kubelet[2548]: I0416 01:19:58.617019 2548 factory.go:223] Registration of the systemd container factory successfully Apr 16 01:19:58.617111 kubelet[2548]: I0416 01:19:58.617101 2548 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 16 01:19:58.628595 kubelet[2548]: I0416 01:19:58.627211 2548 server.go:254] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 16 01:19:58.710148 kubelet[2548]: I0416 01:19:58.709133 2548 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 16 01:19:58.726136 kubelet[2548]: I0416 01:19:58.725339 2548 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 16 01:19:58.726136 kubelet[2548]: I0416 01:19:58.725360 2548 status_manager.go:249] "Starting to sync pod status with apiserver" Apr 16 01:19:58.726136 kubelet[2548]: I0416 01:19:58.725383 2548 kubelet.go:2501] "Starting kubelet main sync loop" Apr 16 01:19:58.726136 kubelet[2548]: E0416 01:19:58.725425 2548 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 16 01:19:58.752096 kubelet[2548]: I0416 01:19:58.751919 2548 cpu_manager.go:225] "Starting" policy="none" Apr 16 01:19:58.752096 kubelet[2548]: I0416 01:19:58.752014 2548 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Apr 16 01:19:58.752096 kubelet[2548]: I0416 01:19:58.752028 2548 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Apr 16 01:19:58.752096 kubelet[2548]: I0416 01:19:58.752117 2548 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet="" Apr 16 01:19:58.752096 kubelet[2548]: I0416 01:19:58.752124 2548 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={} Apr 16 01:19:58.752096 kubelet[2548]: I0416 01:19:58.752139 2548 policy_none.go:50] "Start" Apr 16 01:19:58.752096 kubelet[2548]: I0416 01:19:58.752148 2548 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 16 01:19:58.752096 kubelet[2548]: I0416 01:19:58.752155 2548 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 16 01:19:58.752619 kubelet[2548]: 
I0416 01:19:58.752230 2548 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Apr 16 01:19:58.752619 kubelet[2548]: I0416 01:19:58.752236 2548 policy_none.go:44] "Start" Apr 16 01:19:58.765453 kubelet[2548]: E0416 01:19:58.764964 2548 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 16 01:19:58.765453 kubelet[2548]: I0416 01:19:58.765309 2548 eviction_manager.go:194] "Eviction manager: starting control loop" Apr 16 01:19:58.765453 kubelet[2548]: I0416 01:19:58.765317 2548 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 16 01:19:58.769029 kubelet[2548]: I0416 01:19:58.767247 2548 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Apr 16 01:19:58.780305 kubelet[2548]: E0416 01:19:58.779952 2548 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 16 01:19:58.842387 kubelet[2548]: I0416 01:19:58.828982 2548 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 16 01:19:58.842387 kubelet[2548]: I0416 01:19:58.830013 2548 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 16 01:19:58.842387 kubelet[2548]: I0416 01:19:58.832651 2548 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 16 01:19:58.909821 kubelet[2548]: I0416 01:19:58.908954 2548 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 16 01:19:58.916108 kubelet[2548]: I0416 01:19:58.915638 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/28614f0b25f8a1b92edf831048d5d707-usr-share-ca-certificates\") pod 
\"kube-apiserver-localhost\" (UID: \"28614f0b25f8a1b92edf831048d5d707\") " pod="kube-system/kube-apiserver-localhost" Apr 16 01:19:58.919083 kubelet[2548]: I0416 01:19:58.917347 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 01:19:58.919083 kubelet[2548]: I0416 01:19:58.917377 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 01:19:58.919083 kubelet[2548]: I0416 01:19:58.917397 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 01:19:58.919083 kubelet[2548]: I0416 01:19:58.917413 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f7c88b30fc803a3ec6b6c138191bdaca-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f7c88b30fc803a3ec6b6c138191bdaca\") " pod="kube-system/kube-scheduler-localhost" Apr 16 01:19:58.919083 kubelet[2548]: I0416 01:19:58.917424 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/28614f0b25f8a1b92edf831048d5d707-ca-certs\") 
pod \"kube-apiserver-localhost\" (UID: \"28614f0b25f8a1b92edf831048d5d707\") " pod="kube-system/kube-apiserver-localhost" Apr 16 01:19:58.919225 kubelet[2548]: I0416 01:19:58.917433 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/28614f0b25f8a1b92edf831048d5d707-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"28614f0b25f8a1b92edf831048d5d707\") " pod="kube-system/kube-apiserver-localhost" Apr 16 01:19:58.919225 kubelet[2548]: I0416 01:19:58.917443 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 01:19:58.919225 kubelet[2548]: I0416 01:19:58.917453 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 01:19:58.922257 kubelet[2548]: I0416 01:19:58.921896 2548 kubelet_node_status.go:123] "Node was previously registered" node="localhost" Apr 16 01:19:58.922257 kubelet[2548]: I0416 01:19:58.922033 2548 kubelet_node_status.go:77] "Successfully registered node" node="localhost" Apr 16 01:19:59.153862 kubelet[2548]: E0416 01:19:59.152645 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:19:59.153862 kubelet[2548]: E0416 01:19:59.152871 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:19:59.153862 kubelet[2548]: E0416 01:19:59.152892 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:19:59.557068 kubelet[2548]: I0416 01:19:59.545019 2548 apiserver.go:52] "Watching apiserver" Apr 16 01:19:59.614129 kubelet[2548]: I0416 01:19:59.613434 2548 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 16 01:19:59.831443 kubelet[2548]: E0416 01:19:59.827525 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:19:59.831443 kubelet[2548]: I0416 01:19:59.828307 2548 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 16 01:19:59.831443 kubelet[2548]: I0416 01:19:59.828523 2548 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 16 01:19:59.873999 kubelet[2548]: E0416 01:19:59.869257 2548 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 16 01:19:59.873999 kubelet[2548]: E0416 01:19:59.869882 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:19:59.946440 kubelet[2548]: E0416 01:19:59.943925 2548 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Apr 16 01:19:59.948058 kubelet[2548]: E0416 01:19:59.948043 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:20:00.301221 kubelet[2548]: I0416 01:20:00.297133 2548 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.29712024 podStartE2EDuration="2.29712024s" podCreationTimestamp="2026-04-16 01:19:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 01:20:00.295373389 +0000 UTC m=+1.987507683" watchObservedRunningTime="2026-04-16 01:20:00.29712024 +0000 UTC m=+1.989254522" Apr 16 01:20:00.301221 kubelet[2548]: I0416 01:20:00.297220 2548 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.29721688 podStartE2EDuration="2.29721688s" podCreationTimestamp="2026-04-16 01:19:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 01:20:00.273400621 +0000 UTC m=+1.965534918" watchObservedRunningTime="2026-04-16 01:20:00.29721688 +0000 UTC m=+1.989351170" Apr 16 01:20:00.347943 kubelet[2548]: I0416 01:20:00.347389 2548 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.347380071 podStartE2EDuration="2.347380071s" podCreationTimestamp="2026-04-16 01:19:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 01:20:00.321860327 +0000 UTC m=+2.013994609" watchObservedRunningTime="2026-04-16 01:20:00.347380071 +0000 UTC m=+2.039514364" Apr 16 01:20:00.831632 kubelet[2548]: E0416 01:20:00.831388 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:20:00.834010 kubelet[2548]: 
E0416 01:20:00.832435 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:20:00.834010 kubelet[2548]: E0416 01:20:00.832586 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:20:01.836080 kubelet[2548]: E0416 01:20:01.835487 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:20:02.302288 kubelet[2548]: I0416 01:20:02.301570 2548 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 16 01:20:02.302989 kubelet[2548]: I0416 01:20:02.302897 2548 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 16 01:20:02.303040 containerd[1479]: time="2026-04-16T01:20:02.302301107Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Apr 16 01:20:02.837471 kubelet[2548]: E0416 01:20:02.837012 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:20:03.755113 kubelet[2548]: E0416 01:20:03.754622 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:20:03.758595 kubelet[2548]: E0416 01:20:03.757497 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:20:03.860383 systemd[1]: Created slice kubepods-besteffort-pod2a1b9e4f_ad5a_4c64_992d_437c07f4cc30.slice - libcontainer container kubepods-besteffort-pod2a1b9e4f_ad5a_4c64_992d_437c07f4cc30.slice. Apr 16 01:20:03.914185 systemd[1]: Created slice kubepods-besteffort-podeafd307a_c2c8_40e9_aa3f_46c7635806e6.slice - libcontainer container kubepods-besteffort-podeafd307a_c2c8_40e9_aa3f_46c7635806e6.slice. 
Apr 16 01:20:03.959533 kubelet[2548]: I0416 01:20:03.959136 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2q6f\" (UniqueName: \"kubernetes.io/projected/2a1b9e4f-ad5a-4c64-992d-437c07f4cc30-kube-api-access-m2q6f\") pod \"kube-proxy-6dxvm\" (UID: \"2a1b9e4f-ad5a-4c64-992d-437c07f4cc30\") " pod="kube-system/kube-proxy-6dxvm" Apr 16 01:20:03.959533 kubelet[2548]: I0416 01:20:03.959253 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2a1b9e4f-ad5a-4c64-992d-437c07f4cc30-lib-modules\") pod \"kube-proxy-6dxvm\" (UID: \"2a1b9e4f-ad5a-4c64-992d-437c07f4cc30\") " pod="kube-system/kube-proxy-6dxvm" Apr 16 01:20:03.959533 kubelet[2548]: I0416 01:20:03.959267 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95pdn\" (UniqueName: \"kubernetes.io/projected/eafd307a-c2c8-40e9-aa3f-46c7635806e6-kube-api-access-95pdn\") pod \"tigera-operator-6cf4cccc57-kqbqw\" (UID: \"eafd307a-c2c8-40e9-aa3f-46c7635806e6\") " pod="tigera-operator/tigera-operator-6cf4cccc57-kqbqw" Apr 16 01:20:03.959533 kubelet[2548]: I0416 01:20:03.959281 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2a1b9e4f-ad5a-4c64-992d-437c07f4cc30-kube-proxy\") pod \"kube-proxy-6dxvm\" (UID: \"2a1b9e4f-ad5a-4c64-992d-437c07f4cc30\") " pod="kube-system/kube-proxy-6dxvm" Apr 16 01:20:03.959533 kubelet[2548]: I0416 01:20:03.959377 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2a1b9e4f-ad5a-4c64-992d-437c07f4cc30-xtables-lock\") pod \"kube-proxy-6dxvm\" (UID: \"2a1b9e4f-ad5a-4c64-992d-437c07f4cc30\") " pod="kube-system/kube-proxy-6dxvm" Apr 16 01:20:03.960208 kubelet[2548]: 
I0416 01:20:03.959390 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/eafd307a-c2c8-40e9-aa3f-46c7635806e6-var-lib-calico\") pod \"tigera-operator-6cf4cccc57-kqbqw\" (UID: \"eafd307a-c2c8-40e9-aa3f-46c7635806e6\") " pod="tigera-operator/tigera-operator-6cf4cccc57-kqbqw" Apr 16 01:20:04.192458 kubelet[2548]: E0416 01:20:04.192124 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:20:04.193118 containerd[1479]: time="2026-04-16T01:20:04.192586115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6dxvm,Uid:2a1b9e4f-ad5a-4c64-992d-437c07f4cc30,Namespace:kube-system,Attempt:0,}" Apr 16 01:20:04.225083 containerd[1479]: time="2026-04-16T01:20:04.224641261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-kqbqw,Uid:eafd307a-c2c8-40e9-aa3f-46c7635806e6,Namespace:tigera-operator,Attempt:0,}" Apr 16 01:20:04.909158 containerd[1479]: time="2026-04-16T01:20:04.908453216Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 01:20:04.909158 containerd[1479]: time="2026-04-16T01:20:04.909076562Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 01:20:04.909158 containerd[1479]: time="2026-04-16T01:20:04.909162016Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:20:04.911808 containerd[1479]: time="2026-04-16T01:20:04.910395012Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:20:04.917819 update_engine[1462]: I20260416 01:20:04.915668 1462 update_attempter.cc:509] Updating boot flags... Apr 16 01:20:05.071157 containerd[1479]: time="2026-04-16T01:20:05.026891192Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 01:20:05.072527 containerd[1479]: time="2026-04-16T01:20:05.072399041Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 01:20:05.073074 containerd[1479]: time="2026-04-16T01:20:05.072805451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:20:05.075115 containerd[1479]: time="2026-04-16T01:20:05.074822792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:20:05.111893 systemd[1]: Started cri-containerd-a02d834de9e9b74b43954ac853f2769c5abff2fac3039b4f8cf7d52426f7c1d7.scope - libcontainer container a02d834de9e9b74b43954ac853f2769c5abff2fac3039b4f8cf7d52426f7c1d7. Apr 16 01:20:05.142937 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2651) Apr 16 01:20:05.838465 systemd[1]: Started cri-containerd-f92d3beeafa37eb62c815da439716d530e671944840fdd7a718ec3802c220cae.scope - libcontainer container f92d3beeafa37eb62c815da439716d530e671944840fdd7a718ec3802c220cae. 
Apr 16 01:20:05.946250 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2643) Apr 16 01:20:06.076862 containerd[1479]: time="2026-04-16T01:20:06.076497392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6dxvm,Uid:2a1b9e4f-ad5a-4c64-992d-437c07f4cc30,Namespace:kube-system,Attempt:0,} returns sandbox id \"f92d3beeafa37eb62c815da439716d530e671944840fdd7a718ec3802c220cae\"" Apr 16 01:20:06.078366 kubelet[2548]: E0416 01:20:06.077929 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:20:06.090624 containerd[1479]: time="2026-04-16T01:20:06.090159487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-kqbqw,Uid:eafd307a-c2c8-40e9-aa3f-46c7635806e6,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a02d834de9e9b74b43954ac853f2769c5abff2fac3039b4f8cf7d52426f7c1d7\"" Apr 16 01:20:06.096060 containerd[1479]: time="2026-04-16T01:20:06.094539106Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 16 01:20:06.096296 containerd[1479]: time="2026-04-16T01:20:06.096218803Z" level=info msg="CreateContainer within sandbox \"f92d3beeafa37eb62c815da439716d530e671944840fdd7a718ec3802c220cae\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 16 01:20:06.155036 containerd[1479]: time="2026-04-16T01:20:06.153234132Z" level=info msg="CreateContainer within sandbox \"f92d3beeafa37eb62c815da439716d530e671944840fdd7a718ec3802c220cae\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"75698258c5b875eff537ab1db2a3f736c86e0f79fc8ced328896d82d6df77ba9\"" Apr 16 01:20:06.157944 containerd[1479]: time="2026-04-16T01:20:06.157610107Z" level=info msg="StartContainer for \"75698258c5b875eff537ab1db2a3f736c86e0f79fc8ced328896d82d6df77ba9\"" Apr 16 01:20:06.498634 systemd[1]: Started 
cri-containerd-75698258c5b875eff537ab1db2a3f736c86e0f79fc8ced328896d82d6df77ba9.scope - libcontainer container 75698258c5b875eff537ab1db2a3f736c86e0f79fc8ced328896d82d6df77ba9. Apr 16 01:20:07.110541 containerd[1479]: time="2026-04-16T01:20:07.108571436Z" level=info msg="StartContainer for \"75698258c5b875eff537ab1db2a3f736c86e0f79fc8ced328896d82d6df77ba9\" returns successfully" Apr 16 01:20:07.780370 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2954517426.mount: Deactivated successfully. Apr 16 01:20:07.984432 kubelet[2548]: E0416 01:20:07.983425 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:20:08.997862 kubelet[2548]: E0416 01:20:08.997116 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:20:12.374091 containerd[1479]: time="2026-04-16T01:20:12.373409809Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:20:12.378406 containerd[1479]: time="2026-04-16T01:20:12.377959142Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Apr 16 01:20:12.381353 containerd[1479]: time="2026-04-16T01:20:12.381125161Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:20:12.388051 containerd[1479]: time="2026-04-16T01:20:12.387941807Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:20:12.391222 containerd[1479]: time="2026-04-16T01:20:12.389961404Z" 
level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 6.295391231s" Apr 16 01:20:12.391222 containerd[1479]: time="2026-04-16T01:20:12.389989773Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Apr 16 01:20:12.407636 kubelet[2548]: E0416 01:20:12.407432 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:20:12.417109 containerd[1479]: time="2026-04-16T01:20:12.415104521Z" level=info msg="CreateContainer within sandbox \"a02d834de9e9b74b43954ac853f2769c5abff2fac3039b4f8cf7d52426f7c1d7\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 16 01:20:12.429648 kubelet[2548]: I0416 01:20:12.429570 2548 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-6dxvm" podStartSLOduration=9.429559315 podStartE2EDuration="9.429559315s" podCreationTimestamp="2026-04-16 01:20:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 01:20:08.04609734 +0000 UTC m=+9.738231630" watchObservedRunningTime="2026-04-16 01:20:12.429559315 +0000 UTC m=+14.121693608" Apr 16 01:20:12.457015 containerd[1479]: time="2026-04-16T01:20:12.456336051Z" level=info msg="CreateContainer within sandbox \"a02d834de9e9b74b43954ac853f2769c5abff2fac3039b4f8cf7d52426f7c1d7\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"db710249906b4c5adc334510383238db9be15d2bc51579a0805a866bef69068e\"" Apr 16 
01:20:12.459890 containerd[1479]: time="2026-04-16T01:20:12.458305660Z" level=info msg="StartContainer for \"db710249906b4c5adc334510383238db9be15d2bc51579a0805a866bef69068e\"" Apr 16 01:20:12.745187 systemd[1]: Started cri-containerd-db710249906b4c5adc334510383238db9be15d2bc51579a0805a866bef69068e.scope - libcontainer container db710249906b4c5adc334510383238db9be15d2bc51579a0805a866bef69068e. Apr 16 01:20:12.815565 containerd[1479]: time="2026-04-16T01:20:12.815029721Z" level=info msg="StartContainer for \"db710249906b4c5adc334510383238db9be15d2bc51579a0805a866bef69068e\" returns successfully" Apr 16 01:20:13.158102 kubelet[2548]: I0416 01:20:13.157185 2548 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6cf4cccc57-kqbqw" podStartSLOduration=3.856685813 podStartE2EDuration="10.157151165s" podCreationTimestamp="2026-04-16 01:20:03 +0000 UTC" firstStartedPulling="2026-04-16 01:20:06.093890969 +0000 UTC m=+7.786025255" lastFinishedPulling="2026-04-16 01:20:12.394356324 +0000 UTC m=+14.086490607" observedRunningTime="2026-04-16 01:20:13.156564917 +0000 UTC m=+14.848699204" watchObservedRunningTime="2026-04-16 01:20:13.157151165 +0000 UTC m=+14.849285458" Apr 16 01:20:13.792379 kubelet[2548]: E0416 01:20:13.792223 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:20:13.792379 kubelet[2548]: E0416 01:20:13.794109 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:20:21.097127 sudo[1658]: pam_unix(sudo:session): session closed for user root Apr 16 01:20:21.108164 sshd[1655]: pam_unix(sshd:session): session closed for user core Apr 16 01:20:21.116315 systemd[1]: sshd@8-10.0.0.84:22-10.0.0.1:50680.service: Deactivated successfully. 
Apr 16 01:20:21.124327 systemd[1]: session-9.scope: Deactivated successfully. Apr 16 01:20:21.127174 systemd[1]: session-9.scope: Consumed 7.524s CPU time, 160.2M memory peak, 0B memory swap peak. Apr 16 01:20:21.146283 systemd-logind[1454]: Session 9 logged out. Waiting for processes to exit. Apr 16 01:20:21.156490 systemd-logind[1454]: Removed session 9. Apr 16 01:20:23.560095 systemd[1]: Created slice kubepods-besteffort-pod2c70c532_84ff_475e_ba1a_79d852890e1c.slice - libcontainer container kubepods-besteffort-pod2c70c532_84ff_475e_ba1a_79d852890e1c.slice. Apr 16 01:20:23.625909 kubelet[2548]: I0416 01:20:23.622500 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lw48g\" (UniqueName: \"kubernetes.io/projected/2c70c532-84ff-475e-ba1a-79d852890e1c-kube-api-access-lw48g\") pod \"calico-typha-59c555f49f-h8pmz\" (UID: \"2c70c532-84ff-475e-ba1a-79d852890e1c\") " pod="calico-system/calico-typha-59c555f49f-h8pmz" Apr 16 01:20:23.625909 kubelet[2548]: I0416 01:20:23.622665 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2c70c532-84ff-475e-ba1a-79d852890e1c-tigera-ca-bundle\") pod \"calico-typha-59c555f49f-h8pmz\" (UID: \"2c70c532-84ff-475e-ba1a-79d852890e1c\") " pod="calico-system/calico-typha-59c555f49f-h8pmz" Apr 16 01:20:23.625909 kubelet[2548]: I0416 01:20:23.623052 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2c70c532-84ff-475e-ba1a-79d852890e1c-typha-certs\") pod \"calico-typha-59c555f49f-h8pmz\" (UID: \"2c70c532-84ff-475e-ba1a-79d852890e1c\") " pod="calico-system/calico-typha-59c555f49f-h8pmz" Apr 16 01:20:23.827648 systemd[1]: Created slice kubepods-besteffort-pod5bfa4575_f3b7_4f5b_a2e9_56bfc7143036.slice - libcontainer container 
kubepods-besteffort-pod5bfa4575_f3b7_4f5b_a2e9_56bfc7143036.slice. Apr 16 01:20:23.894106 kubelet[2548]: E0416 01:20:23.891265 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:20:23.894223 containerd[1479]: time="2026-04-16T01:20:23.893060233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-59c555f49f-h8pmz,Uid:2c70c532-84ff-475e-ba1a-79d852890e1c,Namespace:calico-system,Attempt:0,}" Apr 16 01:20:23.935239 kubelet[2548]: I0416 01:20:23.931278 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/5bfa4575-f3b7-4f5b-a2e9-56bfc7143036-bpffs\") pod \"calico-node-z8n4h\" (UID: \"5bfa4575-f3b7-4f5b-a2e9-56bfc7143036\") " pod="calico-system/calico-node-z8n4h" Apr 16 01:20:23.935239 kubelet[2548]: I0416 01:20:23.931490 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5bfa4575-f3b7-4f5b-a2e9-56bfc7143036-flexvol-driver-host\") pod \"calico-node-z8n4h\" (UID: \"5bfa4575-f3b7-4f5b-a2e9-56bfc7143036\") " pod="calico-system/calico-node-z8n4h" Apr 16 01:20:23.935239 kubelet[2548]: I0416 01:20:23.931506 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5bfa4575-f3b7-4f5b-a2e9-56bfc7143036-node-certs\") pod \"calico-node-z8n4h\" (UID: \"5bfa4575-f3b7-4f5b-a2e9-56bfc7143036\") " pod="calico-system/calico-node-z8n4h" Apr 16 01:20:23.935239 kubelet[2548]: I0416 01:20:23.931522 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v4p9\" (UniqueName: \"kubernetes.io/projected/5bfa4575-f3b7-4f5b-a2e9-56bfc7143036-kube-api-access-9v4p9\") pod 
\"calico-node-z8n4h\" (UID: \"5bfa4575-f3b7-4f5b-a2e9-56bfc7143036\") " pod="calico-system/calico-node-z8n4h" Apr 16 01:20:23.935239 kubelet[2548]: I0416 01:20:23.931535 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5bfa4575-f3b7-4f5b-a2e9-56bfc7143036-policysync\") pod \"calico-node-z8n4h\" (UID: \"5bfa4575-f3b7-4f5b-a2e9-56bfc7143036\") " pod="calico-system/calico-node-z8n4h" Apr 16 01:20:23.936057 kubelet[2548]: I0416 01:20:23.931548 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/5bfa4575-f3b7-4f5b-a2e9-56bfc7143036-sys-fs\") pod \"calico-node-z8n4h\" (UID: \"5bfa4575-f3b7-4f5b-a2e9-56bfc7143036\") " pod="calico-system/calico-node-z8n4h" Apr 16 01:20:23.936057 kubelet[2548]: I0416 01:20:23.931588 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5bfa4575-f3b7-4f5b-a2e9-56bfc7143036-var-lib-calico\") pod \"calico-node-z8n4h\" (UID: \"5bfa4575-f3b7-4f5b-a2e9-56bfc7143036\") " pod="calico-system/calico-node-z8n4h" Apr 16 01:20:23.936057 kubelet[2548]: I0416 01:20:23.931603 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5bfa4575-f3b7-4f5b-a2e9-56bfc7143036-cni-bin-dir\") pod \"calico-node-z8n4h\" (UID: \"5bfa4575-f3b7-4f5b-a2e9-56bfc7143036\") " pod="calico-system/calico-node-z8n4h" Apr 16 01:20:23.936057 kubelet[2548]: I0416 01:20:23.931616 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bfa4575-f3b7-4f5b-a2e9-56bfc7143036-tigera-ca-bundle\") pod \"calico-node-z8n4h\" (UID: \"5bfa4575-f3b7-4f5b-a2e9-56bfc7143036\") " 
pod="calico-system/calico-node-z8n4h" Apr 16 01:20:23.936057 kubelet[2548]: I0416 01:20:23.931633 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5bfa4575-f3b7-4f5b-a2e9-56bfc7143036-cni-log-dir\") pod \"calico-node-z8n4h\" (UID: \"5bfa4575-f3b7-4f5b-a2e9-56bfc7143036\") " pod="calico-system/calico-node-z8n4h" Apr 16 01:20:23.936137 kubelet[2548]: I0416 01:20:23.931648 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5bfa4575-f3b7-4f5b-a2e9-56bfc7143036-lib-modules\") pod \"calico-node-z8n4h\" (UID: \"5bfa4575-f3b7-4f5b-a2e9-56bfc7143036\") " pod="calico-system/calico-node-z8n4h" Apr 16 01:20:23.936137 kubelet[2548]: I0416 01:20:23.931659 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5bfa4575-f3b7-4f5b-a2e9-56bfc7143036-cni-net-dir\") pod \"calico-node-z8n4h\" (UID: \"5bfa4575-f3b7-4f5b-a2e9-56bfc7143036\") " pod="calico-system/calico-node-z8n4h" Apr 16 01:20:23.936137 kubelet[2548]: I0416 01:20:23.931827 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/5bfa4575-f3b7-4f5b-a2e9-56bfc7143036-nodeproc\") pod \"calico-node-z8n4h\" (UID: \"5bfa4575-f3b7-4f5b-a2e9-56bfc7143036\") " pod="calico-system/calico-node-z8n4h" Apr 16 01:20:23.936137 kubelet[2548]: I0416 01:20:23.931840 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5bfa4575-f3b7-4f5b-a2e9-56bfc7143036-xtables-lock\") pod \"calico-node-z8n4h\" (UID: \"5bfa4575-f3b7-4f5b-a2e9-56bfc7143036\") " pod="calico-system/calico-node-z8n4h" Apr 16 01:20:23.936137 kubelet[2548]: I0416 01:20:23.931862 
2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5bfa4575-f3b7-4f5b-a2e9-56bfc7143036-var-run-calico\") pod \"calico-node-z8n4h\" (UID: \"5bfa4575-f3b7-4f5b-a2e9-56bfc7143036\") " pod="calico-system/calico-node-z8n4h" Apr 16 01:20:23.990316 containerd[1479]: time="2026-04-16T01:20:23.990221745Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 01:20:23.996066 containerd[1479]: time="2026-04-16T01:20:23.992645508Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 01:20:24.004309 containerd[1479]: time="2026-04-16T01:20:23.995217898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:20:24.004309 containerd[1479]: time="2026-04-16T01:20:23.996291930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:20:24.051091 kubelet[2548]: E0416 01:20:24.048843 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:24.051091 kubelet[2548]: W0416 01:20:24.048875 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:24.051091 kubelet[2548]: E0416 01:20:24.049256 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:20:24.052190 kubelet[2548]: E0416 01:20:24.052174 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:24.052660 kubelet[2548]: W0416 01:20:24.052263 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:24.052660 kubelet[2548]: E0416 01:20:24.052284 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:20:24.057184 kubelet[2548]: E0416 01:20:24.057155 2548 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b2mtc" podUID="36e0b1c3-35d3-4f7d-a631-c6ac0e723311" Apr 16 01:20:24.064532 kubelet[2548]: E0416 01:20:24.060393 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:24.064532 kubelet[2548]: W0416 01:20:24.060530 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:24.064532 kubelet[2548]: E0416 01:20:24.060547 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:20:24.067385 kubelet[2548]: E0416 01:20:24.066060 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:24.067385 kubelet[2548]: W0416 01:20:24.066179 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:24.067385 kubelet[2548]: E0416 01:20:24.066228 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:20:24.069875 kubelet[2548]: E0416 01:20:24.068430 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:24.069875 kubelet[2548]: W0416 01:20:24.068441 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:24.069875 kubelet[2548]: E0416 01:20:24.068453 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:20:24.073623 kubelet[2548]: E0416 01:20:24.072150 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:24.073623 kubelet[2548]: W0416 01:20:24.072163 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:24.073623 kubelet[2548]: E0416 01:20:24.072175 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:20:24.073623 kubelet[2548]: E0416 01:20:24.073151 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:24.073623 kubelet[2548]: W0416 01:20:24.073160 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:24.073623 kubelet[2548]: E0416 01:20:24.073170 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:20:24.082174 systemd[1]: Started cri-containerd-b0c65ec3fc1c1dcec0e208a4f674ce1368cadbe4b4b74d28804b921aa9387ff7.scope - libcontainer container b0c65ec3fc1c1dcec0e208a4f674ce1368cadbe4b4b74d28804b921aa9387ff7. 
Apr 16 01:20:24.085280 kubelet[2548]: E0416 01:20:24.084520 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:24.091207 kubelet[2548]: W0416 01:20:24.090510 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:24.091207 kubelet[2548]: E0416 01:20:24.090605 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:20:24.091207 kubelet[2548]: E0416 01:20:24.091211 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:24.091343 kubelet[2548]: W0416 01:20:24.091220 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:24.091343 kubelet[2548]: E0416 01:20:24.091233 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:20:24.091343 kubelet[2548]: E0416 01:20:24.091331 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:24.091343 kubelet[2548]: W0416 01:20:24.091336 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:24.091343 kubelet[2548]: E0416 01:20:24.091341 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:20:24.091524 kubelet[2548]: E0416 01:20:24.091423 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:24.091524 kubelet[2548]: W0416 01:20:24.091430 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:24.091524 kubelet[2548]: E0416 01:20:24.091435 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:20:24.091524 kubelet[2548]: E0416 01:20:24.091515 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:24.091524 kubelet[2548]: W0416 01:20:24.091519 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:24.091524 kubelet[2548]: E0416 01:20:24.091524 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:20:24.092316 kubelet[2548]: E0416 01:20:24.092206 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:24.092316 kubelet[2548]: W0416 01:20:24.092221 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:24.092316 kubelet[2548]: E0416 01:20:24.092232 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:20:24.092402 kubelet[2548]: E0416 01:20:24.092350 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:24.092402 kubelet[2548]: W0416 01:20:24.092354 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:24.092402 kubelet[2548]: E0416 01:20:24.092360 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:20:24.092456 kubelet[2548]: E0416 01:20:24.092441 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:24.092456 kubelet[2548]: W0416 01:20:24.092445 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:24.092456 kubelet[2548]: E0416 01:20:24.092450 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:20:24.093919 kubelet[2548]: E0416 01:20:24.092561 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:24.093919 kubelet[2548]: W0416 01:20:24.092568 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:24.093919 kubelet[2548]: E0416 01:20:24.092573 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:20:24.093919 kubelet[2548]: E0416 01:20:24.093210 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:24.093919 kubelet[2548]: W0416 01:20:24.093217 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:24.093919 kubelet[2548]: E0416 01:20:24.093225 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:20:24.093919 kubelet[2548]: E0416 01:20:24.093352 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:24.093919 kubelet[2548]: W0416 01:20:24.093358 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:24.093919 kubelet[2548]: E0416 01:20:24.093364 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:20:24.093919 kubelet[2548]: E0416 01:20:24.093916 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:24.094189 kubelet[2548]: W0416 01:20:24.093922 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:24.094189 kubelet[2548]: E0416 01:20:24.094023 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:20:24.094189 kubelet[2548]: E0416 01:20:24.094136 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:24.094189 kubelet[2548]: W0416 01:20:24.094141 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:24.094189 kubelet[2548]: E0416 01:20:24.094146 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:20:24.094305 kubelet[2548]: E0416 01:20:24.094226 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:24.094305 kubelet[2548]: W0416 01:20:24.094230 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:24.094305 kubelet[2548]: E0416 01:20:24.094234 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:20:24.094381 kubelet[2548]: E0416 01:20:24.094308 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:24.094381 kubelet[2548]: W0416 01:20:24.094312 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:24.094381 kubelet[2548]: E0416 01:20:24.094318 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:20:24.094435 kubelet[2548]: E0416 01:20:24.094394 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:24.094435 kubelet[2548]: W0416 01:20:24.094398 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:24.094435 kubelet[2548]: E0416 01:20:24.094402 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:20:24.095210 kubelet[2548]: E0416 01:20:24.095106 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:24.095210 kubelet[2548]: W0416 01:20:24.095199 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:24.095210 kubelet[2548]: E0416 01:20:24.095207 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:20:24.095611 kubelet[2548]: E0416 01:20:24.095333 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:24.095611 kubelet[2548]: W0416 01:20:24.095340 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:24.095611 kubelet[2548]: E0416 01:20:24.095345 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:20:24.095611 kubelet[2548]: E0416 01:20:24.095460 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:24.095611 kubelet[2548]: W0416 01:20:24.095464 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:24.095611 kubelet[2548]: E0416 01:20:24.095469 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:20:24.095611 kubelet[2548]: E0416 01:20:24.095591 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:24.095611 kubelet[2548]: W0416 01:20:24.095596 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:24.095611 kubelet[2548]: E0416 01:20:24.095600 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:20:24.098272 kubelet[2548]: E0416 01:20:24.096044 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:24.098272 kubelet[2548]: W0416 01:20:24.096051 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:24.098272 kubelet[2548]: E0416 01:20:24.096058 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:20:24.098272 kubelet[2548]: E0416 01:20:24.098063 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:24.098272 kubelet[2548]: W0416 01:20:24.098076 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:24.098272 kubelet[2548]: E0416 01:20:24.098109 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:20:24.099654 kubelet[2548]: E0416 01:20:24.098894 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:24.099654 kubelet[2548]: W0416 01:20:24.098908 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:24.099654 kubelet[2548]: E0416 01:20:24.098916 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:20:24.102814 kubelet[2548]: E0416 01:20:24.099911 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:24.102814 kubelet[2548]: W0416 01:20:24.099920 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:24.102814 kubelet[2548]: E0416 01:20:24.099928 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:20:24.102814 kubelet[2548]: E0416 01:20:24.100141 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:24.102814 kubelet[2548]: W0416 01:20:24.100146 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:24.102814 kubelet[2548]: E0416 01:20:24.100152 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:20:24.102814 kubelet[2548]: E0416 01:20:24.100249 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:24.102814 kubelet[2548]: W0416 01:20:24.100254 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:24.102814 kubelet[2548]: E0416 01:20:24.100259 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:20:24.102814 kubelet[2548]: E0416 01:20:24.100355 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:24.103107 kubelet[2548]: W0416 01:20:24.100359 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:24.103107 kubelet[2548]: E0416 01:20:24.100364 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:20:24.103107 kubelet[2548]: E0416 01:20:24.100448 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:24.103107 kubelet[2548]: W0416 01:20:24.100452 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:24.103107 kubelet[2548]: E0416 01:20:24.100457 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:20:24.103107 kubelet[2548]: E0416 01:20:24.100539 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:24.103107 kubelet[2548]: W0416 01:20:24.100543 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:24.103107 kubelet[2548]: E0416 01:20:24.100547 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:20:24.103107 kubelet[2548]: E0416 01:20:24.100628 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:24.103107 kubelet[2548]: W0416 01:20:24.100632 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:24.103352 kubelet[2548]: E0416 01:20:24.100637 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:20:24.103352 kubelet[2548]: E0416 01:20:24.101144 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:24.103352 kubelet[2548]: W0416 01:20:24.101151 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:24.103352 kubelet[2548]: E0416 01:20:24.101158 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:20:24.103352 kubelet[2548]: E0416 01:20:24.101253 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:24.103352 kubelet[2548]: W0416 01:20:24.101257 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:24.103352 kubelet[2548]: E0416 01:20:24.101262 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:20:24.103352 kubelet[2548]: E0416 01:20:24.101351 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:24.103352 kubelet[2548]: W0416 01:20:24.101356 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:24.103352 kubelet[2548]: E0416 01:20:24.101361 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:20:24.103491 kubelet[2548]: E0416 01:20:24.101444 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:24.103491 kubelet[2548]: W0416 01:20:24.101448 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:24.103491 kubelet[2548]: E0416 01:20:24.101452 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:20:24.137915 kubelet[2548]: E0416 01:20:24.137528 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:24.158085 kubelet[2548]: W0416 01:20:24.148661 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:24.158085 kubelet[2548]: E0416 01:20:24.157281 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:20:24.158085 kubelet[2548]: I0416 01:20:24.157318 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzx8h\" (UniqueName: \"kubernetes.io/projected/36e0b1c3-35d3-4f7d-a631-c6ac0e723311-kube-api-access-qzx8h\") pod \"csi-node-driver-b2mtc\" (UID: \"36e0b1c3-35d3-4f7d-a631-c6ac0e723311\") " pod="calico-system/csi-node-driver-b2mtc" Apr 16 01:20:24.160836 kubelet[2548]: E0416 01:20:24.160435 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:24.160836 kubelet[2548]: W0416 01:20:24.160578 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:24.160836 kubelet[2548]: E0416 01:20:24.160656 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:20:24.162853 kubelet[2548]: I0416 01:20:24.162194 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/36e0b1c3-35d3-4f7d-a631-c6ac0e723311-kubelet-dir\") pod \"csi-node-driver-b2mtc\" (UID: \"36e0b1c3-35d3-4f7d-a631-c6ac0e723311\") " pod="calico-system/csi-node-driver-b2mtc" Apr 16 01:20:24.165847 kubelet[2548]: E0416 01:20:24.164255 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:24.165847 kubelet[2548]: W0416 01:20:24.164499 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:24.165847 kubelet[2548]: E0416 01:20:24.164518 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:20:24.165847 kubelet[2548]: E0416 01:20:24.165366 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:24.165847 kubelet[2548]: W0416 01:20:24.165375 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:24.165847 kubelet[2548]: E0416 01:20:24.165386 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:20:24.168367 kubelet[2548]: E0416 01:20:24.166654 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:24.168367 kubelet[2548]: W0416 01:20:24.167209 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:24.168367 kubelet[2548]: E0416 01:20:24.167226 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:20:24.168367 kubelet[2548]: I0416 01:20:24.167352 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/36e0b1c3-35d3-4f7d-a631-c6ac0e723311-registration-dir\") pod \"csi-node-driver-b2mtc\" (UID: \"36e0b1c3-35d3-4f7d-a631-c6ac0e723311\") " pod="calico-system/csi-node-driver-b2mtc" Apr 16 01:20:24.173172 kubelet[2548]: E0416 01:20:24.170197 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:24.173172 kubelet[2548]: W0416 01:20:24.170249 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:24.173172 kubelet[2548]: E0416 01:20:24.170298 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:20:24.174061 kubelet[2548]: E0416 01:20:24.173594 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:24.174061 kubelet[2548]: W0416 01:20:24.173612 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:24.174061 kubelet[2548]: E0416 01:20:24.173663 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:20:24.174061 kubelet[2548]: I0416 01:20:24.173929 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/36e0b1c3-35d3-4f7d-a631-c6ac0e723311-varrun\") pod \"csi-node-driver-b2mtc\" (UID: \"36e0b1c3-35d3-4f7d-a631-c6ac0e723311\") " pod="calico-system/csi-node-driver-b2mtc" Apr 16 01:20:24.179872 kubelet[2548]: E0416 01:20:24.174904 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:24.179872 kubelet[2548]: W0416 01:20:24.174917 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:24.179872 kubelet[2548]: E0416 01:20:24.174926 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:20:24.182620 kubelet[2548]: E0416 01:20:24.182383 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:24.182620 kubelet[2548]: W0416 01:20:24.182488 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:24.182620 kubelet[2548]: E0416 01:20:24.182531 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:20:24.183434 kubelet[2548]: E0416 01:20:24.183183 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:24.183434 kubelet[2548]: W0416 01:20:24.183190 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:24.183434 kubelet[2548]: E0416 01:20:24.183199 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:20:24.183434 kubelet[2548]: E0416 01:20:24.183304 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:24.183434 kubelet[2548]: W0416 01:20:24.183309 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:24.183434 kubelet[2548]: E0416 01:20:24.183314 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:20:24.183434 kubelet[2548]: E0416 01:20:24.183414 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:24.183434 kubelet[2548]: W0416 01:20:24.183420 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:24.183434 kubelet[2548]: E0416 01:20:24.183424 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:20:24.185930 kubelet[2548]: E0416 01:20:24.183902 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:24.185930 kubelet[2548]: W0416 01:20:24.183913 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:24.185930 kubelet[2548]: E0416 01:20:24.183921 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:20:24.185930 kubelet[2548]: I0416 01:20:24.184287 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/36e0b1c3-35d3-4f7d-a631-c6ac0e723311-socket-dir\") pod \"csi-node-driver-b2mtc\" (UID: \"36e0b1c3-35d3-4f7d-a631-c6ac0e723311\") " pod="calico-system/csi-node-driver-b2mtc" Apr 16 01:20:24.190481 kubelet[2548]: E0416 01:20:24.187141 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:24.190481 kubelet[2548]: W0416 01:20:24.187169 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:24.190481 kubelet[2548]: E0416 01:20:24.187224 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:20:24.192190 kubelet[2548]: E0416 01:20:24.191488 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:24.192190 kubelet[2548]: W0416 01:20:24.191630 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:24.192278 kubelet[2548]: E0416 01:20:24.192213 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:20:24.196278 kubelet[2548]: E0416 01:20:24.196251 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:24.196346 kubelet[2548]: W0416 01:20:24.196335 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:24.197865 kubelet[2548]: E0416 01:20:24.196389 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:20:24.291060 kubelet[2548]: E0416 01:20:24.290931 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:24.292224 kubelet[2548]: W0416 01:20:24.292196 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:24.293877 kubelet[2548]: E0416 01:20:24.292333 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:20:24.302870 kubelet[2548]: E0416 01:20:24.301414 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:24.302870 kubelet[2548]: W0416 01:20:24.301434 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:24.302870 kubelet[2548]: E0416 01:20:24.301476 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:20:24.303134 kubelet[2548]: E0416 01:20:24.303065 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:24.303134 kubelet[2548]: W0416 01:20:24.303076 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:24.303134 kubelet[2548]: E0416 01:20:24.303088 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:20:24.303320 kubelet[2548]: E0416 01:20:24.303213 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:24.303320 kubelet[2548]: W0416 01:20:24.303218 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:24.303320 kubelet[2548]: E0416 01:20:24.303223 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:20:24.399926 containerd[1479]: time="2026-04-16T01:20:24.399599768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-59c555f49f-h8pmz,Uid:2c70c532-84ff-475e-ba1a-79d852890e1c,Namespace:calico-system,Attempt:0,} returns sandbox id \"b0c65ec3fc1c1dcec0e208a4f674ce1368cadbe4b4b74d28804b921aa9387ff7\"" Apr 16 01:20:24.401124 kubelet[2548]: E0416 01:20:24.401104 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:20:24.408279 containerd[1479]: time="2026-04-16T01:20:24.407488697Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 16 01:20:24.457637 containerd[1479]: time="2026-04-16T01:20:24.457457930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-z8n4h,Uid:5bfa4575-f3b7-4f5b-a2e9-56bfc7143036,Namespace:calico-system,Attempt:0,}" Apr 16 01:20:24.619103 containerd[1479]: time="2026-04-16T01:20:24.612553369Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 01:20:24.619103 containerd[1479]: time="2026-04-16T01:20:24.618649869Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 01:20:24.619103 containerd[1479]: time="2026-04-16T01:20:24.618661376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:20:24.622390 containerd[1479]: time="2026-04-16T01:20:24.621645618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:20:24.676875 systemd[1]: Started cri-containerd-ad07eb38ec65434f4ab6ffeb0731290c73622c26118856bdd39b2f4f6ca9cb2f.scope - libcontainer container ad07eb38ec65434f4ab6ffeb0731290c73622c26118856bdd39b2f4f6ca9cb2f. Apr 16 01:20:24.801384 containerd[1479]: time="2026-04-16T01:20:24.801195349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-z8n4h,Uid:5bfa4575-f3b7-4f5b-a2e9-56bfc7143036,Namespace:calico-system,Attempt:0,} returns sandbox id \"ad07eb38ec65434f4ab6ffeb0731290c73622c26118856bdd39b2f4f6ca9cb2f\"" Apr 16 01:20:25.728367 kubelet[2548]: E0416 01:20:25.728174 2548 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b2mtc" podUID="36e0b1c3-35d3-4f7d-a631-c6ac0e723311" Apr 16 01:20:26.155138 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3627746849.mount: Deactivated successfully. 
Apr 16 01:20:27.727327 kubelet[2548]: E0416 01:20:27.727229 2548 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b2mtc" podUID="36e0b1c3-35d3-4f7d-a631-c6ac0e723311" Apr 16 01:20:28.795944 containerd[1479]: time="2026-04-16T01:20:28.795551146Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:20:28.798069 containerd[1479]: time="2026-04-16T01:20:28.798041038Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Apr 16 01:20:28.801396 containerd[1479]: time="2026-04-16T01:20:28.800903600Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:20:28.806463 containerd[1479]: time="2026-04-16T01:20:28.805996848Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:20:28.807334 containerd[1479]: time="2026-04-16T01:20:28.807064065Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 4.399542815s" Apr 16 01:20:28.807334 containerd[1479]: time="2026-04-16T01:20:28.807253249Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference 
\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Apr 16 01:20:28.809602 containerd[1479]: time="2026-04-16T01:20:28.809200375Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 16 01:20:28.842558 containerd[1479]: time="2026-04-16T01:20:28.842353825Z" level=info msg="CreateContainer within sandbox \"b0c65ec3fc1c1dcec0e208a4f674ce1368cadbe4b4b74d28804b921aa9387ff7\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 16 01:20:28.891510 containerd[1479]: time="2026-04-16T01:20:28.891399595Z" level=info msg="CreateContainer within sandbox \"b0c65ec3fc1c1dcec0e208a4f674ce1368cadbe4b4b74d28804b921aa9387ff7\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"8b25efb8b1ab474c7c1ca95a65c9619a084d810b75c9a8ba0b37367bac724e52\"" Apr 16 01:20:28.893657 containerd[1479]: time="2026-04-16T01:20:28.893471347Z" level=info msg="StartContainer for \"8b25efb8b1ab474c7c1ca95a65c9619a084d810b75c9a8ba0b37367bac724e52\"" Apr 16 01:20:28.988034 systemd[1]: Started cri-containerd-8b25efb8b1ab474c7c1ca95a65c9619a084d810b75c9a8ba0b37367bac724e52.scope - libcontainer container 8b25efb8b1ab474c7c1ca95a65c9619a084d810b75c9a8ba0b37367bac724e52. 
Apr 16 01:20:29.115619 containerd[1479]: time="2026-04-16T01:20:29.115264139Z" level=info msg="StartContainer for \"8b25efb8b1ab474c7c1ca95a65c9619a084d810b75c9a8ba0b37367bac724e52\" returns successfully" Apr 16 01:20:29.342604 kubelet[2548]: E0416 01:20:29.340664 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:20:29.360500 kubelet[2548]: E0416 01:20:29.360476 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:29.360880 kubelet[2548]: W0416 01:20:29.360632 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:29.360880 kubelet[2548]: E0416 01:20:29.360652 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:20:29.361289 kubelet[2548]: E0416 01:20:29.361242 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:29.361289 kubelet[2548]: W0416 01:20:29.361253 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:29.361289 kubelet[2548]: E0416 01:20:29.361262 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:20:29.377634 kubelet[2548]: E0416 01:20:29.377626 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:29.377988 kubelet[2548]: W0416 01:20:29.377910 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:29.377988 kubelet[2548]: E0416 01:20:29.377928 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:20:29.382978 kubelet[2548]: I0416 01:20:29.382479 2548 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-typha-59c555f49f-h8pmz" podStartSLOduration=1.978999588 podStartE2EDuration="6.382454527s" podCreationTimestamp="2026-04-16 01:20:23 +0000 UTC" firstStartedPulling="2026-04-16 01:20:24.405098574 +0000 UTC m=+26.097232860" lastFinishedPulling="2026-04-16 01:20:28.808553516 +0000 UTC m=+30.500687799" observedRunningTime="2026-04-16 01:20:29.381909099 +0000 UTC m=+31.074043385" watchObservedRunningTime="2026-04-16 01:20:29.382454527 +0000 UTC m=+31.074588820" Apr 16 01:20:29.388423 kubelet[2548]: E0416 01:20:29.387998 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:29.388423 kubelet[2548]: W0416 01:20:29.388014 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:29.388423 kubelet[2548]: E0416 01:20:29.388026 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:20:29.413319 kubelet[2548]: E0416 01:20:29.412874 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:29.413319 kubelet[2548]: W0416 01:20:29.412971 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:29.413319 kubelet[2548]: E0416 01:20:29.412982 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:20:29.414277 kubelet[2548]: E0416 01:20:29.413954 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:29.414277 kubelet[2548]: W0416 01:20:29.413965 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:29.414277 kubelet[2548]: E0416 01:20:29.413975 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:20:29.420614 kubelet[2548]: E0416 01:20:29.420291 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:29.420614 kubelet[2548]: W0416 01:20:29.420387 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:29.420614 kubelet[2548]: E0416 01:20:29.420400 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:20:29.423546 kubelet[2548]: E0416 01:20:29.422992 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:29.423546 kubelet[2548]: W0416 01:20:29.423195 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:29.423546 kubelet[2548]: E0416 01:20:29.423216 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:20:29.425351 kubelet[2548]: E0416 01:20:29.425048 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:29.425351 kubelet[2548]: W0416 01:20:29.425212 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:29.425351 kubelet[2548]: E0416 01:20:29.425224 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:20:29.430265 kubelet[2548]: E0416 01:20:29.430196 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:29.430265 kubelet[2548]: W0416 01:20:29.430209 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:29.430265 kubelet[2548]: E0416 01:20:29.430218 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:20:29.432553 kubelet[2548]: E0416 01:20:29.431397 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:29.432553 kubelet[2548]: W0416 01:20:29.431406 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:29.432553 kubelet[2548]: E0416 01:20:29.431414 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:20:29.435327 kubelet[2548]: E0416 01:20:29.435036 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:29.435327 kubelet[2548]: W0416 01:20:29.435212 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:29.435327 kubelet[2548]: E0416 01:20:29.435247 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:20:29.438997 kubelet[2548]: E0416 01:20:29.438522 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:29.438997 kubelet[2548]: W0416 01:20:29.438532 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:29.438997 kubelet[2548]: E0416 01:20:29.438540 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:20:29.439635 kubelet[2548]: E0416 01:20:29.439493 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:29.439635 kubelet[2548]: W0416 01:20:29.439617 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:29.439635 kubelet[2548]: E0416 01:20:29.439629 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:20:29.450211 kubelet[2548]: E0416 01:20:29.449851 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:29.450211 kubelet[2548]: W0416 01:20:29.449948 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:29.450211 kubelet[2548]: E0416 01:20:29.449967 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:20:29.730024 kubelet[2548]: E0416 01:20:29.726865 2548 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b2mtc" podUID="36e0b1c3-35d3-4f7d-a631-c6ac0e723311" Apr 16 01:20:30.342106 kubelet[2548]: E0416 01:20:30.341630 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:20:30.393477 kubelet[2548]: E0416 01:20:30.393009 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:30.393477 kubelet[2548]: W0416 01:20:30.393231 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:30.393477 kubelet[2548]: E0416 01:20:30.393281 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:20:30.398605 kubelet[2548]: E0416 01:20:30.397500 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:30.398605 kubelet[2548]: W0416 01:20:30.397560 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:30.398605 kubelet[2548]: E0416 01:20:30.397617 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:20:30.399470 kubelet[2548]: E0416 01:20:30.399307 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:30.399470 kubelet[2548]: W0416 01:20:30.399323 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:30.399470 kubelet[2548]: E0416 01:20:30.399337 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:20:30.399556 kubelet[2548]: E0416 01:20:30.399487 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:30.399556 kubelet[2548]: W0416 01:20:30.399493 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:30.399556 kubelet[2548]: E0416 01:20:30.399499 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:20:30.400088 kubelet[2548]: E0416 01:20:30.399995 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:30.400088 kubelet[2548]: W0416 01:20:30.400008 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:30.400088 kubelet[2548]: E0416 01:20:30.400019 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:20:30.401345 kubelet[2548]: E0416 01:20:30.401016 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:30.401345 kubelet[2548]: W0416 01:20:30.401028 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:30.401345 kubelet[2548]: E0416 01:20:30.401038 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:20:30.403394 kubelet[2548]: E0416 01:20:30.403332 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:30.403460 kubelet[2548]: W0416 01:20:30.403451 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:30.403496 kubelet[2548]: E0416 01:20:30.403490 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:20:30.408550 kubelet[2548]: E0416 01:20:30.407852 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:30.408550 kubelet[2548]: W0416 01:20:30.407893 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:30.408550 kubelet[2548]: E0416 01:20:30.407932 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:20:30.414563 kubelet[2548]: E0416 01:20:30.414265 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:30.414563 kubelet[2548]: W0416 01:20:30.414428 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:30.414563 kubelet[2548]: E0416 01:20:30.414492 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:20:30.421537 kubelet[2548]: E0416 01:20:30.420998 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:30.421537 kubelet[2548]: W0416 01:20:30.421108 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:30.421537 kubelet[2548]: E0416 01:20:30.421230 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:20:30.422524 kubelet[2548]: E0416 01:20:30.422260 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:30.422524 kubelet[2548]: W0416 01:20:30.422376 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:30.422524 kubelet[2548]: E0416 01:20:30.422391 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:20:30.423971 kubelet[2548]: E0416 01:20:30.423866 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:30.423971 kubelet[2548]: W0416 01:20:30.423966 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:30.423971 kubelet[2548]: E0416 01:20:30.423980 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:20:30.429315 kubelet[2548]: E0416 01:20:30.428627 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:30.429315 kubelet[2548]: W0416 01:20:30.428995 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:30.429315 kubelet[2548]: E0416 01:20:30.429009 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:20:30.436420 kubelet[2548]: E0416 01:20:30.436111 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:30.436420 kubelet[2548]: W0416 01:20:30.436377 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:30.436420 kubelet[2548]: E0416 01:20:30.436448 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:20:30.444059 kubelet[2548]: E0416 01:20:30.443948 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:30.444059 kubelet[2548]: W0416 01:20:30.444057 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:30.444239 kubelet[2548]: E0416 01:20:30.444074 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:20:30.449629 kubelet[2548]: E0416 01:20:30.447077 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:30.449629 kubelet[2548]: W0416 01:20:30.447091 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:30.449629 kubelet[2548]: E0416 01:20:30.447370 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:20:30.450632 kubelet[2548]: E0416 01:20:30.450344 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:30.450632 kubelet[2548]: W0416 01:20:30.450471 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:30.450632 kubelet[2548]: E0416 01:20:30.450484 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:20:30.453840 kubelet[2548]: E0416 01:20:30.453626 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:30.453840 kubelet[2548]: W0416 01:20:30.453638 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:30.453840 kubelet[2548]: E0416 01:20:30.453651 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:20:30.454457 kubelet[2548]: E0416 01:20:30.454372 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:30.454457 kubelet[2548]: W0416 01:20:30.454380 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:30.454457 kubelet[2548]: E0416 01:20:30.454389 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:20:30.455222 kubelet[2548]: E0416 01:20:30.454538 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:30.455222 kubelet[2548]: W0416 01:20:30.454546 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:30.455222 kubelet[2548]: E0416 01:20:30.454553 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:20:30.455317 kubelet[2548]: E0416 01:20:30.455246 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:30.455317 kubelet[2548]: W0416 01:20:30.455253 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:30.455317 kubelet[2548]: E0416 01:20:30.455260 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:20:30.455446 kubelet[2548]: E0416 01:20:30.455369 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:30.455446 kubelet[2548]: W0416 01:20:30.455373 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:30.455446 kubelet[2548]: E0416 01:20:30.455380 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:20:30.455496 kubelet[2548]: E0416 01:20:30.455471 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:30.455496 kubelet[2548]: W0416 01:20:30.455475 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:30.455496 kubelet[2548]: E0416 01:20:30.455480 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:20:30.456749 kubelet[2548]: E0416 01:20:30.455578 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:30.456749 kubelet[2548]: W0416 01:20:30.455585 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:30.456749 kubelet[2548]: E0416 01:20:30.455590 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:20:30.456749 kubelet[2548]: E0416 01:20:30.456269 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:30.456749 kubelet[2548]: W0416 01:20:30.456276 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:30.456749 kubelet[2548]: E0416 01:20:30.456283 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:20:30.459937 kubelet[2548]: E0416 01:20:30.459920 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:30.460030 kubelet[2548]: W0416 01:20:30.460001 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:30.460030 kubelet[2548]: E0416 01:20:30.460018 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:20:30.461822 kubelet[2548]: E0416 01:20:30.461491 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:30.461822 kubelet[2548]: W0416 01:20:30.461501 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:30.461822 kubelet[2548]: E0416 01:20:30.461510 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:20:30.462252 kubelet[2548]: E0416 01:20:30.462241 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:30.462373 kubelet[2548]: W0416 01:20:30.462297 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:30.462373 kubelet[2548]: E0416 01:20:30.462310 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:20:30.463341 kubelet[2548]: E0416 01:20:30.463333 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:30.463390 kubelet[2548]: W0416 01:20:30.463384 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:30.463419 kubelet[2548]: E0416 01:20:30.463414 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:20:30.464645 kubelet[2548]: E0416 01:20:30.464635 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:30.465347 kubelet[2548]: W0416 01:20:30.465318 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:30.465347 kubelet[2548]: E0416 01:20:30.465330 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:20:30.468632 kubelet[2548]: E0416 01:20:30.468527 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:30.469484 kubelet[2548]: W0416 01:20:30.469265 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:30.469484 kubelet[2548]: E0416 01:20:30.469286 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:20:30.477082 kubelet[2548]: E0416 01:20:30.475555 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:30.477082 kubelet[2548]: W0416 01:20:30.475600 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:30.477082 kubelet[2548]: E0416 01:20:30.475637 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 01:20:30.477082 kubelet[2548]: E0416 01:20:30.476981 2548 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 01:20:30.477082 kubelet[2548]: W0416 01:20:30.476996 2548 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 01:20:30.477082 kubelet[2548]: E0416 01:20:30.477010 2548 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 01:20:30.965661 containerd[1479]: time="2026-04-16T01:20:30.965090228Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:20:30.967567 containerd[1479]: time="2026-04-16T01:20:30.967448942Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Apr 16 01:20:30.971022 containerd[1479]: time="2026-04-16T01:20:30.970590697Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:20:30.978628 containerd[1479]: time="2026-04-16T01:20:30.978433643Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:20:30.981402 containerd[1479]: time="2026-04-16T01:20:30.981279964Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 2.171918864s" Apr 16 01:20:30.981475 containerd[1479]: time="2026-04-16T01:20:30.981404003Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Apr 16 01:20:31.002260 containerd[1479]: time="2026-04-16T01:20:31.001537833Z" level=info msg="CreateContainer within sandbox \"ad07eb38ec65434f4ab6ffeb0731290c73622c26118856bdd39b2f4f6ca9cb2f\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 16 01:20:31.053613 containerd[1479]: time="2026-04-16T01:20:31.053367620Z" level=info msg="CreateContainer within sandbox \"ad07eb38ec65434f4ab6ffeb0731290c73622c26118856bdd39b2f4f6ca9cb2f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"90d2fe2d82adb2c700dd435667a9a091d2d46ac71c2f3488c12a9df892b8d4a8\"" Apr 16 01:20:31.056906 containerd[1479]: time="2026-04-16T01:20:31.054907340Z" level=info msg="StartContainer for \"90d2fe2d82adb2c700dd435667a9a091d2d46ac71c2f3488c12a9df892b8d4a8\"" Apr 16 01:20:31.169556 systemd[1]: Started cri-containerd-90d2fe2d82adb2c700dd435667a9a091d2d46ac71c2f3488c12a9df892b8d4a8.scope - libcontainer container 90d2fe2d82adb2c700dd435667a9a091d2d46ac71c2f3488c12a9df892b8d4a8. Apr 16 01:20:31.313952 containerd[1479]: time="2026-04-16T01:20:31.313047426Z" level=info msg="StartContainer for \"90d2fe2d82adb2c700dd435667a9a091d2d46ac71c2f3488c12a9df892b8d4a8\" returns successfully" Apr 16 01:20:31.352101 systemd[1]: cri-containerd-90d2fe2d82adb2c700dd435667a9a091d2d46ac71c2f3488c12a9df892b8d4a8.scope: Deactivated successfully. Apr 16 01:20:31.352903 kubelet[2548]: E0416 01:20:31.352495 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:20:31.475588 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-90d2fe2d82adb2c700dd435667a9a091d2d46ac71c2f3488c12a9df892b8d4a8-rootfs.mount: Deactivated successfully. 
Apr 16 01:20:31.491416 containerd[1479]: time="2026-04-16T01:20:31.491154141Z" level=info msg="shim disconnected" id=90d2fe2d82adb2c700dd435667a9a091d2d46ac71c2f3488c12a9df892b8d4a8 namespace=k8s.io Apr 16 01:20:31.491416 containerd[1479]: time="2026-04-16T01:20:31.491358454Z" level=warning msg="cleaning up after shim disconnected" id=90d2fe2d82adb2c700dd435667a9a091d2d46ac71c2f3488c12a9df892b8d4a8 namespace=k8s.io Apr 16 01:20:31.491416 containerd[1479]: time="2026-04-16T01:20:31.491366596Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 16 01:20:31.726920 kubelet[2548]: E0416 01:20:31.726569 2548 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b2mtc" podUID="36e0b1c3-35d3-4f7d-a631-c6ac0e723311" Apr 16 01:20:32.396668 containerd[1479]: time="2026-04-16T01:20:32.396521442Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 16 01:20:33.727037 kubelet[2548]: E0416 01:20:33.726994 2548 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b2mtc" podUID="36e0b1c3-35d3-4f7d-a631-c6ac0e723311" Apr 16 01:20:35.727083 kubelet[2548]: E0416 01:20:35.726517 2548 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b2mtc" podUID="36e0b1c3-35d3-4f7d-a631-c6ac0e723311" Apr 16 01:20:37.728144 kubelet[2548]: E0416 01:20:37.726521 2548 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b2mtc" podUID="36e0b1c3-35d3-4f7d-a631-c6ac0e723311" Apr 16 01:20:39.729605 kubelet[2548]: E0416 01:20:39.727880 2548 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b2mtc" podUID="36e0b1c3-35d3-4f7d-a631-c6ac0e723311" Apr 16 01:20:41.727932 kubelet[2548]: E0416 01:20:41.727032 2548 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b2mtc" podUID="36e0b1c3-35d3-4f7d-a631-c6ac0e723311" Apr 16 01:20:43.728116 kubelet[2548]: E0416 01:20:43.727317 2548 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b2mtc" podUID="36e0b1c3-35d3-4f7d-a631-c6ac0e723311" Apr 16 01:20:45.728217 kubelet[2548]: E0416 01:20:45.727849 2548 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b2mtc" podUID="36e0b1c3-35d3-4f7d-a631-c6ac0e723311" Apr 16 01:20:47.728982 kubelet[2548]: E0416 01:20:47.728209 2548 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b2mtc" podUID="36e0b1c3-35d3-4f7d-a631-c6ac0e723311" Apr 16 01:20:48.585383 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3523389684.mount: Deactivated successfully. Apr 16 01:20:48.664574 containerd[1479]: time="2026-04-16T01:20:48.664356366Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:20:48.666217 containerd[1479]: time="2026-04-16T01:20:48.666093829Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Apr 16 01:20:48.682956 containerd[1479]: time="2026-04-16T01:20:48.681953186Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:20:48.690194 containerd[1479]: time="2026-04-16T01:20:48.690083679Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:20:48.692958 containerd[1479]: time="2026-04-16T01:20:48.692514386Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 16.295874404s" Apr 16 01:20:48.692958 containerd[1479]: time="2026-04-16T01:20:48.692553504Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Apr 16 01:20:48.709907 containerd[1479]: time="2026-04-16T01:20:48.709251276Z" 
level=info msg="CreateContainer within sandbox \"ad07eb38ec65434f4ab6ffeb0731290c73622c26118856bdd39b2f4f6ca9cb2f\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 16 01:20:49.085208 containerd[1479]: time="2026-04-16T01:20:49.084496166Z" level=info msg="CreateContainer within sandbox \"ad07eb38ec65434f4ab6ffeb0731290c73622c26118856bdd39b2f4f6ca9cb2f\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"acfb7971a9358d90ebe4309e97820176c5a7efc2e196db2cc5ee1c0b55082036\"" Apr 16 01:20:49.088407 containerd[1479]: time="2026-04-16T01:20:49.087389276Z" level=info msg="StartContainer for \"acfb7971a9358d90ebe4309e97820176c5a7efc2e196db2cc5ee1c0b55082036\"" Apr 16 01:20:49.392057 systemd[1]: Started cri-containerd-acfb7971a9358d90ebe4309e97820176c5a7efc2e196db2cc5ee1c0b55082036.scope - libcontainer container acfb7971a9358d90ebe4309e97820176c5a7efc2e196db2cc5ee1c0b55082036. Apr 16 01:20:49.531413 containerd[1479]: time="2026-04-16T01:20:49.531367243Z" level=info msg="StartContainer for \"acfb7971a9358d90ebe4309e97820176c5a7efc2e196db2cc5ee1c0b55082036\" returns successfully" Apr 16 01:20:49.588905 systemd[1]: run-containerd-runc-k8s.io-acfb7971a9358d90ebe4309e97820176c5a7efc2e196db2cc5ee1c0b55082036-runc.6xUb3V.mount: Deactivated successfully. Apr 16 01:20:49.705288 systemd[1]: cri-containerd-acfb7971a9358d90ebe4309e97820176c5a7efc2e196db2cc5ee1c0b55082036.scope: Deactivated successfully. 
Apr 16 01:20:49.729187 kubelet[2548]: E0416 01:20:49.728224 2548 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b2mtc" podUID="36e0b1c3-35d3-4f7d-a631-c6ac0e723311" Apr 16 01:20:49.792255 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-acfb7971a9358d90ebe4309e97820176c5a7efc2e196db2cc5ee1c0b55082036-rootfs.mount: Deactivated successfully. Apr 16 01:20:49.817529 containerd[1479]: time="2026-04-16T01:20:49.816273602Z" level=info msg="shim disconnected" id=acfb7971a9358d90ebe4309e97820176c5a7efc2e196db2cc5ee1c0b55082036 namespace=k8s.io Apr 16 01:20:49.817529 containerd[1479]: time="2026-04-16T01:20:49.816330912Z" level=warning msg="cleaning up after shim disconnected" id=acfb7971a9358d90ebe4309e97820176c5a7efc2e196db2cc5ee1c0b55082036 namespace=k8s.io Apr 16 01:20:49.817529 containerd[1479]: time="2026-04-16T01:20:49.816337628Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 16 01:20:50.577570 containerd[1479]: time="2026-04-16T01:20:50.577114623Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 16 01:20:51.727879 kubelet[2548]: E0416 01:20:51.727375 2548 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b2mtc" podUID="36e0b1c3-35d3-4f7d-a631-c6ac0e723311" Apr 16 01:20:53.729068 kubelet[2548]: E0416 01:20:53.726923 2548 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b2mtc" 
podUID="36e0b1c3-35d3-4f7d-a631-c6ac0e723311" Apr 16 01:20:55.726972 kubelet[2548]: E0416 01:20:55.726552 2548 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b2mtc" podUID="36e0b1c3-35d3-4f7d-a631-c6ac0e723311" Apr 16 01:20:57.728053 kubelet[2548]: E0416 01:20:57.727399 2548 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b2mtc" podUID="36e0b1c3-35d3-4f7d-a631-c6ac0e723311" Apr 16 01:20:59.635494 containerd[1479]: time="2026-04-16T01:20:59.635042816Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Apr 16 01:20:59.659094 containerd[1479]: time="2026-04-16T01:20:59.659034792Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:20:59.661896 containerd[1479]: time="2026-04-16T01:20:59.661441495Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:20:59.662356 containerd[1479]: time="2026-04-16T01:20:59.662232140Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:20:59.667084 containerd[1479]: time="2026-04-16T01:20:59.666529896Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag 
\"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 9.089348233s" Apr 16 01:20:59.667084 containerd[1479]: time="2026-04-16T01:20:59.666963898Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Apr 16 01:20:59.695337 containerd[1479]: time="2026-04-16T01:20:59.694517293Z" level=info msg="CreateContainer within sandbox \"ad07eb38ec65434f4ab6ffeb0731290c73622c26118856bdd39b2f4f6ca9cb2f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 16 01:20:59.727316 kubelet[2548]: E0416 01:20:59.726468 2548 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b2mtc" podUID="36e0b1c3-35d3-4f7d-a631-c6ac0e723311" Apr 16 01:20:59.739537 containerd[1479]: time="2026-04-16T01:20:59.735597959Z" level=info msg="CreateContainer within sandbox \"ad07eb38ec65434f4ab6ffeb0731290c73622c26118856bdd39b2f4f6ca9cb2f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"75d6a432b9f1c6fcf8a0ba139dbf10ef409294b7d136f7f45ae44d0205362739\"" Apr 16 01:20:59.741109 containerd[1479]: time="2026-04-16T01:20:59.741045291Z" level=info msg="StartContainer for \"75d6a432b9f1c6fcf8a0ba139dbf10ef409294b7d136f7f45ae44d0205362739\"" Apr 16 01:20:59.865192 systemd[1]: Started cri-containerd-75d6a432b9f1c6fcf8a0ba139dbf10ef409294b7d136f7f45ae44d0205362739.scope - libcontainer container 75d6a432b9f1c6fcf8a0ba139dbf10ef409294b7d136f7f45ae44d0205362739. 
Apr 16 01:20:59.987593 containerd[1479]: time="2026-04-16T01:20:59.987121287Z" level=info msg="StartContainer for \"75d6a432b9f1c6fcf8a0ba139dbf10ef409294b7d136f7f45ae44d0205362739\" returns successfully" Apr 16 01:21:01.406056 systemd[1]: cri-containerd-75d6a432b9f1c6fcf8a0ba139dbf10ef409294b7d136f7f45ae44d0205362739.scope: Deactivated successfully. Apr 16 01:21:01.406337 systemd[1]: cri-containerd-75d6a432b9f1c6fcf8a0ba139dbf10ef409294b7d136f7f45ae44d0205362739.scope: Consumed 1.939s CPU time. Apr 16 01:21:01.468045 kubelet[2548]: I0416 01:21:01.467509 2548 kubelet_node_status.go:427] "Fast updating node status as it just became ready" Apr 16 01:21:01.521983 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-75d6a432b9f1c6fcf8a0ba139dbf10ef409294b7d136f7f45ae44d0205362739-rootfs.mount: Deactivated successfully. Apr 16 01:21:01.559130 containerd[1479]: time="2026-04-16T01:21:01.558513732Z" level=info msg="shim disconnected" id=75d6a432b9f1c6fcf8a0ba139dbf10ef409294b7d136f7f45ae44d0205362739 namespace=k8s.io Apr 16 01:21:01.559130 containerd[1479]: time="2026-04-16T01:21:01.559021271Z" level=warning msg="cleaning up after shim disconnected" id=75d6a432b9f1c6fcf8a0ba139dbf10ef409294b7d136f7f45ae44d0205362739 namespace=k8s.io Apr 16 01:21:01.559130 containerd[1479]: time="2026-04-16T01:21:01.559034905Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 16 01:21:01.740365 kubelet[2548]: I0416 01:21:01.736208 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/394e94ea-67c5-464b-b3aa-188a9710b888-config\") pod \"goldmane-9f7667bb8-t5lq8\" (UID: \"394e94ea-67c5-464b-b3aa-188a9710b888\") " pod="calico-system/goldmane-9f7667bb8-t5lq8" Apr 16 01:21:01.740365 kubelet[2548]: I0416 01:21:01.736362 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fw28\" (UniqueName: 
\"kubernetes.io/projected/925d015b-3f96-4cb8-a9c1-64b9f1a67c52-kube-api-access-6fw28\") pod \"coredns-7d764666f9-kv5bx\" (UID: \"925d015b-3f96-4cb8-a9c1-64b9f1a67c52\") " pod="kube-system/coredns-7d764666f9-kv5bx" Apr 16 01:21:01.740365 kubelet[2548]: I0416 01:21:01.736383 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwbxr\" (UniqueName: \"kubernetes.io/projected/822d1a89-65ea-4c5d-be46-0a8d4c12be69-kube-api-access-mwbxr\") pod \"calico-kube-controllers-67f84cb8bf-5s4sf\" (UID: \"822d1a89-65ea-4c5d-be46-0a8d4c12be69\") " pod="calico-system/calico-kube-controllers-67f84cb8bf-5s4sf" Apr 16 01:21:01.740365 kubelet[2548]: I0416 01:21:01.736398 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/394e94ea-67c5-464b-b3aa-188a9710b888-goldmane-ca-bundle\") pod \"goldmane-9f7667bb8-t5lq8\" (UID: \"394e94ea-67c5-464b-b3aa-188a9710b888\") " pod="calico-system/goldmane-9f7667bb8-t5lq8" Apr 16 01:21:01.740365 kubelet[2548]: I0416 01:21:01.736409 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v45mh\" (UniqueName: \"kubernetes.io/projected/394e94ea-67c5-464b-b3aa-188a9710b888-kube-api-access-v45mh\") pod \"goldmane-9f7667bb8-t5lq8\" (UID: \"394e94ea-67c5-464b-b3aa-188a9710b888\") " pod="calico-system/goldmane-9f7667bb8-t5lq8" Apr 16 01:21:01.742198 kubelet[2548]: I0416 01:21:01.736421 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/822d1a89-65ea-4c5d-be46-0a8d4c12be69-tigera-ca-bundle\") pod \"calico-kube-controllers-67f84cb8bf-5s4sf\" (UID: \"822d1a89-65ea-4c5d-be46-0a8d4c12be69\") " pod="calico-system/calico-kube-controllers-67f84cb8bf-5s4sf" Apr 16 01:21:01.742198 kubelet[2548]: I0416 01:21:01.736450 2548 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7dc182c6-b5fb-40af-9b7a-573725ca7063-whisker-backend-key-pair\") pod \"whisker-7cdb99654c-h52cj\" (UID: \"7dc182c6-b5fb-40af-9b7a-573725ca7063\") " pod="calico-system/whisker-7cdb99654c-h52cj" Apr 16 01:21:01.742198 kubelet[2548]: I0416 01:21:01.736463 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/7dc182c6-b5fb-40af-9b7a-573725ca7063-nginx-config\") pod \"whisker-7cdb99654c-h52cj\" (UID: \"7dc182c6-b5fb-40af-9b7a-573725ca7063\") " pod="calico-system/whisker-7cdb99654c-h52cj" Apr 16 01:21:01.742198 kubelet[2548]: I0416 01:21:01.736474 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/794b6563-cebd-4102-a96c-e20a75901f97-calico-apiserver-certs\") pod \"calico-apiserver-8584db774c-nfctt\" (UID: \"794b6563-cebd-4102-a96c-e20a75901f97\") " pod="calico-system/calico-apiserver-8584db774c-nfctt" Apr 16 01:21:01.742198 kubelet[2548]: I0416 01:21:01.736488 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/925d015b-3f96-4cb8-a9c1-64b9f1a67c52-config-volume\") pod \"coredns-7d764666f9-kv5bx\" (UID: \"925d015b-3f96-4cb8-a9c1-64b9f1a67c52\") " pod="kube-system/coredns-7d764666f9-kv5bx" Apr 16 01:21:01.742317 kubelet[2548]: I0416 01:21:01.736500 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmw7k\" (UniqueName: \"kubernetes.io/projected/7dc182c6-b5fb-40af-9b7a-573725ca7063-kube-api-access-wmw7k\") pod \"whisker-7cdb99654c-h52cj\" (UID: \"7dc182c6-b5fb-40af-9b7a-573725ca7063\") " pod="calico-system/whisker-7cdb99654c-h52cj" Apr 
16 01:21:01.746422 kubelet[2548]: I0416 01:21:01.736537 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7dc182c6-b5fb-40af-9b7a-573725ca7063-whisker-ca-bundle\") pod \"whisker-7cdb99654c-h52cj\" (UID: \"7dc182c6-b5fb-40af-9b7a-573725ca7063\") " pod="calico-system/whisker-7cdb99654c-h52cj" Apr 16 01:21:01.746422 kubelet[2548]: I0416 01:21:01.746110 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxzr9\" (UniqueName: \"kubernetes.io/projected/794b6563-cebd-4102-a96c-e20a75901f97-kube-api-access-dxzr9\") pod \"calico-apiserver-8584db774c-nfctt\" (UID: \"794b6563-cebd-4102-a96c-e20a75901f97\") " pod="calico-system/calico-apiserver-8584db774c-nfctt" Apr 16 01:21:01.746422 kubelet[2548]: I0416 01:21:01.746151 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/394e94ea-67c5-464b-b3aa-188a9710b888-goldmane-key-pair\") pod \"goldmane-9f7667bb8-t5lq8\" (UID: \"394e94ea-67c5-464b-b3aa-188a9710b888\") " pod="calico-system/goldmane-9f7667bb8-t5lq8" Apr 16 01:21:01.749030 systemd[1]: Created slice kubepods-besteffort-pod822d1a89_65ea_4c5d_be46_0a8d4c12be69.slice - libcontainer container kubepods-besteffort-pod822d1a89_65ea_4c5d_be46_0a8d4c12be69.slice. Apr 16 01:21:01.765172 systemd[1]: Created slice kubepods-besteffort-pod794b6563_cebd_4102_a96c_e20a75901f97.slice - libcontainer container kubepods-besteffort-pod794b6563_cebd_4102_a96c_e20a75901f97.slice. Apr 16 01:21:01.790297 systemd[1]: Created slice kubepods-burstable-pod925d015b_3f96_4cb8_a9c1_64b9f1a67c52.slice - libcontainer container kubepods-burstable-pod925d015b_3f96_4cb8_a9c1_64b9f1a67c52.slice. 
Apr 16 01:21:01.817322 systemd[1]: Created slice kubepods-besteffort-pod7dc182c6_b5fb_40af_9b7a_573725ca7063.slice - libcontainer container kubepods-besteffort-pod7dc182c6_b5fb_40af_9b7a_573725ca7063.slice. Apr 16 01:21:01.839497 systemd[1]: Created slice kubepods-besteffort-pod394e94ea_67c5_464b_b3aa_188a9710b888.slice - libcontainer container kubepods-besteffort-pod394e94ea_67c5_464b_b3aa_188a9710b888.slice. Apr 16 01:21:01.847549 kubelet[2548]: I0416 01:21:01.847256 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b5113220-c80b-4e1b-afea-3fb7f3d652bc-calico-apiserver-certs\") pod \"calico-apiserver-8584db774c-7qbkw\" (UID: \"b5113220-c80b-4e1b-afea-3fb7f3d652bc\") " pod="calico-system/calico-apiserver-8584db774c-7qbkw" Apr 16 01:21:01.847549 kubelet[2548]: I0416 01:21:01.847431 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzrxt\" (UniqueName: \"kubernetes.io/projected/b5113220-c80b-4e1b-afea-3fb7f3d652bc-kube-api-access-gzrxt\") pod \"calico-apiserver-8584db774c-7qbkw\" (UID: \"b5113220-c80b-4e1b-afea-3fb7f3d652bc\") " pod="calico-system/calico-apiserver-8584db774c-7qbkw" Apr 16 01:21:01.856082 kubelet[2548]: I0416 01:21:01.852571 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2hp9\" (UniqueName: \"kubernetes.io/projected/9471422e-1a11-4a68-a210-42dc7d4df58a-kube-api-access-q2hp9\") pod \"coredns-7d764666f9-ss6nt\" (UID: \"9471422e-1a11-4a68-a210-42dc7d4df58a\") " pod="kube-system/coredns-7d764666f9-ss6nt" Apr 16 01:21:01.856082 kubelet[2548]: I0416 01:21:01.853046 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9471422e-1a11-4a68-a210-42dc7d4df58a-config-volume\") pod \"coredns-7d764666f9-ss6nt\" 
(UID: \"9471422e-1a11-4a68-a210-42dc7d4df58a\") " pod="kube-system/coredns-7d764666f9-ss6nt" Apr 16 01:21:01.881279 systemd[1]: Created slice kubepods-besteffort-podb5113220_c80b_4e1b_afea_3fb7f3d652bc.slice - libcontainer container kubepods-besteffort-podb5113220_c80b_4e1b_afea_3fb7f3d652bc.slice. Apr 16 01:21:01.930459 systemd[1]: Created slice kubepods-burstable-pod9471422e_1a11_4a68_a210_42dc7d4df58a.slice - libcontainer container kubepods-burstable-pod9471422e_1a11_4a68_a210_42dc7d4df58a.slice. Apr 16 01:21:01.994235 systemd[1]: Created slice kubepods-besteffort-pod36e0b1c3_35d3_4f7d_a631_c6ac0e723311.slice - libcontainer container kubepods-besteffort-pod36e0b1c3_35d3_4f7d_a631_c6ac0e723311.slice. Apr 16 01:21:02.030987 containerd[1479]: time="2026-04-16T01:21:02.030424672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b2mtc,Uid:36e0b1c3-35d3-4f7d-a631-c6ac0e723311,Namespace:calico-system,Attempt:0,}" Apr 16 01:21:02.068932 containerd[1479]: time="2026-04-16T01:21:02.068249583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67f84cb8bf-5s4sf,Uid:822d1a89-65ea-4c5d-be46-0a8d4c12be69,Namespace:calico-system,Attempt:0,}" Apr 16 01:21:02.089224 containerd[1479]: time="2026-04-16T01:21:02.088379858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8584db774c-nfctt,Uid:794b6563-cebd-4102-a96c-e20a75901f97,Namespace:calico-system,Attempt:0,}" Apr 16 01:21:02.111980 kubelet[2548]: E0416 01:21:02.111570 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:21:02.122561 containerd[1479]: time="2026-04-16T01:21:02.119568139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-kv5bx,Uid:925d015b-3f96-4cb8-a9c1-64b9f1a67c52,Namespace:kube-system,Attempt:0,}" Apr 16 01:21:02.143369 containerd[1479]: 
time="2026-04-16T01:21:02.142858407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7cdb99654c-h52cj,Uid:7dc182c6-b5fb-40af-9b7a-573725ca7063,Namespace:calico-system,Attempt:0,}"
Apr 16 01:21:02.169202 containerd[1479]: time="2026-04-16T01:21:02.166431975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-t5lq8,Uid:394e94ea-67c5-464b-b3aa-188a9710b888,Namespace:calico-system,Attempt:0,}"
Apr 16 01:21:02.213525 containerd[1479]: time="2026-04-16T01:21:02.213119918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8584db774c-7qbkw,Uid:b5113220-c80b-4e1b-afea-3fb7f3d652bc,Namespace:calico-system,Attempt:0,}"
Apr 16 01:21:02.264461 kubelet[2548]: E0416 01:21:02.263233 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 01:21:02.316986 containerd[1479]: time="2026-04-16T01:21:02.316931218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-ss6nt,Uid:9471422e-1a11-4a68-a210-42dc7d4df58a,Namespace:kube-system,Attempt:0,}"
Apr 16 01:21:02.928849 containerd[1479]: time="2026-04-16T01:21:02.927430404Z" level=info msg="CreateContainer within sandbox \"ad07eb38ec65434f4ab6ffeb0731290c73622c26118856bdd39b2f4f6ca9cb2f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Apr 16 01:21:03.048876 containerd[1479]: time="2026-04-16T01:21:03.048511975Z" level=info msg="CreateContainer within sandbox \"ad07eb38ec65434f4ab6ffeb0731290c73622c26118856bdd39b2f4f6ca9cb2f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d5e02013b3c46f1e812134a7b7a4843b0ad370c9fa7e8a9681876e6ad9ab5c7a\""
Apr 16 01:21:03.062119 containerd[1479]: time="2026-04-16T01:21:03.059961525Z" level=info msg="StartContainer for \"d5e02013b3c46f1e812134a7b7a4843b0ad370c9fa7e8a9681876e6ad9ab5c7a\""
Apr 16 01:21:03.095824 containerd[1479]: time="2026-04-16T01:21:03.092653872Z" level=error msg="Failed to destroy network for sandbox \"a59e0e2c5772fb35384b79900afb83bb329d28e9084a0357b45aa02a43a113e8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 01:21:03.096410 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a59e0e2c5772fb35384b79900afb83bb329d28e9084a0357b45aa02a43a113e8-shm.mount: Deactivated successfully.
Apr 16 01:21:03.107401 containerd[1479]: time="2026-04-16T01:21:03.107231451Z" level=error msg="encountered an error cleaning up failed sandbox \"a59e0e2c5772fb35384b79900afb83bb329d28e9084a0357b45aa02a43a113e8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 01:21:03.107401 containerd[1479]: time="2026-04-16T01:21:03.107427256Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67f84cb8bf-5s4sf,Uid:822d1a89-65ea-4c5d-be46-0a8d4c12be69,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a59e0e2c5772fb35384b79900afb83bb329d28e9084a0357b45aa02a43a113e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 01:21:03.115276 containerd[1479]: time="2026-04-16T01:21:03.114517131Z" level=error msg="Failed to destroy network for sandbox \"464e17545cf496fb669888aabb34a1a43a7a42ae4211455463950faf7373af64\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 01:21:03.121882 containerd[1479]: time="2026-04-16T01:21:03.118124650Z" level=error msg="encountered an error cleaning up failed sandbox \"464e17545cf496fb669888aabb34a1a43a7a42ae4211455463950faf7373af64\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 01:21:03.121882 containerd[1479]: time="2026-04-16T01:21:03.118202975Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-kv5bx,Uid:925d015b-3f96-4cb8-a9c1-64b9f1a67c52,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"464e17545cf496fb669888aabb34a1a43a7a42ae4211455463950faf7373af64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 01:21:03.128848 containerd[1479]: time="2026-04-16T01:21:03.128165867Z" level=error msg="Failed to destroy network for sandbox \"21e8ff37ca63ead3bf838e9da5da31fa764cfb74c3028b6482f0b847ac03c49e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 01:21:03.133840 containerd[1479]: time="2026-04-16T01:21:03.133540975Z" level=error msg="encountered an error cleaning up failed sandbox \"21e8ff37ca63ead3bf838e9da5da31fa764cfb74c3028b6482f0b847ac03c49e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 01:21:03.136908 containerd[1479]: time="2026-04-16T01:21:03.135005628Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b2mtc,Uid:36e0b1c3-35d3-4f7d-a631-c6ac0e723311,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"21e8ff37ca63ead3bf838e9da5da31fa764cfb74c3028b6482f0b847ac03c49e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 01:21:03.150805 kubelet[2548]: E0416 01:21:03.147517 2548 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21e8ff37ca63ead3bf838e9da5da31fa764cfb74c3028b6482f0b847ac03c49e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 01:21:03.150805 kubelet[2548]: E0416 01:21:03.148381 2548 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21e8ff37ca63ead3bf838e9da5da31fa764cfb74c3028b6482f0b847ac03c49e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-b2mtc"
Apr 16 01:21:03.150805 kubelet[2548]: E0416 01:21:03.148482 2548 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21e8ff37ca63ead3bf838e9da5da31fa764cfb74c3028b6482f0b847ac03c49e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-b2mtc"
Apr 16 01:21:03.153852 kubelet[2548]: E0416 01:21:03.148885 2548 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-b2mtc_calico-system(36e0b1c3-35d3-4f7d-a631-c6ac0e723311)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-b2mtc_calico-system(36e0b1c3-35d3-4f7d-a631-c6ac0e723311)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"21e8ff37ca63ead3bf838e9da5da31fa764cfb74c3028b6482f0b847ac03c49e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-b2mtc" podUID="36e0b1c3-35d3-4f7d-a631-c6ac0e723311"
Apr 16 01:21:03.153852 kubelet[2548]: E0416 01:21:03.149460 2548 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a59e0e2c5772fb35384b79900afb83bb329d28e9084a0357b45aa02a43a113e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 01:21:03.153852 kubelet[2548]: E0416 01:21:03.149498 2548 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a59e0e2c5772fb35384b79900afb83bb329d28e9084a0357b45aa02a43a113e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67f84cb8bf-5s4sf"
Apr 16 01:21:03.154122 kubelet[2548]: E0416 01:21:03.149513 2548 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a59e0e2c5772fb35384b79900afb83bb329d28e9084a0357b45aa02a43a113e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67f84cb8bf-5s4sf"
Apr 16 01:21:03.154122 kubelet[2548]: E0416 01:21:03.149977 2548 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"464e17545cf496fb669888aabb34a1a43a7a42ae4211455463950faf7373af64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 01:21:03.154122 kubelet[2548]: E0416 01:21:03.150056 2548 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"464e17545cf496fb669888aabb34a1a43a7a42ae4211455463950faf7373af64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-kv5bx"
Apr 16 01:21:03.154122 kubelet[2548]: E0416 01:21:03.150076 2548 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"464e17545cf496fb669888aabb34a1a43a7a42ae4211455463950faf7373af64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-kv5bx"
Apr 16 01:21:03.154227 kubelet[2548]: E0416 01:21:03.150114 2548 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-kv5bx_kube-system(925d015b-3f96-4cb8-a9c1-64b9f1a67c52)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-kv5bx_kube-system(925d015b-3f96-4cb8-a9c1-64b9f1a67c52)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"464e17545cf496fb669888aabb34a1a43a7a42ae4211455463950faf7373af64\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-kv5bx" podUID="925d015b-3f96-4cb8-a9c1-64b9f1a67c52"
Apr 16 01:21:03.154227 kubelet[2548]: E0416 01:21:03.151908 2548 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-67f84cb8bf-5s4sf_calico-system(822d1a89-65ea-4c5d-be46-0a8d4c12be69)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-67f84cb8bf-5s4sf_calico-system(822d1a89-65ea-4c5d-be46-0a8d4c12be69)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a59e0e2c5772fb35384b79900afb83bb329d28e9084a0357b45aa02a43a113e8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-67f84cb8bf-5s4sf" podUID="822d1a89-65ea-4c5d-be46-0a8d4c12be69"
Apr 16 01:21:03.363068 containerd[1479]: time="2026-04-16T01:21:03.362448425Z" level=error msg="Failed to destroy network for sandbox \"dc960b7b761a84e0f5415d6c72b400955d208003b44ac8de1a397a445a12a53d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 01:21:03.363971 containerd[1479]: time="2026-04-16T01:21:03.363294049Z" level=error msg="encountered an error cleaning up failed sandbox \"dc960b7b761a84e0f5415d6c72b400955d208003b44ac8de1a397a445a12a53d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 01:21:03.363971 containerd[1479]: time="2026-04-16T01:21:03.363444504Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8584db774c-nfctt,Uid:794b6563-cebd-4102-a96c-e20a75901f97,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dc960b7b761a84e0f5415d6c72b400955d208003b44ac8de1a397a445a12a53d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 01:21:03.364995 kubelet[2548]: E0416 01:21:03.364193 2548 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc960b7b761a84e0f5415d6c72b400955d208003b44ac8de1a397a445a12a53d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 01:21:03.364995 kubelet[2548]: E0416 01:21:03.364246 2548 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc960b7b761a84e0f5415d6c72b400955d208003b44ac8de1a397a445a12a53d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-8584db774c-nfctt"
Apr 16 01:21:03.364995 kubelet[2548]: E0416 01:21:03.364261 2548 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc960b7b761a84e0f5415d6c72b400955d208003b44ac8de1a397a445a12a53d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-8584db774c-nfctt"
Apr 16 01:21:03.365119 kubelet[2548]: E0416 01:21:03.364308 2548 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8584db774c-nfctt_calico-system(794b6563-cebd-4102-a96c-e20a75901f97)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8584db774c-nfctt_calico-system(794b6563-cebd-4102-a96c-e20a75901f97)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dc960b7b761a84e0f5415d6c72b400955d208003b44ac8de1a397a445a12a53d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-8584db774c-nfctt" podUID="794b6563-cebd-4102-a96c-e20a75901f97"
Apr 16 01:21:03.367413 containerd[1479]: time="2026-04-16T01:21:03.367189386Z" level=error msg="Failed to destroy network for sandbox \"21724c0444c3fa8f853d78fa70a0c51b65afd1017ab2d62e583920b630791d2c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 01:21:03.370478 containerd[1479]: time="2026-04-16T01:21:03.368847803Z" level=error msg="encountered an error cleaning up failed sandbox \"21724c0444c3fa8f853d78fa70a0c51b65afd1017ab2d62e583920b630791d2c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 01:21:03.370478 containerd[1479]: time="2026-04-16T01:21:03.368969283Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7cdb99654c-h52cj,Uid:7dc182c6-b5fb-40af-9b7a-573725ca7063,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"21724c0444c3fa8f853d78fa70a0c51b65afd1017ab2d62e583920b630791d2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 01:21:03.377942 kubelet[2548]: E0416 01:21:03.376005 2548 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21724c0444c3fa8f853d78fa70a0c51b65afd1017ab2d62e583920b630791d2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 01:21:03.377942 kubelet[2548]: E0416 01:21:03.376207 2548 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21724c0444c3fa8f853d78fa70a0c51b65afd1017ab2d62e583920b630791d2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7cdb99654c-h52cj"
Apr 16 01:21:03.377942 kubelet[2548]: E0416 01:21:03.376239 2548 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21724c0444c3fa8f853d78fa70a0c51b65afd1017ab2d62e583920b630791d2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7cdb99654c-h52cj"
Apr 16 01:21:03.380397 kubelet[2548]: E0416 01:21:03.376319 2548 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7cdb99654c-h52cj_calico-system(7dc182c6-b5fb-40af-9b7a-573725ca7063)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7cdb99654c-h52cj_calico-system(7dc182c6-b5fb-40af-9b7a-573725ca7063)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"21724c0444c3fa8f853d78fa70a0c51b65afd1017ab2d62e583920b630791d2c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7cdb99654c-h52cj" podUID="7dc182c6-b5fb-40af-9b7a-573725ca7063"
Apr 16 01:21:03.382029 containerd[1479]: time="2026-04-16T01:21:03.381117645Z" level=error msg="Failed to destroy network for sandbox \"111469dd43a8e2e72a5279cc7f65b1f9f8492045f8f2fce24cbc2f3376245b3b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 01:21:03.388882 containerd[1479]: time="2026-04-16T01:21:03.385863915Z" level=error msg="encountered an error cleaning up failed sandbox \"111469dd43a8e2e72a5279cc7f65b1f9f8492045f8f2fce24cbc2f3376245b3b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 01:21:03.388882 containerd[1479]: time="2026-04-16T01:21:03.385953919Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8584db774c-7qbkw,Uid:b5113220-c80b-4e1b-afea-3fb7f3d652bc,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"111469dd43a8e2e72a5279cc7f65b1f9f8492045f8f2fce24cbc2f3376245b3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 01:21:03.389497 kubelet[2548]: E0416 01:21:03.386271 2548 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"111469dd43a8e2e72a5279cc7f65b1f9f8492045f8f2fce24cbc2f3376245b3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 01:21:03.389497 kubelet[2548]: E0416 01:21:03.386375 2548 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"111469dd43a8e2e72a5279cc7f65b1f9f8492045f8f2fce24cbc2f3376245b3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-8584db774c-7qbkw"
Apr 16 01:21:03.389497 kubelet[2548]: E0416 01:21:03.386402 2548 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"111469dd43a8e2e72a5279cc7f65b1f9f8492045f8f2fce24cbc2f3376245b3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-8584db774c-7qbkw"
Apr 16 01:21:03.391088 kubelet[2548]: E0416 01:21:03.386469 2548 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8584db774c-7qbkw_calico-system(b5113220-c80b-4e1b-afea-3fb7f3d652bc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8584db774c-7qbkw_calico-system(b5113220-c80b-4e1b-afea-3fb7f3d652bc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"111469dd43a8e2e72a5279cc7f65b1f9f8492045f8f2fce24cbc2f3376245b3b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-8584db774c-7qbkw" podUID="b5113220-c80b-4e1b-afea-3fb7f3d652bc"
Apr 16 01:21:03.391319 containerd[1479]: time="2026-04-16T01:21:03.390420034Z" level=error msg="Failed to destroy network for sandbox \"bb93fe724c151bea4284b0981e0457adce75bc215496568b9214d3839e19a642\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 01:21:03.392155 containerd[1479]: time="2026-04-16T01:21:03.391935748Z" level=error msg="Failed to destroy network for sandbox \"d0f609cf62ba2451b793e0f169919cf887db6d1e23320a4807fe5b50180f1640\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 01:21:03.395890 containerd[1479]: time="2026-04-16T01:21:03.393460370Z" level=error msg="encountered an error cleaning up failed sandbox \"bb93fe724c151bea4284b0981e0457adce75bc215496568b9214d3839e19a642\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 01:21:03.395890 containerd[1479]: time="2026-04-16T01:21:03.393525700Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-ss6nt,Uid:9471422e-1a11-4a68-a210-42dc7d4df58a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bb93fe724c151bea4284b0981e0457adce75bc215496568b9214d3839e19a642\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 01:21:03.395890 containerd[1479]: time="2026-04-16T01:21:03.395080095Z" level=error msg="encountered an error cleaning up failed sandbox \"d0f609cf62ba2451b793e0f169919cf887db6d1e23320a4807fe5b50180f1640\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 01:21:03.395890 containerd[1479]: time="2026-04-16T01:21:03.395121491Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-t5lq8,Uid:394e94ea-67c5-464b-b3aa-188a9710b888,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d0f609cf62ba2451b793e0f169919cf887db6d1e23320a4807fe5b50180f1640\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 01:21:03.396250 kubelet[2548]: E0416 01:21:03.394063 2548 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb93fe724c151bea4284b0981e0457adce75bc215496568b9214d3839e19a642\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 01:21:03.396250 kubelet[2548]: E0416 01:21:03.394160 2548 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb93fe724c151bea4284b0981e0457adce75bc215496568b9214d3839e19a642\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-ss6nt"
Apr 16 01:21:03.396250 kubelet[2548]: E0416 01:21:03.394181 2548 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb93fe724c151bea4284b0981e0457adce75bc215496568b9214d3839e19a642\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-ss6nt"
Apr 16 01:21:03.396316 kubelet[2548]: E0416 01:21:03.394250 2548 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-ss6nt_kube-system(9471422e-1a11-4a68-a210-42dc7d4df58a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-ss6nt_kube-system(9471422e-1a11-4a68-a210-42dc7d4df58a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bb93fe724c151bea4284b0981e0457adce75bc215496568b9214d3839e19a642\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-ss6nt" podUID="9471422e-1a11-4a68-a210-42dc7d4df58a"
Apr 16 01:21:03.398406 kubelet[2548]: E0416 01:21:03.397664 2548 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0f609cf62ba2451b793e0f169919cf887db6d1e23320a4807fe5b50180f1640\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 01:21:03.398406 kubelet[2548]: E0416 01:21:03.398033 2548 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0f609cf62ba2451b793e0f169919cf887db6d1e23320a4807fe5b50180f1640\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-t5lq8"
Apr 16 01:21:03.398406 kubelet[2548]: E0416 01:21:03.398053 2548 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0f609cf62ba2451b793e0f169919cf887db6d1e23320a4807fe5b50180f1640\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-t5lq8"
Apr 16 01:21:03.398504 kubelet[2548]: E0416 01:21:03.398094 2548 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-9f7667bb8-t5lq8_calico-system(394e94ea-67c5-464b-b3aa-188a9710b888)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-9f7667bb8-t5lq8_calico-system(394e94ea-67c5-464b-b3aa-188a9710b888)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d0f609cf62ba2451b793e0f169919cf887db6d1e23320a4807fe5b50180f1640\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-t5lq8" podUID="394e94ea-67c5-464b-b3aa-188a9710b888"
Apr 16 01:21:03.407339 systemd[1]: Started cri-containerd-d5e02013b3c46f1e812134a7b7a4843b0ad370c9fa7e8a9681876e6ad9ab5c7a.scope - libcontainer container d5e02013b3c46f1e812134a7b7a4843b0ad370c9fa7e8a9681876e6ad9ab5c7a.
Apr 16 01:21:03.523916 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bb93fe724c151bea4284b0981e0457adce75bc215496568b9214d3839e19a642-shm.mount: Deactivated successfully.
Apr 16 01:21:03.523995 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-111469dd43a8e2e72a5279cc7f65b1f9f8492045f8f2fce24cbc2f3376245b3b-shm.mount: Deactivated successfully.
Apr 16 01:21:03.524038 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d0f609cf62ba2451b793e0f169919cf887db6d1e23320a4807fe5b50180f1640-shm.mount: Deactivated successfully.
Apr 16 01:21:03.524076 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-21724c0444c3fa8f853d78fa70a0c51b65afd1017ab2d62e583920b630791d2c-shm.mount: Deactivated successfully.
Apr 16 01:21:03.524116 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dc960b7b761a84e0f5415d6c72b400955d208003b44ac8de1a397a445a12a53d-shm.mount: Deactivated successfully.
Apr 16 01:21:03.524154 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-464e17545cf496fb669888aabb34a1a43a7a42ae4211455463950faf7373af64-shm.mount: Deactivated successfully.
Apr 16 01:21:03.524190 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-21e8ff37ca63ead3bf838e9da5da31fa764cfb74c3028b6482f0b847ac03c49e-shm.mount: Deactivated successfully.
Apr 16 01:21:03.569657 containerd[1479]: time="2026-04-16T01:21:03.569258163Z" level=info msg="StartContainer for \"d5e02013b3c46f1e812134a7b7a4843b0ad370c9fa7e8a9681876e6ad9ab5c7a\" returns successfully"
Apr 16 01:21:03.835647 kubelet[2548]: I0416 01:21:03.834097 2548 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb93fe724c151bea4284b0981e0457adce75bc215496568b9214d3839e19a642"
Apr 16 01:21:03.859220 kubelet[2548]: I0416 01:21:03.857385 2548 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="111469dd43a8e2e72a5279cc7f65b1f9f8492045f8f2fce24cbc2f3376245b3b"
Apr 16 01:21:03.886921 kubelet[2548]: I0416 01:21:03.886118 2548 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc960b7b761a84e0f5415d6c72b400955d208003b44ac8de1a397a445a12a53d"
Apr 16 01:21:03.888489 containerd[1479]: time="2026-04-16T01:21:03.888133621Z" level=info msg="StopPodSandbox for \"bb93fe724c151bea4284b0981e0457adce75bc215496568b9214d3839e19a642\""
Apr 16 01:21:03.892215 containerd[1479]: time="2026-04-16T01:21:03.889415350Z" level=info msg="StopPodSandbox for \"111469dd43a8e2e72a5279cc7f65b1f9f8492045f8f2fce24cbc2f3376245b3b\""
Apr 16 01:21:03.892215 containerd[1479]: time="2026-04-16T01:21:03.891939048Z" level=info msg="Ensure that sandbox bb93fe724c151bea4284b0981e0457adce75bc215496568b9214d3839e19a642 in task-service has been cleanup successfully"
Apr 16 01:21:03.893066 containerd[1479]: time="2026-04-16T01:21:03.892412946Z" level=info msg="Ensure that sandbox 111469dd43a8e2e72a5279cc7f65b1f9f8492045f8f2fce24cbc2f3376245b3b in task-service has been cleanup successfully"
Apr 16 01:21:03.896920 containerd[1479]: time="2026-04-16T01:21:03.896413040Z" level=info msg="StopPodSandbox for \"dc960b7b761a84e0f5415d6c72b400955d208003b44ac8de1a397a445a12a53d\""
Apr 16 01:21:03.897481 containerd[1479]: time="2026-04-16T01:21:03.897459339Z" level=info msg="Ensure that sandbox dc960b7b761a84e0f5415d6c72b400955d208003b44ac8de1a397a445a12a53d in task-service has been cleanup successfully"
Apr 16 01:21:03.908663 kubelet[2548]: I0416 01:21:03.907344 2548 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a59e0e2c5772fb35384b79900afb83bb329d28e9084a0357b45aa02a43a113e8"
Apr 16 01:21:03.910840 containerd[1479]: time="2026-04-16T01:21:03.910200264Z" level=info msg="StopPodSandbox for \"a59e0e2c5772fb35384b79900afb83bb329d28e9084a0357b45aa02a43a113e8\""
Apr 16 01:21:03.927257 containerd[1479]: time="2026-04-16T01:21:03.927224618Z" level=info msg="Ensure that sandbox a59e0e2c5772fb35384b79900afb83bb329d28e9084a0357b45aa02a43a113e8 in task-service has been cleanup successfully"
Apr 16 01:21:03.968311 kubelet[2548]: I0416 01:21:03.968076 2548 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="464e17545cf496fb669888aabb34a1a43a7a42ae4211455463950faf7373af64"
Apr 16 01:21:03.977959 containerd[1479]: time="2026-04-16T01:21:03.977920816Z" level=info msg="StopPodSandbox for \"464e17545cf496fb669888aabb34a1a43a7a42ae4211455463950faf7373af64\""
Apr 16 01:21:03.978516 containerd[1479]: time="2026-04-16T01:21:03.978494355Z" level=info msg="Ensure that sandbox 464e17545cf496fb669888aabb34a1a43a7a42ae4211455463950faf7373af64 in task-service has been cleanup successfully"
Apr 16 01:21:03.993974 kubelet[2548]: I0416 01:21:03.993076 2548 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21e8ff37ca63ead3bf838e9da5da31fa764cfb74c3028b6482f0b847ac03c49e"
Apr 16 01:21:03.997063 containerd[1479]: time="2026-04-16T01:21:03.997033894Z" level=info msg="StopPodSandbox for \"21e8ff37ca63ead3bf838e9da5da31fa764cfb74c3028b6482f0b847ac03c49e\""
Apr 16 01:21:03.998374 containerd[1479]: time="2026-04-16T01:21:03.998239503Z" level=info msg="Ensure that sandbox 21e8ff37ca63ead3bf838e9da5da31fa764cfb74c3028b6482f0b847ac03c49e in task-service has been cleanup successfully"
Apr 16 01:21:04.039997 kubelet[2548]: I0416 01:21:04.039475 2548 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-node-z8n4h" podStartSLOduration=3.089082394 podStartE2EDuration="41.039428806s" podCreationTimestamp="2026-04-16 01:20:23 +0000 UTC" firstStartedPulling="2026-04-16 01:20:24.805137296 +0000 UTC m=+26.497271579" lastFinishedPulling="2026-04-16 01:21:02.755483701 +0000 UTC m=+64.447617991" observedRunningTime="2026-04-16 01:21:04.026464992 +0000 UTC m=+65.718599279" watchObservedRunningTime="2026-04-16 01:21:04.039428806 +0000 UTC m=+65.731563103"
Apr 16 01:21:04.055444 kubelet[2548]: I0416 01:21:04.055281 2548 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0f609cf62ba2451b793e0f169919cf887db6d1e23320a4807fe5b50180f1640"
Apr 16 01:21:04.059004 containerd[1479]: time="2026-04-16T01:21:04.058976082Z" level=info msg="StopPodSandbox for \"d0f609cf62ba2451b793e0f169919cf887db6d1e23320a4807fe5b50180f1640\""
Apr 16 01:21:04.059217 containerd[1479]: time="2026-04-16T01:21:04.059205855Z" level=info msg="Ensure that sandbox d0f609cf62ba2451b793e0f169919cf887db6d1e23320a4807fe5b50180f1640 in task-service has been cleanup successfully"
Apr 16 01:21:04.083434 kubelet[2548]: I0416 01:21:04.083337 2548 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21724c0444c3fa8f853d78fa70a0c51b65afd1017ab2d62e583920b630791d2c"
Apr 16 01:21:04.086970 containerd[1479]: time="2026-04-16T01:21:04.086052822Z" level=info msg="StopPodSandbox for \"21724c0444c3fa8f853d78fa70a0c51b65afd1017ab2d62e583920b630791d2c\""
Apr 16 01:21:04.086970 containerd[1479]: time="2026-04-16T01:21:04.086337915Z" level=info msg="Ensure that sandbox 21724c0444c3fa8f853d78fa70a0c51b65afd1017ab2d62e583920b630791d2c in task-service has been cleanup successfully"
Apr 16 01:21:04.188228 containerd[1479]: time="2026-04-16T01:21:04.188110324Z" level=error msg="StopPodSandbox for \"111469dd43a8e2e72a5279cc7f65b1f9f8492045f8f2fce24cbc2f3376245b3b\" failed" error="failed to destroy network for sandbox \"111469dd43a8e2e72a5279cc7f65b1f9f8492045f8f2fce24cbc2f3376245b3b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 01:21:04.190107 kubelet[2548]: E0416 01:21:04.189648 2548 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"111469dd43a8e2e72a5279cc7f65b1f9f8492045f8f2fce24cbc2f3376245b3b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="111469dd43a8e2e72a5279cc7f65b1f9f8492045f8f2fce24cbc2f3376245b3b"
Apr 16 01:21:04.190107 kubelet[2548]: E0416 01:21:04.189901 2548 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"111469dd43a8e2e72a5279cc7f65b1f9f8492045f8f2fce24cbc2f3376245b3b"}
Apr 16 01:21:04.190107 kubelet[2548]: E0416 01:21:04.190011 2548 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b5113220-c80b-4e1b-afea-3fb7f3d652bc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"111469dd43a8e2e72a5279cc7f65b1f9f8492045f8f2fce24cbc2f3376245b3b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Apr 16 01:21:04.190107 kubelet[2548]: E0416 01:21:04.190047 2548 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b5113220-c80b-4e1b-afea-3fb7f3d652bc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox
\\\"111469dd43a8e2e72a5279cc7f65b1f9f8492045f8f2fce24cbc2f3376245b3b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-8584db774c-7qbkw" podUID="b5113220-c80b-4e1b-afea-3fb7f3d652bc" Apr 16 01:21:04.382656 containerd[1479]: time="2026-04-16T01:21:04.380357111Z" level=error msg="StopPodSandbox for \"bb93fe724c151bea4284b0981e0457adce75bc215496568b9214d3839e19a642\" failed" error="failed to destroy network for sandbox \"bb93fe724c151bea4284b0981e0457adce75bc215496568b9214d3839e19a642\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 01:21:04.383429 kubelet[2548]: E0416 01:21:04.381432 2548 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bb93fe724c151bea4284b0981e0457adce75bc215496568b9214d3839e19a642\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bb93fe724c151bea4284b0981e0457adce75bc215496568b9214d3839e19a642" Apr 16 01:21:04.383429 kubelet[2548]: E0416 01:21:04.381650 2548 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bb93fe724c151bea4284b0981e0457adce75bc215496568b9214d3839e19a642"} Apr 16 01:21:04.384994 kubelet[2548]: E0416 01:21:04.384378 2548 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9471422e-1a11-4a68-a210-42dc7d4df58a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bb93fe724c151bea4284b0981e0457adce75bc215496568b9214d3839e19a642\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 16 01:21:04.384994 kubelet[2548]: E0416 01:21:04.384455 2548 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9471422e-1a11-4a68-a210-42dc7d4df58a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bb93fe724c151bea4284b0981e0457adce75bc215496568b9214d3839e19a642\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-ss6nt" podUID="9471422e-1a11-4a68-a210-42dc7d4df58a" Apr 16 01:21:04.405462 containerd[1479]: time="2026-04-16T01:21:04.405418362Z" level=error msg="StopPodSandbox for \"21e8ff37ca63ead3bf838e9da5da31fa764cfb74c3028b6482f0b847ac03c49e\" failed" error="failed to destroy network for sandbox \"21e8ff37ca63ead3bf838e9da5da31fa764cfb74c3028b6482f0b847ac03c49e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 01:21:04.407848 kubelet[2548]: E0416 01:21:04.406388 2548 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"21e8ff37ca63ead3bf838e9da5da31fa764cfb74c3028b6482f0b847ac03c49e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="21e8ff37ca63ead3bf838e9da5da31fa764cfb74c3028b6482f0b847ac03c49e" Apr 16 01:21:04.407848 kubelet[2548]: E0416 01:21:04.406435 2548 kuberuntime_manager.go:1881] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"21e8ff37ca63ead3bf838e9da5da31fa764cfb74c3028b6482f0b847ac03c49e"} Apr 16 01:21:04.407848 kubelet[2548]: E0416 01:21:04.406459 2548 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"36e0b1c3-35d3-4f7d-a631-c6ac0e723311\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"21e8ff37ca63ead3bf838e9da5da31fa764cfb74c3028b6482f0b847ac03c49e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 16 01:21:04.407848 kubelet[2548]: E0416 01:21:04.406485 2548 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"36e0b1c3-35d3-4f7d-a631-c6ac0e723311\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"21e8ff37ca63ead3bf838e9da5da31fa764cfb74c3028b6482f0b847ac03c49e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-b2mtc" podUID="36e0b1c3-35d3-4f7d-a631-c6ac0e723311" Apr 16 01:21:04.439496 containerd[1479]: time="2026-04-16T01:21:04.439341197Z" level=error msg="StopPodSandbox for \"464e17545cf496fb669888aabb34a1a43a7a42ae4211455463950faf7373af64\" failed" error="failed to destroy network for sandbox \"464e17545cf496fb669888aabb34a1a43a7a42ae4211455463950faf7373af64\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 01:21:04.444826 kubelet[2548]: E0416 01:21:04.443279 2548 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"464e17545cf496fb669888aabb34a1a43a7a42ae4211455463950faf7373af64\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="464e17545cf496fb669888aabb34a1a43a7a42ae4211455463950faf7373af64" Apr 16 01:21:04.444826 kubelet[2548]: E0416 01:21:04.443374 2548 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"464e17545cf496fb669888aabb34a1a43a7a42ae4211455463950faf7373af64"} Apr 16 01:21:04.444826 kubelet[2548]: E0416 01:21:04.443414 2548 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"925d015b-3f96-4cb8-a9c1-64b9f1a67c52\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"464e17545cf496fb669888aabb34a1a43a7a42ae4211455463950faf7373af64\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 16 01:21:04.444826 kubelet[2548]: E0416 01:21:04.443442 2548 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"925d015b-3f96-4cb8-a9c1-64b9f1a67c52\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"464e17545cf496fb669888aabb34a1a43a7a42ae4211455463950faf7373af64\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-kv5bx" podUID="925d015b-3f96-4cb8-a9c1-64b9f1a67c52" Apr 16 01:21:04.445442 containerd[1479]: time="2026-04-16T01:21:04.444165991Z" level=error msg="StopPodSandbox for \"dc960b7b761a84e0f5415d6c72b400955d208003b44ac8de1a397a445a12a53d\" failed" error="failed to destroy network for sandbox 
\"dc960b7b761a84e0f5415d6c72b400955d208003b44ac8de1a397a445a12a53d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 01:21:04.445476 kubelet[2548]: E0416 01:21:04.444982 2548 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dc960b7b761a84e0f5415d6c72b400955d208003b44ac8de1a397a445a12a53d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dc960b7b761a84e0f5415d6c72b400955d208003b44ac8de1a397a445a12a53d" Apr 16 01:21:04.445476 kubelet[2548]: E0416 01:21:04.445045 2548 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dc960b7b761a84e0f5415d6c72b400955d208003b44ac8de1a397a445a12a53d"} Apr 16 01:21:04.445476 kubelet[2548]: E0416 01:21:04.445064 2548 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"794b6563-cebd-4102-a96c-e20a75901f97\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dc960b7b761a84e0f5415d6c72b400955d208003b44ac8de1a397a445a12a53d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 16 01:21:04.445476 kubelet[2548]: E0416 01:21:04.445082 2548 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"794b6563-cebd-4102-a96c-e20a75901f97\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dc960b7b761a84e0f5415d6c72b400955d208003b44ac8de1a397a445a12a53d\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-8584db774c-nfctt" podUID="794b6563-cebd-4102-a96c-e20a75901f97" Apr 16 01:21:04.451864 containerd[1479]: time="2026-04-16T01:21:04.451655103Z" level=error msg="StopPodSandbox for \"a59e0e2c5772fb35384b79900afb83bb329d28e9084a0357b45aa02a43a113e8\" failed" error="failed to destroy network for sandbox \"a59e0e2c5772fb35384b79900afb83bb329d28e9084a0357b45aa02a43a113e8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 01:21:04.452965 kubelet[2548]: E0416 01:21:04.452841 2548 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a59e0e2c5772fb35384b79900afb83bb329d28e9084a0357b45aa02a43a113e8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a59e0e2c5772fb35384b79900afb83bb329d28e9084a0357b45aa02a43a113e8" Apr 16 01:21:04.452965 kubelet[2548]: E0416 01:21:04.452881 2548 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a59e0e2c5772fb35384b79900afb83bb329d28e9084a0357b45aa02a43a113e8"} Apr 16 01:21:04.452965 kubelet[2548]: E0416 01:21:04.452910 2548 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"822d1a89-65ea-4c5d-be46-0a8d4c12be69\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a59e0e2c5772fb35384b79900afb83bb329d28e9084a0357b45aa02a43a113e8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" Apr 16 01:21:04.452965 kubelet[2548]: E0416 01:21:04.452932 2548 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"822d1a89-65ea-4c5d-be46-0a8d4c12be69\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a59e0e2c5772fb35384b79900afb83bb329d28e9084a0357b45aa02a43a113e8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-67f84cb8bf-5s4sf" podUID="822d1a89-65ea-4c5d-be46-0a8d4c12be69" Apr 16 01:21:04.458809 containerd[1479]: time="2026-04-16T01:21:04.458232562Z" level=error msg="StopPodSandbox for \"21724c0444c3fa8f853d78fa70a0c51b65afd1017ab2d62e583920b630791d2c\" failed" error="failed to destroy network for sandbox \"21724c0444c3fa8f853d78fa70a0c51b65afd1017ab2d62e583920b630791d2c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 01:21:04.460405 kubelet[2548]: E0416 01:21:04.459985 2548 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"21724c0444c3fa8f853d78fa70a0c51b65afd1017ab2d62e583920b630791d2c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="21724c0444c3fa8f853d78fa70a0c51b65afd1017ab2d62e583920b630791d2c" Apr 16 01:21:04.460405 kubelet[2548]: E0416 01:21:04.460159 2548 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"21724c0444c3fa8f853d78fa70a0c51b65afd1017ab2d62e583920b630791d2c"} Apr 16 01:21:04.460405 kubelet[2548]: E0416 
01:21:04.460189 2548 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7dc182c6-b5fb-40af-9b7a-573725ca7063\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"21724c0444c3fa8f853d78fa70a0c51b65afd1017ab2d62e583920b630791d2c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 16 01:21:04.460405 kubelet[2548]: E0416 01:21:04.460228 2548 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7dc182c6-b5fb-40af-9b7a-573725ca7063\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"21724c0444c3fa8f853d78fa70a0c51b65afd1017ab2d62e583920b630791d2c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7cdb99654c-h52cj" podUID="7dc182c6-b5fb-40af-9b7a-573725ca7063" Apr 16 01:21:04.467784 containerd[1479]: time="2026-04-16T01:21:04.466249732Z" level=error msg="StopPodSandbox for \"d0f609cf62ba2451b793e0f169919cf887db6d1e23320a4807fe5b50180f1640\" failed" error="failed to destroy network for sandbox \"d0f609cf62ba2451b793e0f169919cf887db6d1e23320a4807fe5b50180f1640\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 01:21:04.470824 kubelet[2548]: E0416 01:21:04.469957 2548 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d0f609cf62ba2451b793e0f169919cf887db6d1e23320a4807fe5b50180f1640\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d0f609cf62ba2451b793e0f169919cf887db6d1e23320a4807fe5b50180f1640" Apr 16 01:21:04.470824 kubelet[2548]: E0416 01:21:04.470126 2548 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d0f609cf62ba2451b793e0f169919cf887db6d1e23320a4807fe5b50180f1640"} Apr 16 01:21:04.470824 kubelet[2548]: E0416 01:21:04.470157 2548 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"394e94ea-67c5-464b-b3aa-188a9710b888\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d0f609cf62ba2451b793e0f169919cf887db6d1e23320a4807fe5b50180f1640\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 16 01:21:04.470824 kubelet[2548]: E0416 01:21:04.470179 2548 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"394e94ea-67c5-464b-b3aa-188a9710b888\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d0f609cf62ba2451b793e0f169919cf887db6d1e23320a4807fe5b50180f1640\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-t5lq8" podUID="394e94ea-67c5-464b-b3aa-188a9710b888" Apr 16 01:21:05.097335 containerd[1479]: time="2026-04-16T01:21:05.091623894Z" level=info msg="StopPodSandbox for \"21724c0444c3fa8f853d78fa70a0c51b65afd1017ab2d62e583920b630791d2c\"" Apr 16 01:21:05.173444 systemd[1]: run-containerd-runc-k8s.io-d5e02013b3c46f1e812134a7b7a4843b0ad370c9fa7e8a9681876e6ad9ab5c7a-runc.Ib1cmS.mount: Deactivated successfully. 
Apr 16 01:21:05.909203 containerd[1479]: 2026-04-16 01:21:05.660 [INFO][4006] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="21724c0444c3fa8f853d78fa70a0c51b65afd1017ab2d62e583920b630791d2c" Apr 16 01:21:05.909203 containerd[1479]: 2026-04-16 01:21:05.662 [INFO][4006] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="21724c0444c3fa8f853d78fa70a0c51b65afd1017ab2d62e583920b630791d2c" iface="eth0" netns="/var/run/netns/cni-7cc8170e-9afe-1b5c-bf4a-063c760ec083" Apr 16 01:21:05.909203 containerd[1479]: 2026-04-16 01:21:05.663 [INFO][4006] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="21724c0444c3fa8f853d78fa70a0c51b65afd1017ab2d62e583920b630791d2c" iface="eth0" netns="/var/run/netns/cni-7cc8170e-9afe-1b5c-bf4a-063c760ec083" Apr 16 01:21:05.909203 containerd[1479]: 2026-04-16 01:21:05.667 [INFO][4006] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="21724c0444c3fa8f853d78fa70a0c51b65afd1017ab2d62e583920b630791d2c" iface="eth0" netns="/var/run/netns/cni-7cc8170e-9afe-1b5c-bf4a-063c760ec083" Apr 16 01:21:05.909203 containerd[1479]: 2026-04-16 01:21:05.667 [INFO][4006] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="21724c0444c3fa8f853d78fa70a0c51b65afd1017ab2d62e583920b630791d2c" Apr 16 01:21:05.909203 containerd[1479]: 2026-04-16 01:21:05.668 [INFO][4006] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="21724c0444c3fa8f853d78fa70a0c51b65afd1017ab2d62e583920b630791d2c" Apr 16 01:21:05.909203 containerd[1479]: 2026-04-16 01:21:05.856 [INFO][4036] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="21724c0444c3fa8f853d78fa70a0c51b65afd1017ab2d62e583920b630791d2c" HandleID="k8s-pod-network.21724c0444c3fa8f853d78fa70a0c51b65afd1017ab2d62e583920b630791d2c" Workload="localhost-k8s-whisker--7cdb99654c--h52cj-eth0" Apr 16 01:21:05.909203 containerd[1479]: 2026-04-16 01:21:05.860 [INFO][4036] 
ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:21:05.909203 containerd[1479]: 2026-04-16 01:21:05.860 [INFO][4036] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 01:21:05.909203 containerd[1479]: 2026-04-16 01:21:05.881 [WARNING][4036] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="21724c0444c3fa8f853d78fa70a0c51b65afd1017ab2d62e583920b630791d2c" HandleID="k8s-pod-network.21724c0444c3fa8f853d78fa70a0c51b65afd1017ab2d62e583920b630791d2c" Workload="localhost-k8s-whisker--7cdb99654c--h52cj-eth0" Apr 16 01:21:05.909203 containerd[1479]: 2026-04-16 01:21:05.881 [INFO][4036] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="21724c0444c3fa8f853d78fa70a0c51b65afd1017ab2d62e583920b630791d2c" HandleID="k8s-pod-network.21724c0444c3fa8f853d78fa70a0c51b65afd1017ab2d62e583920b630791d2c" Workload="localhost-k8s-whisker--7cdb99654c--h52cj-eth0" Apr 16 01:21:05.909203 containerd[1479]: 2026-04-16 01:21:05.887 [INFO][4036] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 01:21:05.909203 containerd[1479]: 2026-04-16 01:21:05.905 [INFO][4006] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="21724c0444c3fa8f853d78fa70a0c51b65afd1017ab2d62e583920b630791d2c" Apr 16 01:21:05.912132 containerd[1479]: time="2026-04-16T01:21:05.911358433Z" level=info msg="TearDown network for sandbox \"21724c0444c3fa8f853d78fa70a0c51b65afd1017ab2d62e583920b630791d2c\" successfully" Apr 16 01:21:05.912132 containerd[1479]: time="2026-04-16T01:21:05.911395557Z" level=info msg="StopPodSandbox for \"21724c0444c3fa8f853d78fa70a0c51b65afd1017ab2d62e583920b630791d2c\" returns successfully" Apr 16 01:21:05.911984 systemd[1]: run-netns-cni\x2d7cc8170e\x2d9afe\x2d1b5c\x2dbf4a\x2d063c760ec083.mount: Deactivated successfully. 
Apr 16 01:21:06.145022 kubelet[2548]: I0416 01:21:06.141645 2548 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/7dc182c6-b5fb-40af-9b7a-573725ca7063-whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7dc182c6-b5fb-40af-9b7a-573725ca7063-whisker-backend-key-pair\") pod \"7dc182c6-b5fb-40af-9b7a-573725ca7063\" (UID: \"7dc182c6-b5fb-40af-9b7a-573725ca7063\") " Apr 16 01:21:06.145022 kubelet[2548]: I0416 01:21:06.142285 2548 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/7dc182c6-b5fb-40af-9b7a-573725ca7063-kube-api-access-wmw7k\" (UniqueName: \"kubernetes.io/projected/7dc182c6-b5fb-40af-9b7a-573725ca7063-kube-api-access-wmw7k\") pod \"7dc182c6-b5fb-40af-9b7a-573725ca7063\" (UID: \"7dc182c6-b5fb-40af-9b7a-573725ca7063\") " Apr 16 01:21:06.145022 kubelet[2548]: I0416 01:21:06.142318 2548 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/7dc182c6-b5fb-40af-9b7a-573725ca7063-nginx-config\" (UniqueName: \"kubernetes.io/configmap/7dc182c6-b5fb-40af-9b7a-573725ca7063-nginx-config\") pod \"7dc182c6-b5fb-40af-9b7a-573725ca7063\" (UID: \"7dc182c6-b5fb-40af-9b7a-573725ca7063\") " Apr 16 01:21:06.145022 kubelet[2548]: I0416 01:21:06.143269 2548 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/7dc182c6-b5fb-40af-9b7a-573725ca7063-whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7dc182c6-b5fb-40af-9b7a-573725ca7063-whisker-ca-bundle\") pod \"7dc182c6-b5fb-40af-9b7a-573725ca7063\" (UID: \"7dc182c6-b5fb-40af-9b7a-573725ca7063\") " Apr 16 01:21:06.145022 kubelet[2548]: I0416 01:21:06.144054 2548 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7dc182c6-b5fb-40af-9b7a-573725ca7063-whisker-ca-bundle" pod "7dc182c6-b5fb-40af-9b7a-573725ca7063" (UID: "7dc182c6-b5fb-40af-9b7a-573725ca7063"). 
InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 01:21:06.147287 kubelet[2548]: I0416 01:21:06.145377 2548 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7dc182c6-b5fb-40af-9b7a-573725ca7063-nginx-config" pod "7dc182c6-b5fb-40af-9b7a-573725ca7063" (UID: "7dc182c6-b5fb-40af-9b7a-573725ca7063"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 01:21:06.166182 systemd[1]: var-lib-kubelet-pods-7dc182c6\x2db5fb\x2d40af\x2d9b7a\x2d573725ca7063-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwmw7k.mount: Deactivated successfully. Apr 16 01:21:06.171012 kubelet[2548]: I0416 01:21:06.170422 2548 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dc182c6-b5fb-40af-9b7a-573725ca7063-whisker-backend-key-pair" pod "7dc182c6-b5fb-40af-9b7a-573725ca7063" (UID: "7dc182c6-b5fb-40af-9b7a-573725ca7063"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 01:21:06.172268 kubelet[2548]: I0416 01:21:06.171302 2548 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dc182c6-b5fb-40af-9b7a-573725ca7063-kube-api-access-wmw7k" pod "7dc182c6-b5fb-40af-9b7a-573725ca7063" (UID: "7dc182c6-b5fb-40af-9b7a-573725ca7063"). InnerVolumeSpecName "kube-api-access-wmw7k". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 16 01:21:06.174479 systemd[1]: var-lib-kubelet-pods-7dc182c6\x2db5fb\x2d40af\x2d9b7a\x2d573725ca7063-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Apr 16 01:21:06.246383 kubelet[2548]: I0416 01:21:06.245451 2548 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7dc182c6-b5fb-40af-9b7a-573725ca7063-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Apr 16 01:21:06.246383 kubelet[2548]: I0416 01:21:06.246207 2548 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wmw7k\" (UniqueName: \"kubernetes.io/projected/7dc182c6-b5fb-40af-9b7a-573725ca7063-kube-api-access-wmw7k\") on node \"localhost\" DevicePath \"\"" Apr 16 01:21:06.246383 kubelet[2548]: I0416 01:21:06.246271 2548 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/7dc182c6-b5fb-40af-9b7a-573725ca7063-nginx-config\") on node \"localhost\" DevicePath \"\"" Apr 16 01:21:06.246383 kubelet[2548]: I0416 01:21:06.246318 2548 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7dc182c6-b5fb-40af-9b7a-573725ca7063-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Apr 16 01:21:06.417226 systemd[1]: Removed slice kubepods-besteffort-pod7dc182c6_b5fb_40af_9b7a_573725ca7063.slice - libcontainer container kubepods-besteffort-pod7dc182c6_b5fb_40af_9b7a_573725ca7063.slice. Apr 16 01:21:06.655905 systemd[1]: Created slice kubepods-besteffort-pod9701d0e1_f269_440d_8408_b925ce2f3f61.slice - libcontainer container kubepods-besteffort-pod9701d0e1_f269_440d_8408_b925ce2f3f61.slice. 
Apr 16 01:21:06.657658 kubelet[2548]: I0416 01:21:06.657459 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9701d0e1-f269-440d-8408-b925ce2f3f61-whisker-ca-bundle\") pod \"whisker-5676f44d56-vdvxw\" (UID: \"9701d0e1-f269-440d-8408-b925ce2f3f61\") " pod="calico-system/whisker-5676f44d56-vdvxw" Apr 16 01:21:06.659282 kubelet[2548]: I0416 01:21:06.657949 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/9701d0e1-f269-440d-8408-b925ce2f3f61-nginx-config\") pod \"whisker-5676f44d56-vdvxw\" (UID: \"9701d0e1-f269-440d-8408-b925ce2f3f61\") " pod="calico-system/whisker-5676f44d56-vdvxw" Apr 16 01:21:06.659282 kubelet[2548]: I0416 01:21:06.657978 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76fzv\" (UniqueName: \"kubernetes.io/projected/9701d0e1-f269-440d-8408-b925ce2f3f61-kube-api-access-76fzv\") pod \"whisker-5676f44d56-vdvxw\" (UID: \"9701d0e1-f269-440d-8408-b925ce2f3f61\") " pod="calico-system/whisker-5676f44d56-vdvxw" Apr 16 01:21:06.659282 kubelet[2548]: I0416 01:21:06.657998 2548 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9701d0e1-f269-440d-8408-b925ce2f3f61-whisker-backend-key-pair\") pod \"whisker-5676f44d56-vdvxw\" (UID: \"9701d0e1-f269-440d-8408-b925ce2f3f61\") " pod="calico-system/whisker-5676f44d56-vdvxw" Apr 16 01:21:06.745379 kubelet[2548]: I0416 01:21:06.745199 2548 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="7dc182c6-b5fb-40af-9b7a-573725ca7063" path="/var/lib/kubelet/pods/7dc182c6-b5fb-40af-9b7a-573725ca7063/volumes" Apr 16 01:21:06.973599 containerd[1479]: time="2026-04-16T01:21:06.971568185Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-5676f44d56-vdvxw,Uid:9701d0e1-f269-440d-8408-b925ce2f3f61,Namespace:calico-system,Attempt:0,}" Apr 16 01:21:07.799099 systemd-networkd[1385]: cali16e22923766: Link UP Apr 16 01:21:07.808296 systemd-networkd[1385]: cali16e22923766: Gained carrier Apr 16 01:21:07.950550 containerd[1479]: 2026-04-16 01:21:07.147 [ERROR][4060] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 16 01:21:07.950550 containerd[1479]: 2026-04-16 01:21:07.203 [INFO][4060] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5676f44d56--vdvxw-eth0 whisker-5676f44d56- calico-system 9701d0e1-f269-440d-8408-b925ce2f3f61 989 0 2026-04-16 01:21:06 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5676f44d56 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5676f44d56-vdvxw eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali16e22923766 [] [] }} ContainerID="dbb383c4c8336e3b27ca733969d85fcf904ffe73e4417691ecd64bb255df2572" Namespace="calico-system" Pod="whisker-5676f44d56-vdvxw" WorkloadEndpoint="localhost-k8s-whisker--5676f44d56--vdvxw-" Apr 16 01:21:07.950550 containerd[1479]: 2026-04-16 01:21:07.205 [INFO][4060] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dbb383c4c8336e3b27ca733969d85fcf904ffe73e4417691ecd64bb255df2572" Namespace="calico-system" Pod="whisker-5676f44d56-vdvxw" WorkloadEndpoint="localhost-k8s-whisker--5676f44d56--vdvxw-eth0" Apr 16 01:21:07.950550 containerd[1479]: 2026-04-16 01:21:07.393 [INFO][4072] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dbb383c4c8336e3b27ca733969d85fcf904ffe73e4417691ecd64bb255df2572" 
HandleID="k8s-pod-network.dbb383c4c8336e3b27ca733969d85fcf904ffe73e4417691ecd64bb255df2572" Workload="localhost-k8s-whisker--5676f44d56--vdvxw-eth0" Apr 16 01:21:07.950550 containerd[1479]: 2026-04-16 01:21:07.464 [INFO][4072] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="dbb383c4c8336e3b27ca733969d85fcf904ffe73e4417691ecd64bb255df2572" HandleID="k8s-pod-network.dbb383c4c8336e3b27ca733969d85fcf904ffe73e4417691ecd64bb255df2572" Workload="localhost-k8s-whisker--5676f44d56--vdvxw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00041e3f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5676f44d56-vdvxw", "timestamp":"2026-04-16 01:21:07.393041594 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000199a20)} Apr 16 01:21:07.950550 containerd[1479]: 2026-04-16 01:21:07.465 [INFO][4072] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:21:07.950550 containerd[1479]: 2026-04-16 01:21:07.470 [INFO][4072] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 01:21:07.950550 containerd[1479]: 2026-04-16 01:21:07.473 [INFO][4072] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 16 01:21:07.950550 containerd[1479]: 2026-04-16 01:21:07.500 [INFO][4072] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.dbb383c4c8336e3b27ca733969d85fcf904ffe73e4417691ecd64bb255df2572" host="localhost" Apr 16 01:21:07.950550 containerd[1479]: 2026-04-16 01:21:07.550 [INFO][4072] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 16 01:21:07.950550 containerd[1479]: 2026-04-16 01:21:07.581 [INFO][4072] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 16 01:21:07.950550 containerd[1479]: 2026-04-16 01:21:07.593 [INFO][4072] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 16 01:21:07.950550 containerd[1479]: 2026-04-16 01:21:07.615 [INFO][4072] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 16 01:21:07.950550 containerd[1479]: 2026-04-16 01:21:07.615 [INFO][4072] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.dbb383c4c8336e3b27ca733969d85fcf904ffe73e4417691ecd64bb255df2572" host="localhost" Apr 16 01:21:07.950550 containerd[1479]: 2026-04-16 01:21:07.625 [INFO][4072] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.dbb383c4c8336e3b27ca733969d85fcf904ffe73e4417691ecd64bb255df2572 Apr 16 01:21:07.950550 containerd[1479]: 2026-04-16 01:21:07.652 [INFO][4072] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.dbb383c4c8336e3b27ca733969d85fcf904ffe73e4417691ecd64bb255df2572" host="localhost" Apr 16 01:21:07.950550 containerd[1479]: 2026-04-16 01:21:07.686 [INFO][4072] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.dbb383c4c8336e3b27ca733969d85fcf904ffe73e4417691ecd64bb255df2572" host="localhost" Apr 16 01:21:07.950550 containerd[1479]: 2026-04-16 01:21:07.691 [INFO][4072] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.dbb383c4c8336e3b27ca733969d85fcf904ffe73e4417691ecd64bb255df2572" host="localhost" Apr 16 01:21:07.950550 containerd[1479]: 2026-04-16 01:21:07.693 [INFO][4072] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 01:21:07.950550 containerd[1479]: 2026-04-16 01:21:07.696 [INFO][4072] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="dbb383c4c8336e3b27ca733969d85fcf904ffe73e4417691ecd64bb255df2572" HandleID="k8s-pod-network.dbb383c4c8336e3b27ca733969d85fcf904ffe73e4417691ecd64bb255df2572" Workload="localhost-k8s-whisker--5676f44d56--vdvxw-eth0" Apr 16 01:21:07.958075 containerd[1479]: 2026-04-16 01:21:07.717 [INFO][4060] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dbb383c4c8336e3b27ca733969d85fcf904ffe73e4417691ecd64bb255df2572" Namespace="calico-system" Pod="whisker-5676f44d56-vdvxw" WorkloadEndpoint="localhost-k8s-whisker--5676f44d56--vdvxw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5676f44d56--vdvxw-eth0", GenerateName:"whisker-5676f44d56-", Namespace:"calico-system", SelfLink:"", UID:"9701d0e1-f269-440d-8408-b925ce2f3f61", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 21, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5676f44d56", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5676f44d56-vdvxw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali16e22923766", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:21:07.958075 containerd[1479]: 2026-04-16 01:21:07.718 [INFO][4060] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="dbb383c4c8336e3b27ca733969d85fcf904ffe73e4417691ecd64bb255df2572" Namespace="calico-system" Pod="whisker-5676f44d56-vdvxw" WorkloadEndpoint="localhost-k8s-whisker--5676f44d56--vdvxw-eth0" Apr 16 01:21:07.958075 containerd[1479]: 2026-04-16 01:21:07.719 [INFO][4060] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali16e22923766 ContainerID="dbb383c4c8336e3b27ca733969d85fcf904ffe73e4417691ecd64bb255df2572" Namespace="calico-system" Pod="whisker-5676f44d56-vdvxw" WorkloadEndpoint="localhost-k8s-whisker--5676f44d56--vdvxw-eth0" Apr 16 01:21:07.958075 containerd[1479]: 2026-04-16 01:21:07.824 [INFO][4060] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dbb383c4c8336e3b27ca733969d85fcf904ffe73e4417691ecd64bb255df2572" Namespace="calico-system" Pod="whisker-5676f44d56-vdvxw" WorkloadEndpoint="localhost-k8s-whisker--5676f44d56--vdvxw-eth0" Apr 16 01:21:07.958075 containerd[1479]: 2026-04-16 01:21:07.848 [INFO][4060] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dbb383c4c8336e3b27ca733969d85fcf904ffe73e4417691ecd64bb255df2572" Namespace="calico-system" Pod="whisker-5676f44d56-vdvxw" 
WorkloadEndpoint="localhost-k8s-whisker--5676f44d56--vdvxw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5676f44d56--vdvxw-eth0", GenerateName:"whisker-5676f44d56-", Namespace:"calico-system", SelfLink:"", UID:"9701d0e1-f269-440d-8408-b925ce2f3f61", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 21, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5676f44d56", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dbb383c4c8336e3b27ca733969d85fcf904ffe73e4417691ecd64bb255df2572", Pod:"whisker-5676f44d56-vdvxw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali16e22923766", MAC:"aa:14:2a:83:92:f3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:21:07.958075 containerd[1479]: 2026-04-16 01:21:07.931 [INFO][4060] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dbb383c4c8336e3b27ca733969d85fcf904ffe73e4417691ecd64bb255df2572" Namespace="calico-system" Pod="whisker-5676f44d56-vdvxw" WorkloadEndpoint="localhost-k8s-whisker--5676f44d56--vdvxw-eth0" Apr 16 01:21:08.277391 containerd[1479]: time="2026-04-16T01:21:08.276073790Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 01:21:08.277391 containerd[1479]: time="2026-04-16T01:21:08.276312427Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 01:21:08.277391 containerd[1479]: time="2026-04-16T01:21:08.276330930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:21:08.277391 containerd[1479]: time="2026-04-16T01:21:08.276652526Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:21:08.383622 systemd[1]: run-containerd-runc-k8s.io-dbb383c4c8336e3b27ca733969d85fcf904ffe73e4417691ecd64bb255df2572-runc.JRRqP6.mount: Deactivated successfully. Apr 16 01:21:08.399285 systemd[1]: Started cri-containerd-dbb383c4c8336e3b27ca733969d85fcf904ffe73e4417691ecd64bb255df2572.scope - libcontainer container dbb383c4c8336e3b27ca733969d85fcf904ffe73e4417691ecd64bb255df2572. 
Apr 16 01:21:08.537196 systemd-resolved[1388]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 16 01:21:08.707313 containerd[1479]: time="2026-04-16T01:21:08.706646596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5676f44d56-vdvxw,Uid:9701d0e1-f269-440d-8408-b925ce2f3f61,Namespace:calico-system,Attempt:0,} returns sandbox id \"dbb383c4c8336e3b27ca733969d85fcf904ffe73e4417691ecd64bb255df2572\"" Apr 16 01:21:08.769069 containerd[1479]: time="2026-04-16T01:21:08.768653363Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 16 01:21:09.005075 kernel: calico-node[4119]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 16 01:21:09.415625 systemd-networkd[1385]: cali16e22923766: Gained IPv6LL Apr 16 01:21:10.485970 systemd-networkd[1385]: vxlan.calico: Link UP Apr 16 01:21:10.485980 systemd-networkd[1385]: vxlan.calico: Gained carrier Apr 16 01:21:11.110828 containerd[1479]: time="2026-04-16T01:21:11.110044636Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:21:11.112090 containerd[1479]: time="2026-04-16T01:21:11.111969947Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Apr 16 01:21:11.114533 containerd[1479]: time="2026-04-16T01:21:11.114336865Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:21:11.118568 containerd[1479]: time="2026-04-16T01:21:11.118367771Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:21:11.120113 containerd[1479]: time="2026-04-16T01:21:11.120023907Z" level=info 
msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 2.350953444s" Apr 16 01:21:11.120271 containerd[1479]: time="2026-04-16T01:21:11.120123777Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Apr 16 01:21:11.133093 containerd[1479]: time="2026-04-16T01:21:11.132644878Z" level=info msg="CreateContainer within sandbox \"dbb383c4c8336e3b27ca733969d85fcf904ffe73e4417691ecd64bb255df2572\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 16 01:21:11.161570 containerd[1479]: time="2026-04-16T01:21:11.161476963Z" level=info msg="CreateContainer within sandbox \"dbb383c4c8336e3b27ca733969d85fcf904ffe73e4417691ecd64bb255df2572\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"2a34a200387a977b1c724c32d3c87cc7482bf567d8ae99c71c4a5e800ba8c4f5\"" Apr 16 01:21:11.168038 containerd[1479]: time="2026-04-16T01:21:11.167871607Z" level=info msg="StartContainer for \"2a34a200387a977b1c724c32d3c87cc7482bf567d8ae99c71c4a5e800ba8c4f5\"" Apr 16 01:21:11.264285 systemd[1]: Started cri-containerd-2a34a200387a977b1c724c32d3c87cc7482bf567d8ae99c71c4a5e800ba8c4f5.scope - libcontainer container 2a34a200387a977b1c724c32d3c87cc7482bf567d8ae99c71c4a5e800ba8c4f5. 
Apr 16 01:21:11.342005 containerd[1479]: time="2026-04-16T01:21:11.341461637Z" level=info msg="StartContainer for \"2a34a200387a977b1c724c32d3c87cc7482bf567d8ae99c71c4a5e800ba8c4f5\" returns successfully" Apr 16 01:21:11.345919 containerd[1479]: time="2026-04-16T01:21:11.345813597Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 16 01:21:11.718854 systemd-networkd[1385]: vxlan.calico: Gained IPv6LL Apr 16 01:21:14.736986 containerd[1479]: time="2026-04-16T01:21:14.736873158Z" level=info msg="StopPodSandbox for \"dc960b7b761a84e0f5415d6c72b400955d208003b44ac8de1a397a445a12a53d\"" Apr 16 01:21:14.855173 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1486765979.mount: Deactivated successfully. Apr 16 01:21:14.938974 containerd[1479]: time="2026-04-16T01:21:14.938523926Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:21:14.941818 containerd[1479]: time="2026-04-16T01:21:14.941029463Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Apr 16 01:21:14.943133 containerd[1479]: time="2026-04-16T01:21:14.943102768Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:21:14.953617 containerd[1479]: time="2026-04-16T01:21:14.953498992Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:21:14.954911 containerd[1479]: time="2026-04-16T01:21:14.954170022Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag 
\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 3.608283644s" Apr 16 01:21:14.954911 containerd[1479]: time="2026-04-16T01:21:14.954554072Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Apr 16 01:21:14.976092 containerd[1479]: time="2026-04-16T01:21:14.974436778Z" level=info msg="CreateContainer within sandbox \"dbb383c4c8336e3b27ca733969d85fcf904ffe73e4417691ecd64bb255df2572\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 16 01:21:15.017449 containerd[1479]: time="2026-04-16T01:21:15.017209098Z" level=info msg="CreateContainer within sandbox \"dbb383c4c8336e3b27ca733969d85fcf904ffe73e4417691ecd64bb255df2572\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"adb8b6f4e82fb2bfd9a0dd832f128ae2c3203a1d5caaa1cdf4cfa7dd41b7b35b\"" Apr 16 01:21:15.021879 containerd[1479]: time="2026-04-16T01:21:15.019318625Z" level=info msg="StartContainer for \"adb8b6f4e82fb2bfd9a0dd832f128ae2c3203a1d5caaa1cdf4cfa7dd41b7b35b\"" Apr 16 01:21:15.110251 containerd[1479]: 2026-04-16 01:21:14.994 [INFO][4422] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="dc960b7b761a84e0f5415d6c72b400955d208003b44ac8de1a397a445a12a53d" Apr 16 01:21:15.110251 containerd[1479]: 2026-04-16 01:21:14.995 [INFO][4422] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dc960b7b761a84e0f5415d6c72b400955d208003b44ac8de1a397a445a12a53d" iface="eth0" netns="/var/run/netns/cni-9ed28dfc-c36f-5362-e01f-b93db1d5f0e0" Apr 16 01:21:15.110251 containerd[1479]: 2026-04-16 01:21:14.996 [INFO][4422] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="dc960b7b761a84e0f5415d6c72b400955d208003b44ac8de1a397a445a12a53d" iface="eth0" netns="/var/run/netns/cni-9ed28dfc-c36f-5362-e01f-b93db1d5f0e0" Apr 16 01:21:15.110251 containerd[1479]: 2026-04-16 01:21:14.997 [INFO][4422] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="dc960b7b761a84e0f5415d6c72b400955d208003b44ac8de1a397a445a12a53d" iface="eth0" netns="/var/run/netns/cni-9ed28dfc-c36f-5362-e01f-b93db1d5f0e0" Apr 16 01:21:15.110251 containerd[1479]: 2026-04-16 01:21:14.998 [INFO][4422] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="dc960b7b761a84e0f5415d6c72b400955d208003b44ac8de1a397a445a12a53d" Apr 16 01:21:15.110251 containerd[1479]: 2026-04-16 01:21:14.998 [INFO][4422] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="dc960b7b761a84e0f5415d6c72b400955d208003b44ac8de1a397a445a12a53d" Apr 16 01:21:15.110251 containerd[1479]: 2026-04-16 01:21:15.066 [INFO][4436] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="dc960b7b761a84e0f5415d6c72b400955d208003b44ac8de1a397a445a12a53d" HandleID="k8s-pod-network.dc960b7b761a84e0f5415d6c72b400955d208003b44ac8de1a397a445a12a53d" Workload="localhost-k8s-calico--apiserver--8584db774c--nfctt-eth0" Apr 16 01:21:15.110251 containerd[1479]: 2026-04-16 01:21:15.067 [INFO][4436] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:21:15.110251 containerd[1479]: 2026-04-16 01:21:15.067 [INFO][4436] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 01:21:15.110251 containerd[1479]: 2026-04-16 01:21:15.093 [WARNING][4436] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dc960b7b761a84e0f5415d6c72b400955d208003b44ac8de1a397a445a12a53d" HandleID="k8s-pod-network.dc960b7b761a84e0f5415d6c72b400955d208003b44ac8de1a397a445a12a53d" Workload="localhost-k8s-calico--apiserver--8584db774c--nfctt-eth0" Apr 16 01:21:15.110251 containerd[1479]: 2026-04-16 01:21:15.093 [INFO][4436] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="dc960b7b761a84e0f5415d6c72b400955d208003b44ac8de1a397a445a12a53d" HandleID="k8s-pod-network.dc960b7b761a84e0f5415d6c72b400955d208003b44ac8de1a397a445a12a53d" Workload="localhost-k8s-calico--apiserver--8584db774c--nfctt-eth0" Apr 16 01:21:15.110251 containerd[1479]: 2026-04-16 01:21:15.100 [INFO][4436] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 01:21:15.110251 containerd[1479]: 2026-04-16 01:21:15.105 [INFO][4422] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="dc960b7b761a84e0f5415d6c72b400955d208003b44ac8de1a397a445a12a53d" Apr 16 01:21:15.114462 containerd[1479]: time="2026-04-16T01:21:15.114302765Z" level=info msg="TearDown network for sandbox \"dc960b7b761a84e0f5415d6c72b400955d208003b44ac8de1a397a445a12a53d\" successfully" Apr 16 01:21:15.114462 containerd[1479]: time="2026-04-16T01:21:15.114472421Z" level=info msg="StopPodSandbox for \"dc960b7b761a84e0f5415d6c72b400955d208003b44ac8de1a397a445a12a53d\" returns successfully" Apr 16 01:21:15.116282 systemd[1]: run-netns-cni\x2d9ed28dfc\x2dc36f\x2d5362\x2de01f\x2db93db1d5f0e0.mount: Deactivated successfully. Apr 16 01:21:15.130808 containerd[1479]: time="2026-04-16T01:21:15.129601323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8584db774c-nfctt,Uid:794b6563-cebd-4102-a96c-e20a75901f97,Namespace:calico-system,Attempt:1,}" Apr 16 01:21:15.139401 systemd[1]: run-containerd-runc-k8s.io-adb8b6f4e82fb2bfd9a0dd832f128ae2c3203a1d5caaa1cdf4cfa7dd41b7b35b-runc.EuU62d.mount: Deactivated successfully. 
Apr 16 01:21:15.150621 systemd[1]: Started cri-containerd-adb8b6f4e82fb2bfd9a0dd832f128ae2c3203a1d5caaa1cdf4cfa7dd41b7b35b.scope - libcontainer container adb8b6f4e82fb2bfd9a0dd832f128ae2c3203a1d5caaa1cdf4cfa7dd41b7b35b. Apr 16 01:21:15.319934 containerd[1479]: time="2026-04-16T01:21:15.319569313Z" level=info msg="StartContainer for \"adb8b6f4e82fb2bfd9a0dd832f128ae2c3203a1d5caaa1cdf4cfa7dd41b7b35b\" returns successfully" Apr 16 01:21:15.520502 systemd-networkd[1385]: calie67cbe68a78: Link UP Apr 16 01:21:15.521940 systemd-networkd[1385]: calie67cbe68a78: Gained carrier Apr 16 01:21:15.560259 containerd[1479]: 2026-04-16 01:21:15.326 [INFO][4465] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--8584db774c--nfctt-eth0 calico-apiserver-8584db774c- calico-system 794b6563-cebd-4102-a96c-e20a75901f97 1018 0 2026-04-16 01:20:22 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8584db774c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-8584db774c-nfctt eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calie67cbe68a78 [] [] }} ContainerID="94110af7b9c85f3522b2cad3bb9e9b317bece1a792cf194ea522693485409d4e" Namespace="calico-system" Pod="calico-apiserver-8584db774c-nfctt" WorkloadEndpoint="localhost-k8s-calico--apiserver--8584db774c--nfctt-" Apr 16 01:21:15.560259 containerd[1479]: 2026-04-16 01:21:15.327 [INFO][4465] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="94110af7b9c85f3522b2cad3bb9e9b317bece1a792cf194ea522693485409d4e" Namespace="calico-system" Pod="calico-apiserver-8584db774c-nfctt" WorkloadEndpoint="localhost-k8s-calico--apiserver--8584db774c--nfctt-eth0" Apr 16 01:21:15.560259 containerd[1479]: 2026-04-16 01:21:15.402 [INFO][4493] 
ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="94110af7b9c85f3522b2cad3bb9e9b317bece1a792cf194ea522693485409d4e" HandleID="k8s-pod-network.94110af7b9c85f3522b2cad3bb9e9b317bece1a792cf194ea522693485409d4e" Workload="localhost-k8s-calico--apiserver--8584db774c--nfctt-eth0" Apr 16 01:21:15.560259 containerd[1479]: 2026-04-16 01:21:15.417 [INFO][4493] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="94110af7b9c85f3522b2cad3bb9e9b317bece1a792cf194ea522693485409d4e" HandleID="k8s-pod-network.94110af7b9c85f3522b2cad3bb9e9b317bece1a792cf194ea522693485409d4e" Workload="localhost-k8s-calico--apiserver--8584db774c--nfctt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ee300), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-8584db774c-nfctt", "timestamp":"2026-04-16 01:21:15.402482244 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0005ca6e0)} Apr 16 01:21:15.560259 containerd[1479]: 2026-04-16 01:21:15.417 [INFO][4493] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:21:15.560259 containerd[1479]: 2026-04-16 01:21:15.418 [INFO][4493] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 01:21:15.560259 containerd[1479]: 2026-04-16 01:21:15.418 [INFO][4493] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 16 01:21:15.560259 containerd[1479]: 2026-04-16 01:21:15.427 [INFO][4493] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.94110af7b9c85f3522b2cad3bb9e9b317bece1a792cf194ea522693485409d4e" host="localhost" Apr 16 01:21:15.560259 containerd[1479]: 2026-04-16 01:21:15.445 [INFO][4493] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 16 01:21:15.560259 containerd[1479]: 2026-04-16 01:21:15.460 [INFO][4493] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 16 01:21:15.560259 containerd[1479]: 2026-04-16 01:21:15.467 [INFO][4493] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 16 01:21:15.560259 containerd[1479]: 2026-04-16 01:21:15.473 [INFO][4493] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 16 01:21:15.560259 containerd[1479]: 2026-04-16 01:21:15.473 [INFO][4493] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.94110af7b9c85f3522b2cad3bb9e9b317bece1a792cf194ea522693485409d4e" host="localhost" Apr 16 01:21:15.560259 containerd[1479]: 2026-04-16 01:21:15.478 [INFO][4493] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.94110af7b9c85f3522b2cad3bb9e9b317bece1a792cf194ea522693485409d4e Apr 16 01:21:15.560259 containerd[1479]: 2026-04-16 01:21:15.492 [INFO][4493] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.94110af7b9c85f3522b2cad3bb9e9b317bece1a792cf194ea522693485409d4e" host="localhost" Apr 16 01:21:15.560259 containerd[1479]: 2026-04-16 01:21:15.507 [INFO][4493] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.94110af7b9c85f3522b2cad3bb9e9b317bece1a792cf194ea522693485409d4e" host="localhost" Apr 16 01:21:15.560259 containerd[1479]: 2026-04-16 01:21:15.508 [INFO][4493] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.94110af7b9c85f3522b2cad3bb9e9b317bece1a792cf194ea522693485409d4e" host="localhost" Apr 16 01:21:15.560259 containerd[1479]: 2026-04-16 01:21:15.508 [INFO][4493] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 01:21:15.560259 containerd[1479]: 2026-04-16 01:21:15.508 [INFO][4493] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="94110af7b9c85f3522b2cad3bb9e9b317bece1a792cf194ea522693485409d4e" HandleID="k8s-pod-network.94110af7b9c85f3522b2cad3bb9e9b317bece1a792cf194ea522693485409d4e" Workload="localhost-k8s-calico--apiserver--8584db774c--nfctt-eth0" Apr 16 01:21:15.561153 containerd[1479]: 2026-04-16 01:21:15.514 [INFO][4465] cni-plugin/k8s.go 418: Populated endpoint ContainerID="94110af7b9c85f3522b2cad3bb9e9b317bece1a792cf194ea522693485409d4e" Namespace="calico-system" Pod="calico-apiserver-8584db774c-nfctt" WorkloadEndpoint="localhost-k8s-calico--apiserver--8584db774c--nfctt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8584db774c--nfctt-eth0", GenerateName:"calico-apiserver-8584db774c-", Namespace:"calico-system", SelfLink:"", UID:"794b6563-cebd-4102-a96c-e20a75901f97", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 20, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8584db774c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-8584db774c-nfctt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calie67cbe68a78", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:21:15.561153 containerd[1479]: 2026-04-16 01:21:15.515 [INFO][4465] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="94110af7b9c85f3522b2cad3bb9e9b317bece1a792cf194ea522693485409d4e" Namespace="calico-system" Pod="calico-apiserver-8584db774c-nfctt" WorkloadEndpoint="localhost-k8s-calico--apiserver--8584db774c--nfctt-eth0" Apr 16 01:21:15.561153 containerd[1479]: 2026-04-16 01:21:15.515 [INFO][4465] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie67cbe68a78 ContainerID="94110af7b9c85f3522b2cad3bb9e9b317bece1a792cf194ea522693485409d4e" Namespace="calico-system" Pod="calico-apiserver-8584db774c-nfctt" WorkloadEndpoint="localhost-k8s-calico--apiserver--8584db774c--nfctt-eth0" Apr 16 01:21:15.561153 containerd[1479]: 2026-04-16 01:21:15.520 [INFO][4465] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="94110af7b9c85f3522b2cad3bb9e9b317bece1a792cf194ea522693485409d4e" Namespace="calico-system" Pod="calico-apiserver-8584db774c-nfctt" WorkloadEndpoint="localhost-k8s-calico--apiserver--8584db774c--nfctt-eth0" Apr 16 01:21:15.561153 containerd[1479]: 2026-04-16 01:21:15.522 [INFO][4465] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="94110af7b9c85f3522b2cad3bb9e9b317bece1a792cf194ea522693485409d4e" Namespace="calico-system" Pod="calico-apiserver-8584db774c-nfctt" WorkloadEndpoint="localhost-k8s-calico--apiserver--8584db774c--nfctt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8584db774c--nfctt-eth0", GenerateName:"calico-apiserver-8584db774c-", Namespace:"calico-system", SelfLink:"", UID:"794b6563-cebd-4102-a96c-e20a75901f97", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 20, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8584db774c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"94110af7b9c85f3522b2cad3bb9e9b317bece1a792cf194ea522693485409d4e", Pod:"calico-apiserver-8584db774c-nfctt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calie67cbe68a78", MAC:"a6:0c:d7:d4:39:13", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:21:15.561153 containerd[1479]: 2026-04-16 01:21:15.553 [INFO][4465] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="94110af7b9c85f3522b2cad3bb9e9b317bece1a792cf194ea522693485409d4e" Namespace="calico-system" Pod="calico-apiserver-8584db774c-nfctt" WorkloadEndpoint="localhost-k8s-calico--apiserver--8584db774c--nfctt-eth0" Apr 16 01:21:15.651231 containerd[1479]: time="2026-04-16T01:21:15.650918270Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 01:21:15.651231 containerd[1479]: time="2026-04-16T01:21:15.651011148Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 01:21:15.651231 containerd[1479]: time="2026-04-16T01:21:15.651025087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:21:15.654639 containerd[1479]: time="2026-04-16T01:21:15.653634620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:21:15.726106 systemd[1]: Started cri-containerd-94110af7b9c85f3522b2cad3bb9e9b317bece1a792cf194ea522693485409d4e.scope - libcontainer container 94110af7b9c85f3522b2cad3bb9e9b317bece1a792cf194ea522693485409d4e. 
Apr 16 01:21:15.732975 containerd[1479]: time="2026-04-16T01:21:15.729173825Z" level=info msg="StopPodSandbox for \"21e8ff37ca63ead3bf838e9da5da31fa764cfb74c3028b6482f0b847ac03c49e\"" Apr 16 01:21:15.856576 systemd-resolved[1388]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 16 01:21:15.959554 containerd[1479]: time="2026-04-16T01:21:15.955296500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8584db774c-nfctt,Uid:794b6563-cebd-4102-a96c-e20a75901f97,Namespace:calico-system,Attempt:1,} returns sandbox id \"94110af7b9c85f3522b2cad3bb9e9b317bece1a792cf194ea522693485409d4e\"" Apr 16 01:21:15.980910 containerd[1479]: time="2026-04-16T01:21:15.980828487Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 16 01:21:16.101911 containerd[1479]: 2026-04-16 01:21:15.973 [INFO][4570] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="21e8ff37ca63ead3bf838e9da5da31fa764cfb74c3028b6482f0b847ac03c49e" Apr 16 01:21:16.101911 containerd[1479]: 2026-04-16 01:21:15.973 [INFO][4570] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="21e8ff37ca63ead3bf838e9da5da31fa764cfb74c3028b6482f0b847ac03c49e" iface="eth0" netns="/var/run/netns/cni-2ca1a5b7-cf3a-56b7-f656-9da04a7fe01b" Apr 16 01:21:16.101911 containerd[1479]: 2026-04-16 01:21:15.975 [INFO][4570] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="21e8ff37ca63ead3bf838e9da5da31fa764cfb74c3028b6482f0b847ac03c49e" iface="eth0" netns="/var/run/netns/cni-2ca1a5b7-cf3a-56b7-f656-9da04a7fe01b" Apr 16 01:21:16.101911 containerd[1479]: 2026-04-16 01:21:15.975 [INFO][4570] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="21e8ff37ca63ead3bf838e9da5da31fa764cfb74c3028b6482f0b847ac03c49e" iface="eth0" netns="/var/run/netns/cni-2ca1a5b7-cf3a-56b7-f656-9da04a7fe01b" Apr 16 01:21:16.101911 containerd[1479]: 2026-04-16 01:21:15.978 [INFO][4570] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="21e8ff37ca63ead3bf838e9da5da31fa764cfb74c3028b6482f0b847ac03c49e" Apr 16 01:21:16.101911 containerd[1479]: 2026-04-16 01:21:15.979 [INFO][4570] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="21e8ff37ca63ead3bf838e9da5da31fa764cfb74c3028b6482f0b847ac03c49e" Apr 16 01:21:16.101911 containerd[1479]: 2026-04-16 01:21:16.068 [INFO][4585] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="21e8ff37ca63ead3bf838e9da5da31fa764cfb74c3028b6482f0b847ac03c49e" HandleID="k8s-pod-network.21e8ff37ca63ead3bf838e9da5da31fa764cfb74c3028b6482f0b847ac03c49e" Workload="localhost-k8s-csi--node--driver--b2mtc-eth0" Apr 16 01:21:16.101911 containerd[1479]: 2026-04-16 01:21:16.069 [INFO][4585] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:21:16.101911 containerd[1479]: 2026-04-16 01:21:16.069 [INFO][4585] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 01:21:16.101911 containerd[1479]: 2026-04-16 01:21:16.083 [WARNING][4585] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="21e8ff37ca63ead3bf838e9da5da31fa764cfb74c3028b6482f0b847ac03c49e" HandleID="k8s-pod-network.21e8ff37ca63ead3bf838e9da5da31fa764cfb74c3028b6482f0b847ac03c49e" Workload="localhost-k8s-csi--node--driver--b2mtc-eth0" Apr 16 01:21:16.101911 containerd[1479]: 2026-04-16 01:21:16.083 [INFO][4585] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="21e8ff37ca63ead3bf838e9da5da31fa764cfb74c3028b6482f0b847ac03c49e" HandleID="k8s-pod-network.21e8ff37ca63ead3bf838e9da5da31fa764cfb74c3028b6482f0b847ac03c49e" Workload="localhost-k8s-csi--node--driver--b2mtc-eth0" Apr 16 01:21:16.101911 containerd[1479]: 2026-04-16 01:21:16.094 [INFO][4585] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 01:21:16.101911 containerd[1479]: 2026-04-16 01:21:16.098 [INFO][4570] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="21e8ff37ca63ead3bf838e9da5da31fa764cfb74c3028b6482f0b847ac03c49e" Apr 16 01:21:16.108953 containerd[1479]: time="2026-04-16T01:21:16.102480059Z" level=info msg="TearDown network for sandbox \"21e8ff37ca63ead3bf838e9da5da31fa764cfb74c3028b6482f0b847ac03c49e\" successfully" Apr 16 01:21:16.108953 containerd[1479]: time="2026-04-16T01:21:16.102517929Z" level=info msg="StopPodSandbox for \"21e8ff37ca63ead3bf838e9da5da31fa764cfb74c3028b6482f0b847ac03c49e\" returns successfully" Apr 16 01:21:16.115638 containerd[1479]: time="2026-04-16T01:21:16.115580379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b2mtc,Uid:36e0b1c3-35d3-4f7d-a631-c6ac0e723311,Namespace:calico-system,Attempt:1,}" Apr 16 01:21:16.119653 systemd[1]: run-netns-cni\x2d2ca1a5b7\x2dcf3a\x2d56b7\x2df656\x2d9da04a7fe01b.mount: Deactivated successfully. 
Apr 16 01:21:16.365204 kubelet[2548]: I0416 01:21:16.364898 2548 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/whisker-5676f44d56-vdvxw" podStartSLOduration=4.159544857 podStartE2EDuration="10.364861808s" podCreationTimestamp="2026-04-16 01:21:06 +0000 UTC" firstStartedPulling="2026-04-16 01:21:08.750348766 +0000 UTC m=+70.442483055" lastFinishedPulling="2026-04-16 01:21:14.955665724 +0000 UTC m=+76.647800006" observedRunningTime="2026-04-16 01:21:16.364612599 +0000 UTC m=+78.056746886" watchObservedRunningTime="2026-04-16 01:21:16.364861808 +0000 UTC m=+78.056996098" Apr 16 01:21:16.737900 containerd[1479]: time="2026-04-16T01:21:16.735170730Z" level=info msg="StopPodSandbox for \"a59e0e2c5772fb35384b79900afb83bb329d28e9084a0357b45aa02a43a113e8\"" Apr 16 01:21:16.760857 systemd-networkd[1385]: cali0d98d68a14f: Link UP Apr 16 01:21:16.762529 systemd-networkd[1385]: cali0d98d68a14f: Gained carrier Apr 16 01:21:16.812635 containerd[1479]: 2026-04-16 01:21:16.392 [INFO][4596] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--b2mtc-eth0 csi-node-driver- calico-system 36e0b1c3-35d3-4f7d-a631-c6ac0e723311 1025 0 2026-04-16 01:20:23 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:589b8b8d94 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-b2mtc eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali0d98d68a14f [] [] }} ContainerID="efb5872695abcbc92e456b79354ea473ba0e4eb2df861d651ea31ddb1e616942" Namespace="calico-system" Pod="csi-node-driver-b2mtc" WorkloadEndpoint="localhost-k8s-csi--node--driver--b2mtc-" Apr 16 01:21:16.812635 containerd[1479]: 2026-04-16 01:21:16.394 [INFO][4596] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="efb5872695abcbc92e456b79354ea473ba0e4eb2df861d651ea31ddb1e616942" Namespace="calico-system" Pod="csi-node-driver-b2mtc" WorkloadEndpoint="localhost-k8s-csi--node--driver--b2mtc-eth0" Apr 16 01:21:16.812635 containerd[1479]: 2026-04-16 01:21:16.549 [INFO][4618] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="efb5872695abcbc92e456b79354ea473ba0e4eb2df861d651ea31ddb1e616942" HandleID="k8s-pod-network.efb5872695abcbc92e456b79354ea473ba0e4eb2df861d651ea31ddb1e616942" Workload="localhost-k8s-csi--node--driver--b2mtc-eth0" Apr 16 01:21:16.812635 containerd[1479]: 2026-04-16 01:21:16.569 [INFO][4618] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="efb5872695abcbc92e456b79354ea473ba0e4eb2df861d651ea31ddb1e616942" HandleID="k8s-pod-network.efb5872695abcbc92e456b79354ea473ba0e4eb2df861d651ea31ddb1e616942" Workload="localhost-k8s-csi--node--driver--b2mtc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d2200), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-b2mtc", "timestamp":"2026-04-16 01:21:16.549564278 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00050e000)} Apr 16 01:21:16.812635 containerd[1479]: 2026-04-16 01:21:16.570 [INFO][4618] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:21:16.812635 containerd[1479]: 2026-04-16 01:21:16.570 [INFO][4618] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 01:21:16.812635 containerd[1479]: 2026-04-16 01:21:16.570 [INFO][4618] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 16 01:21:16.812635 containerd[1479]: 2026-04-16 01:21:16.580 [INFO][4618] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.efb5872695abcbc92e456b79354ea473ba0e4eb2df861d651ea31ddb1e616942" host="localhost" Apr 16 01:21:16.812635 containerd[1479]: 2026-04-16 01:21:16.603 [INFO][4618] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 16 01:21:16.812635 containerd[1479]: 2026-04-16 01:21:16.639 [INFO][4618] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 16 01:21:16.812635 containerd[1479]: 2026-04-16 01:21:16.663 [INFO][4618] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 16 01:21:16.812635 containerd[1479]: 2026-04-16 01:21:16.678 [INFO][4618] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 16 01:21:16.812635 containerd[1479]: 2026-04-16 01:21:16.678 [INFO][4618] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.efb5872695abcbc92e456b79354ea473ba0e4eb2df861d651ea31ddb1e616942" host="localhost" Apr 16 01:21:16.812635 containerd[1479]: 2026-04-16 01:21:16.689 [INFO][4618] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.efb5872695abcbc92e456b79354ea473ba0e4eb2df861d651ea31ddb1e616942 Apr 16 01:21:16.812635 containerd[1479]: 2026-04-16 01:21:16.710 [INFO][4618] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.efb5872695abcbc92e456b79354ea473ba0e4eb2df861d651ea31ddb1e616942" host="localhost" Apr 16 01:21:16.812635 containerd[1479]: 2026-04-16 01:21:16.730 [INFO][4618] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.efb5872695abcbc92e456b79354ea473ba0e4eb2df861d651ea31ddb1e616942" host="localhost" Apr 16 01:21:16.812635 containerd[1479]: 2026-04-16 01:21:16.733 [INFO][4618] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.efb5872695abcbc92e456b79354ea473ba0e4eb2df861d651ea31ddb1e616942" host="localhost" Apr 16 01:21:16.812635 containerd[1479]: 2026-04-16 01:21:16.736 [INFO][4618] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 01:21:16.812635 containerd[1479]: 2026-04-16 01:21:16.736 [INFO][4618] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="efb5872695abcbc92e456b79354ea473ba0e4eb2df861d651ea31ddb1e616942" HandleID="k8s-pod-network.efb5872695abcbc92e456b79354ea473ba0e4eb2df861d651ea31ddb1e616942" Workload="localhost-k8s-csi--node--driver--b2mtc-eth0" Apr 16 01:21:16.817038 containerd[1479]: 2026-04-16 01:21:16.741 [INFO][4596] cni-plugin/k8s.go 418: Populated endpoint ContainerID="efb5872695abcbc92e456b79354ea473ba0e4eb2df861d651ea31ddb1e616942" Namespace="calico-system" Pod="csi-node-driver-b2mtc" WorkloadEndpoint="localhost-k8s-csi--node--driver--b2mtc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--b2mtc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"36e0b1c3-35d3-4f7d-a631-c6ac0e723311", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 20, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-b2mtc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0d98d68a14f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:21:16.817038 containerd[1479]: 2026-04-16 01:21:16.742 [INFO][4596] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="efb5872695abcbc92e456b79354ea473ba0e4eb2df861d651ea31ddb1e616942" Namespace="calico-system" Pod="csi-node-driver-b2mtc" WorkloadEndpoint="localhost-k8s-csi--node--driver--b2mtc-eth0" Apr 16 01:21:16.817038 containerd[1479]: 2026-04-16 01:21:16.742 [INFO][4596] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0d98d68a14f ContainerID="efb5872695abcbc92e456b79354ea473ba0e4eb2df861d651ea31ddb1e616942" Namespace="calico-system" Pod="csi-node-driver-b2mtc" WorkloadEndpoint="localhost-k8s-csi--node--driver--b2mtc-eth0" Apr 16 01:21:16.817038 containerd[1479]: 2026-04-16 01:21:16.760 [INFO][4596] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="efb5872695abcbc92e456b79354ea473ba0e4eb2df861d651ea31ddb1e616942" Namespace="calico-system" Pod="csi-node-driver-b2mtc" WorkloadEndpoint="localhost-k8s-csi--node--driver--b2mtc-eth0" Apr 16 01:21:16.817038 containerd[1479]: 2026-04-16 01:21:16.764 [INFO][4596] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="efb5872695abcbc92e456b79354ea473ba0e4eb2df861d651ea31ddb1e616942" 
Namespace="calico-system" Pod="csi-node-driver-b2mtc" WorkloadEndpoint="localhost-k8s-csi--node--driver--b2mtc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--b2mtc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"36e0b1c3-35d3-4f7d-a631-c6ac0e723311", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 20, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"efb5872695abcbc92e456b79354ea473ba0e4eb2df861d651ea31ddb1e616942", Pod:"csi-node-driver-b2mtc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0d98d68a14f", MAC:"02:ba:89:be:36:44", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:21:16.817038 containerd[1479]: 2026-04-16 01:21:16.808 [INFO][4596] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="efb5872695abcbc92e456b79354ea473ba0e4eb2df861d651ea31ddb1e616942" Namespace="calico-system" Pod="csi-node-driver-b2mtc" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--b2mtc-eth0" Apr 16 01:21:16.926987 containerd[1479]: time="2026-04-16T01:21:16.924247192Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 01:21:16.926987 containerd[1479]: time="2026-04-16T01:21:16.924459614Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 01:21:16.926987 containerd[1479]: time="2026-04-16T01:21:16.924475228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:21:16.926987 containerd[1479]: time="2026-04-16T01:21:16.924572779Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:21:17.024177 systemd[1]: Started cri-containerd-efb5872695abcbc92e456b79354ea473ba0e4eb2df861d651ea31ddb1e616942.scope - libcontainer container efb5872695abcbc92e456b79354ea473ba0e4eb2df861d651ea31ddb1e616942. Apr 16 01:21:17.046526 systemd-resolved[1388]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 16 01:21:17.088847 containerd[1479]: 2026-04-16 01:21:16.920 [INFO][4639] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a59e0e2c5772fb35384b79900afb83bb329d28e9084a0357b45aa02a43a113e8" Apr 16 01:21:17.088847 containerd[1479]: 2026-04-16 01:21:16.921 [INFO][4639] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a59e0e2c5772fb35384b79900afb83bb329d28e9084a0357b45aa02a43a113e8" iface="eth0" netns="/var/run/netns/cni-9f6afa25-a975-d878-9149-b57c723a6800" Apr 16 01:21:17.088847 containerd[1479]: 2026-04-16 01:21:16.926 [INFO][4639] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="a59e0e2c5772fb35384b79900afb83bb329d28e9084a0357b45aa02a43a113e8" iface="eth0" netns="/var/run/netns/cni-9f6afa25-a975-d878-9149-b57c723a6800" Apr 16 01:21:17.088847 containerd[1479]: 2026-04-16 01:21:16.928 [INFO][4639] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a59e0e2c5772fb35384b79900afb83bb329d28e9084a0357b45aa02a43a113e8" iface="eth0" netns="/var/run/netns/cni-9f6afa25-a975-d878-9149-b57c723a6800" Apr 16 01:21:17.088847 containerd[1479]: 2026-04-16 01:21:16.930 [INFO][4639] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a59e0e2c5772fb35384b79900afb83bb329d28e9084a0357b45aa02a43a113e8" Apr 16 01:21:17.088847 containerd[1479]: 2026-04-16 01:21:16.931 [INFO][4639] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a59e0e2c5772fb35384b79900afb83bb329d28e9084a0357b45aa02a43a113e8" Apr 16 01:21:17.088847 containerd[1479]: 2026-04-16 01:21:17.056 [INFO][4679] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a59e0e2c5772fb35384b79900afb83bb329d28e9084a0357b45aa02a43a113e8" HandleID="k8s-pod-network.a59e0e2c5772fb35384b79900afb83bb329d28e9084a0357b45aa02a43a113e8" Workload="localhost-k8s-calico--kube--controllers--67f84cb8bf--5s4sf-eth0" Apr 16 01:21:17.088847 containerd[1479]: 2026-04-16 01:21:17.057 [INFO][4679] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:21:17.088847 containerd[1479]: 2026-04-16 01:21:17.057 [INFO][4679] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 01:21:17.088847 containerd[1479]: 2026-04-16 01:21:17.074 [WARNING][4679] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a59e0e2c5772fb35384b79900afb83bb329d28e9084a0357b45aa02a43a113e8" HandleID="k8s-pod-network.a59e0e2c5772fb35384b79900afb83bb329d28e9084a0357b45aa02a43a113e8" Workload="localhost-k8s-calico--kube--controllers--67f84cb8bf--5s4sf-eth0" Apr 16 01:21:17.088847 containerd[1479]: 2026-04-16 01:21:17.074 [INFO][4679] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a59e0e2c5772fb35384b79900afb83bb329d28e9084a0357b45aa02a43a113e8" HandleID="k8s-pod-network.a59e0e2c5772fb35384b79900afb83bb329d28e9084a0357b45aa02a43a113e8" Workload="localhost-k8s-calico--kube--controllers--67f84cb8bf--5s4sf-eth0" Apr 16 01:21:17.088847 containerd[1479]: 2026-04-16 01:21:17.079 [INFO][4679] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 01:21:17.088847 containerd[1479]: 2026-04-16 01:21:17.083 [INFO][4639] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a59e0e2c5772fb35384b79900afb83bb329d28e9084a0357b45aa02a43a113e8" Apr 16 01:21:17.090469 containerd[1479]: time="2026-04-16T01:21:17.090144953Z" level=info msg="TearDown network for sandbox \"a59e0e2c5772fb35384b79900afb83bb329d28e9084a0357b45aa02a43a113e8\" successfully" Apr 16 01:21:17.090469 containerd[1479]: time="2026-04-16T01:21:17.090238858Z" level=info msg="StopPodSandbox for \"a59e0e2c5772fb35384b79900afb83bb329d28e9084a0357b45aa02a43a113e8\" returns successfully" Apr 16 01:21:17.095002 systemd-networkd[1385]: calie67cbe68a78: Gained IPv6LL Apr 16 01:21:17.102492 systemd[1]: run-netns-cni\x2d9f6afa25\x2da975\x2dd878\x2d9149\x2db57c723a6800.mount: Deactivated successfully. 
Apr 16 01:21:17.110180 containerd[1479]: time="2026-04-16T01:21:17.110130027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67f84cb8bf-5s4sf,Uid:822d1a89-65ea-4c5d-be46-0a8d4c12be69,Namespace:calico-system,Attempt:1,}" Apr 16 01:21:17.111603 containerd[1479]: time="2026-04-16T01:21:17.111582679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b2mtc,Uid:36e0b1c3-35d3-4f7d-a631-c6ac0e723311,Namespace:calico-system,Attempt:1,} returns sandbox id \"efb5872695abcbc92e456b79354ea473ba0e4eb2df861d651ea31ddb1e616942\"" Apr 16 01:21:17.604086 systemd-networkd[1385]: caliefc81431c55: Link UP Apr 16 01:21:17.605445 systemd-networkd[1385]: caliefc81431c55: Gained carrier Apr 16 01:21:17.648905 containerd[1479]: 2026-04-16 01:21:17.253 [INFO][4717] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--67f84cb8bf--5s4sf-eth0 calico-kube-controllers-67f84cb8bf- calico-system 822d1a89-65ea-4c5d-be46-0a8d4c12be69 1039 0 2026-04-16 01:20:24 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:67f84cb8bf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-67f84cb8bf-5s4sf eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliefc81431c55 [] [] }} ContainerID="96f3f9db0b3647685b3de2259854bd6f99e7e790e2fc21e772a569c85126385c" Namespace="calico-system" Pod="calico-kube-controllers-67f84cb8bf-5s4sf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67f84cb8bf--5s4sf-" Apr 16 01:21:17.648905 containerd[1479]: 2026-04-16 01:21:17.254 [INFO][4717] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="96f3f9db0b3647685b3de2259854bd6f99e7e790e2fc21e772a569c85126385c" 
Namespace="calico-system" Pod="calico-kube-controllers-67f84cb8bf-5s4sf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67f84cb8bf--5s4sf-eth0" Apr 16 01:21:17.648905 containerd[1479]: 2026-04-16 01:21:17.464 [INFO][4732] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="96f3f9db0b3647685b3de2259854bd6f99e7e790e2fc21e772a569c85126385c" HandleID="k8s-pod-network.96f3f9db0b3647685b3de2259854bd6f99e7e790e2fc21e772a569c85126385c" Workload="localhost-k8s-calico--kube--controllers--67f84cb8bf--5s4sf-eth0" Apr 16 01:21:17.648905 containerd[1479]: 2026-04-16 01:21:17.487 [INFO][4732] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="96f3f9db0b3647685b3de2259854bd6f99e7e790e2fc21e772a569c85126385c" HandleID="k8s-pod-network.96f3f9db0b3647685b3de2259854bd6f99e7e790e2fc21e772a569c85126385c" Workload="localhost-k8s-calico--kube--controllers--67f84cb8bf--5s4sf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004ec60), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-67f84cb8bf-5s4sf", "timestamp":"2026-04-16 01:21:17.464646808 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000186f20)} Apr 16 01:21:17.648905 containerd[1479]: 2026-04-16 01:21:17.487 [INFO][4732] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:21:17.648905 containerd[1479]: 2026-04-16 01:21:17.488 [INFO][4732] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 01:21:17.648905 containerd[1479]: 2026-04-16 01:21:17.488 [INFO][4732] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 16 01:21:17.648905 containerd[1479]: 2026-04-16 01:21:17.496 [INFO][4732] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.96f3f9db0b3647685b3de2259854bd6f99e7e790e2fc21e772a569c85126385c" host="localhost" Apr 16 01:21:17.648905 containerd[1479]: 2026-04-16 01:21:17.522 [INFO][4732] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 16 01:21:17.648905 containerd[1479]: 2026-04-16 01:21:17.538 [INFO][4732] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 16 01:21:17.648905 containerd[1479]: 2026-04-16 01:21:17.546 [INFO][4732] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 16 01:21:17.648905 containerd[1479]: 2026-04-16 01:21:17.552 [INFO][4732] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 16 01:21:17.648905 containerd[1479]: 2026-04-16 01:21:17.552 [INFO][4732] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.96f3f9db0b3647685b3de2259854bd6f99e7e790e2fc21e772a569c85126385c" host="localhost" Apr 16 01:21:17.648905 containerd[1479]: 2026-04-16 01:21:17.560 [INFO][4732] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.96f3f9db0b3647685b3de2259854bd6f99e7e790e2fc21e772a569c85126385c Apr 16 01:21:17.648905 containerd[1479]: 2026-04-16 01:21:17.567 [INFO][4732] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.96f3f9db0b3647685b3de2259854bd6f99e7e790e2fc21e772a569c85126385c" host="localhost" Apr 16 01:21:17.648905 containerd[1479]: 2026-04-16 01:21:17.582 [INFO][4732] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.96f3f9db0b3647685b3de2259854bd6f99e7e790e2fc21e772a569c85126385c" host="localhost" Apr 16 01:21:17.648905 containerd[1479]: 2026-04-16 01:21:17.583 [INFO][4732] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.96f3f9db0b3647685b3de2259854bd6f99e7e790e2fc21e772a569c85126385c" host="localhost" Apr 16 01:21:17.648905 containerd[1479]: 2026-04-16 01:21:17.583 [INFO][4732] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 01:21:17.648905 containerd[1479]: 2026-04-16 01:21:17.583 [INFO][4732] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="96f3f9db0b3647685b3de2259854bd6f99e7e790e2fc21e772a569c85126385c" HandleID="k8s-pod-network.96f3f9db0b3647685b3de2259854bd6f99e7e790e2fc21e772a569c85126385c" Workload="localhost-k8s-calico--kube--controllers--67f84cb8bf--5s4sf-eth0" Apr 16 01:21:17.653841 containerd[1479]: 2026-04-16 01:21:17.591 [INFO][4717] cni-plugin/k8s.go 418: Populated endpoint ContainerID="96f3f9db0b3647685b3de2259854bd6f99e7e790e2fc21e772a569c85126385c" Namespace="calico-system" Pod="calico-kube-controllers-67f84cb8bf-5s4sf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67f84cb8bf--5s4sf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--67f84cb8bf--5s4sf-eth0", GenerateName:"calico-kube-controllers-67f84cb8bf-", Namespace:"calico-system", SelfLink:"", UID:"822d1a89-65ea-4c5d-be46-0a8d4c12be69", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 20, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67f84cb8bf", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-67f84cb8bf-5s4sf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliefc81431c55", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:21:17.653841 containerd[1479]: 2026-04-16 01:21:17.592 [INFO][4717] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="96f3f9db0b3647685b3de2259854bd6f99e7e790e2fc21e772a569c85126385c" Namespace="calico-system" Pod="calico-kube-controllers-67f84cb8bf-5s4sf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67f84cb8bf--5s4sf-eth0" Apr 16 01:21:17.653841 containerd[1479]: 2026-04-16 01:21:17.592 [INFO][4717] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliefc81431c55 ContainerID="96f3f9db0b3647685b3de2259854bd6f99e7e790e2fc21e772a569c85126385c" Namespace="calico-system" Pod="calico-kube-controllers-67f84cb8bf-5s4sf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67f84cb8bf--5s4sf-eth0" Apr 16 01:21:17.653841 containerd[1479]: 2026-04-16 01:21:17.604 [INFO][4717] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="96f3f9db0b3647685b3de2259854bd6f99e7e790e2fc21e772a569c85126385c" Namespace="calico-system" Pod="calico-kube-controllers-67f84cb8bf-5s4sf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67f84cb8bf--5s4sf-eth0" Apr 16 01:21:17.653841 containerd[1479]: 
2026-04-16 01:21:17.605 [INFO][4717] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="96f3f9db0b3647685b3de2259854bd6f99e7e790e2fc21e772a569c85126385c" Namespace="calico-system" Pod="calico-kube-controllers-67f84cb8bf-5s4sf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67f84cb8bf--5s4sf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--67f84cb8bf--5s4sf-eth0", GenerateName:"calico-kube-controllers-67f84cb8bf-", Namespace:"calico-system", SelfLink:"", UID:"822d1a89-65ea-4c5d-be46-0a8d4c12be69", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 20, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67f84cb8bf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"96f3f9db0b3647685b3de2259854bd6f99e7e790e2fc21e772a569c85126385c", Pod:"calico-kube-controllers-67f84cb8bf-5s4sf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliefc81431c55", MAC:"5e:67:71:cc:7c:a7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:21:17.653841 containerd[1479]: 
2026-04-16 01:21:17.642 [INFO][4717] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="96f3f9db0b3647685b3de2259854bd6f99e7e790e2fc21e772a569c85126385c" Namespace="calico-system" Pod="calico-kube-controllers-67f84cb8bf-5s4sf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67f84cb8bf--5s4sf-eth0" Apr 16 01:21:17.754458 containerd[1479]: time="2026-04-16T01:21:17.751210578Z" level=info msg="StopPodSandbox for \"111469dd43a8e2e72a5279cc7f65b1f9f8492045f8f2fce24cbc2f3376245b3b\"" Apr 16 01:21:17.814434 containerd[1479]: time="2026-04-16T01:21:17.811658505Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 01:21:17.814434 containerd[1479]: time="2026-04-16T01:21:17.811863687Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 01:21:17.814434 containerd[1479]: time="2026-04-16T01:21:17.811882239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:21:17.814434 containerd[1479]: time="2026-04-16T01:21:17.813467791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:21:17.906247 systemd[1]: Started cri-containerd-96f3f9db0b3647685b3de2259854bd6f99e7e790e2fc21e772a569c85126385c.scope - libcontainer container 96f3f9db0b3647685b3de2259854bd6f99e7e790e2fc21e772a569c85126385c. Apr 16 01:21:18.003607 systemd-resolved[1388]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 16 01:21:18.027149 systemd[1]: run-containerd-runc-k8s.io-96f3f9db0b3647685b3de2259854bd6f99e7e790e2fc21e772a569c85126385c-runc.fh4Lsh.mount: Deactivated successfully. 
Apr 16 01:21:18.175208 containerd[1479]: time="2026-04-16T01:21:18.172919242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67f84cb8bf-5s4sf,Uid:822d1a89-65ea-4c5d-be46-0a8d4c12be69,Namespace:calico-system,Attempt:1,} returns sandbox id \"96f3f9db0b3647685b3de2259854bd6f99e7e790e2fc21e772a569c85126385c\"" Apr 16 01:21:18.336249 containerd[1479]: 2026-04-16 01:21:18.091 [INFO][4779] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="111469dd43a8e2e72a5279cc7f65b1f9f8492045f8f2fce24cbc2f3376245b3b" Apr 16 01:21:18.336249 containerd[1479]: 2026-04-16 01:21:18.094 [INFO][4779] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="111469dd43a8e2e72a5279cc7f65b1f9f8492045f8f2fce24cbc2f3376245b3b" iface="eth0" netns="/var/run/netns/cni-3e20afec-6a26-b6cf-df73-63cc81b5b821" Apr 16 01:21:18.336249 containerd[1479]: 2026-04-16 01:21:18.098 [INFO][4779] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="111469dd43a8e2e72a5279cc7f65b1f9f8492045f8f2fce24cbc2f3376245b3b" iface="eth0" netns="/var/run/netns/cni-3e20afec-6a26-b6cf-df73-63cc81b5b821" Apr 16 01:21:18.336249 containerd[1479]: 2026-04-16 01:21:18.115 [INFO][4779] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="111469dd43a8e2e72a5279cc7f65b1f9f8492045f8f2fce24cbc2f3376245b3b" iface="eth0" netns="/var/run/netns/cni-3e20afec-6a26-b6cf-df73-63cc81b5b821" Apr 16 01:21:18.336249 containerd[1479]: 2026-04-16 01:21:18.116 [INFO][4779] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="111469dd43a8e2e72a5279cc7f65b1f9f8492045f8f2fce24cbc2f3376245b3b" Apr 16 01:21:18.336249 containerd[1479]: 2026-04-16 01:21:18.116 [INFO][4779] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="111469dd43a8e2e72a5279cc7f65b1f9f8492045f8f2fce24cbc2f3376245b3b" Apr 16 01:21:18.336249 containerd[1479]: 2026-04-16 01:21:18.238 [INFO][4812] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="111469dd43a8e2e72a5279cc7f65b1f9f8492045f8f2fce24cbc2f3376245b3b" HandleID="k8s-pod-network.111469dd43a8e2e72a5279cc7f65b1f9f8492045f8f2fce24cbc2f3376245b3b" Workload="localhost-k8s-calico--apiserver--8584db774c--7qbkw-eth0" Apr 16 01:21:18.336249 containerd[1479]: 2026-04-16 01:21:18.240 [INFO][4812] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:21:18.336249 containerd[1479]: 2026-04-16 01:21:18.241 [INFO][4812] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 01:21:18.336249 containerd[1479]: 2026-04-16 01:21:18.308 [WARNING][4812] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="111469dd43a8e2e72a5279cc7f65b1f9f8492045f8f2fce24cbc2f3376245b3b" HandleID="k8s-pod-network.111469dd43a8e2e72a5279cc7f65b1f9f8492045f8f2fce24cbc2f3376245b3b" Workload="localhost-k8s-calico--apiserver--8584db774c--7qbkw-eth0" Apr 16 01:21:18.336249 containerd[1479]: 2026-04-16 01:21:18.308 [INFO][4812] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="111469dd43a8e2e72a5279cc7f65b1f9f8492045f8f2fce24cbc2f3376245b3b" HandleID="k8s-pod-network.111469dd43a8e2e72a5279cc7f65b1f9f8492045f8f2fce24cbc2f3376245b3b" Workload="localhost-k8s-calico--apiserver--8584db774c--7qbkw-eth0" Apr 16 01:21:18.336249 containerd[1479]: 2026-04-16 01:21:18.320 [INFO][4812] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 01:21:18.336249 containerd[1479]: 2026-04-16 01:21:18.326 [INFO][4779] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="111469dd43a8e2e72a5279cc7f65b1f9f8492045f8f2fce24cbc2f3376245b3b" Apr 16 01:21:18.341155 containerd[1479]: time="2026-04-16T01:21:18.337227398Z" level=info msg="TearDown network for sandbox \"111469dd43a8e2e72a5279cc7f65b1f9f8492045f8f2fce24cbc2f3376245b3b\" successfully" Apr 16 01:21:18.341155 containerd[1479]: time="2026-04-16T01:21:18.337251093Z" level=info msg="StopPodSandbox for \"111469dd43a8e2e72a5279cc7f65b1f9f8492045f8f2fce24cbc2f3376245b3b\" returns successfully" Apr 16 01:21:18.343057 systemd[1]: run-netns-cni\x2d3e20afec\x2d6a26\x2db6cf\x2ddf73\x2d63cc81b5b821.mount: Deactivated successfully. 
Apr 16 01:21:18.347441 containerd[1479]: time="2026-04-16T01:21:18.346926569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8584db774c-7qbkw,Uid:b5113220-c80b-4e1b-afea-3fb7f3d652bc,Namespace:calico-system,Attempt:1,}" Apr 16 01:21:18.505546 systemd-networkd[1385]: cali0d98d68a14f: Gained IPv6LL Apr 16 01:21:18.739495 kubelet[2548]: E0416 01:21:18.738810 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:21:19.017479 systemd-networkd[1385]: cali727e0980222: Link UP Apr 16 01:21:19.018218 systemd-networkd[1385]: cali727e0980222: Gained carrier Apr 16 01:21:19.063531 containerd[1479]: 2026-04-16 01:21:18.616 [INFO][4826] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--8584db774c--7qbkw-eth0 calico-apiserver-8584db774c- calico-system b5113220-c80b-4e1b-afea-3fb7f3d652bc 1047 0 2026-04-16 01:20:22 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8584db774c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-8584db774c-7qbkw eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali727e0980222 [] [] }} ContainerID="bebdc4a3dd026d34b0ab1bad0cc1d6a39714fbd3072cfb70f3d0a0e0882b732b" Namespace="calico-system" Pod="calico-apiserver-8584db774c-7qbkw" WorkloadEndpoint="localhost-k8s-calico--apiserver--8584db774c--7qbkw-" Apr 16 01:21:19.063531 containerd[1479]: 2026-04-16 01:21:18.616 [INFO][4826] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bebdc4a3dd026d34b0ab1bad0cc1d6a39714fbd3072cfb70f3d0a0e0882b732b" Namespace="calico-system" Pod="calico-apiserver-8584db774c-7qbkw" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--8584db774c--7qbkw-eth0" Apr 16 01:21:19.063531 containerd[1479]: 2026-04-16 01:21:18.858 [INFO][4843] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bebdc4a3dd026d34b0ab1bad0cc1d6a39714fbd3072cfb70f3d0a0e0882b732b" HandleID="k8s-pod-network.bebdc4a3dd026d34b0ab1bad0cc1d6a39714fbd3072cfb70f3d0a0e0882b732b" Workload="localhost-k8s-calico--apiserver--8584db774c--7qbkw-eth0" Apr 16 01:21:19.063531 containerd[1479]: 2026-04-16 01:21:18.873 [INFO][4843] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="bebdc4a3dd026d34b0ab1bad0cc1d6a39714fbd3072cfb70f3d0a0e0882b732b" HandleID="k8s-pod-network.bebdc4a3dd026d34b0ab1bad0cc1d6a39714fbd3072cfb70f3d0a0e0882b732b" Workload="localhost-k8s-calico--apiserver--8584db774c--7qbkw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000378200), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-8584db774c-7qbkw", "timestamp":"2026-04-16 01:21:18.858012386 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00021a000)} Apr 16 01:21:19.063531 containerd[1479]: 2026-04-16 01:21:18.876 [INFO][4843] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:21:19.063531 containerd[1479]: 2026-04-16 01:21:18.876 [INFO][4843] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 01:21:19.063531 containerd[1479]: 2026-04-16 01:21:18.876 [INFO][4843] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 16 01:21:19.063531 containerd[1479]: 2026-04-16 01:21:18.888 [INFO][4843] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.bebdc4a3dd026d34b0ab1bad0cc1d6a39714fbd3072cfb70f3d0a0e0882b732b" host="localhost" Apr 16 01:21:19.063531 containerd[1479]: 2026-04-16 01:21:18.910 [INFO][4843] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 16 01:21:19.063531 containerd[1479]: 2026-04-16 01:21:18.930 [INFO][4843] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 16 01:21:19.063531 containerd[1479]: 2026-04-16 01:21:18.940 [INFO][4843] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 16 01:21:19.063531 containerd[1479]: 2026-04-16 01:21:18.950 [INFO][4843] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 16 01:21:19.063531 containerd[1479]: 2026-04-16 01:21:18.950 [INFO][4843] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bebdc4a3dd026d34b0ab1bad0cc1d6a39714fbd3072cfb70f3d0a0e0882b732b" host="localhost" Apr 16 01:21:19.063531 containerd[1479]: 2026-04-16 01:21:18.961 [INFO][4843] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.bebdc4a3dd026d34b0ab1bad0cc1d6a39714fbd3072cfb70f3d0a0e0882b732b Apr 16 01:21:19.063531 containerd[1479]: 2026-04-16 01:21:18.974 [INFO][4843] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bebdc4a3dd026d34b0ab1bad0cc1d6a39714fbd3072cfb70f3d0a0e0882b732b" host="localhost" Apr 16 01:21:19.063531 containerd[1479]: 2026-04-16 01:21:18.993 [INFO][4843] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.bebdc4a3dd026d34b0ab1bad0cc1d6a39714fbd3072cfb70f3d0a0e0882b732b" host="localhost" Apr 16 01:21:19.063531 containerd[1479]: 2026-04-16 01:21:18.993 [INFO][4843] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.bebdc4a3dd026d34b0ab1bad0cc1d6a39714fbd3072cfb70f3d0a0e0882b732b" host="localhost" Apr 16 01:21:19.063531 containerd[1479]: 2026-04-16 01:21:18.993 [INFO][4843] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 01:21:19.063531 containerd[1479]: 2026-04-16 01:21:18.993 [INFO][4843] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="bebdc4a3dd026d34b0ab1bad0cc1d6a39714fbd3072cfb70f3d0a0e0882b732b" HandleID="k8s-pod-network.bebdc4a3dd026d34b0ab1bad0cc1d6a39714fbd3072cfb70f3d0a0e0882b732b" Workload="localhost-k8s-calico--apiserver--8584db774c--7qbkw-eth0" Apr 16 01:21:19.068957 containerd[1479]: 2026-04-16 01:21:19.001 [INFO][4826] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bebdc4a3dd026d34b0ab1bad0cc1d6a39714fbd3072cfb70f3d0a0e0882b732b" Namespace="calico-system" Pod="calico-apiserver-8584db774c-7qbkw" WorkloadEndpoint="localhost-k8s-calico--apiserver--8584db774c--7qbkw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8584db774c--7qbkw-eth0", GenerateName:"calico-apiserver-8584db774c-", Namespace:"calico-system", SelfLink:"", UID:"b5113220-c80b-4e1b-afea-3fb7f3d652bc", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 20, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8584db774c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-8584db774c-7qbkw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali727e0980222", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:21:19.068957 containerd[1479]: 2026-04-16 01:21:19.006 [INFO][4826] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="bebdc4a3dd026d34b0ab1bad0cc1d6a39714fbd3072cfb70f3d0a0e0882b732b" Namespace="calico-system" Pod="calico-apiserver-8584db774c-7qbkw" WorkloadEndpoint="localhost-k8s-calico--apiserver--8584db774c--7qbkw-eth0" Apr 16 01:21:19.068957 containerd[1479]: 2026-04-16 01:21:19.006 [INFO][4826] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali727e0980222 ContainerID="bebdc4a3dd026d34b0ab1bad0cc1d6a39714fbd3072cfb70f3d0a0e0882b732b" Namespace="calico-system" Pod="calico-apiserver-8584db774c-7qbkw" WorkloadEndpoint="localhost-k8s-calico--apiserver--8584db774c--7qbkw-eth0" Apr 16 01:21:19.068957 containerd[1479]: 2026-04-16 01:21:19.015 [INFO][4826] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bebdc4a3dd026d34b0ab1bad0cc1d6a39714fbd3072cfb70f3d0a0e0882b732b" Namespace="calico-system" Pod="calico-apiserver-8584db774c-7qbkw" WorkloadEndpoint="localhost-k8s-calico--apiserver--8584db774c--7qbkw-eth0" Apr 16 01:21:19.068957 containerd[1479]: 2026-04-16 01:21:19.020 [INFO][4826] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="bebdc4a3dd026d34b0ab1bad0cc1d6a39714fbd3072cfb70f3d0a0e0882b732b" Namespace="calico-system" Pod="calico-apiserver-8584db774c-7qbkw" WorkloadEndpoint="localhost-k8s-calico--apiserver--8584db774c--7qbkw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8584db774c--7qbkw-eth0", GenerateName:"calico-apiserver-8584db774c-", Namespace:"calico-system", SelfLink:"", UID:"b5113220-c80b-4e1b-afea-3fb7f3d652bc", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 20, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8584db774c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bebdc4a3dd026d34b0ab1bad0cc1d6a39714fbd3072cfb70f3d0a0e0882b732b", Pod:"calico-apiserver-8584db774c-7qbkw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali727e0980222", MAC:"aa:7e:d4:b0:34:8f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:21:19.068957 containerd[1479]: 2026-04-16 01:21:19.058 [INFO][4826] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="bebdc4a3dd026d34b0ab1bad0cc1d6a39714fbd3072cfb70f3d0a0e0882b732b" Namespace="calico-system" Pod="calico-apiserver-8584db774c-7qbkw" WorkloadEndpoint="localhost-k8s-calico--apiserver--8584db774c--7qbkw-eth0" Apr 16 01:21:19.139855 containerd[1479]: time="2026-04-16T01:21:19.138452347Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 01:21:19.139855 containerd[1479]: time="2026-04-16T01:21:19.138532435Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 01:21:19.139855 containerd[1479]: time="2026-04-16T01:21:19.138544605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:21:19.139855 containerd[1479]: time="2026-04-16T01:21:19.138620300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:21:19.203934 systemd[1]: Started cri-containerd-bebdc4a3dd026d34b0ab1bad0cc1d6a39714fbd3072cfb70f3d0a0e0882b732b.scope - libcontainer container bebdc4a3dd026d34b0ab1bad0cc1d6a39714fbd3072cfb70f3d0a0e0882b732b. 
Apr 16 01:21:19.238819 systemd-resolved[1388]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 16 01:21:19.304412 containerd[1479]: time="2026-04-16T01:21:19.302465350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8584db774c-7qbkw,Uid:b5113220-c80b-4e1b-afea-3fb7f3d652bc,Namespace:calico-system,Attempt:1,} returns sandbox id \"bebdc4a3dd026d34b0ab1bad0cc1d6a39714fbd3072cfb70f3d0a0e0882b732b\"" Apr 16 01:21:19.398522 systemd-networkd[1385]: caliefc81431c55: Gained IPv6LL Apr 16 01:21:19.733821 containerd[1479]: time="2026-04-16T01:21:19.729130671Z" level=info msg="StopPodSandbox for \"bb93fe724c151bea4284b0981e0457adce75bc215496568b9214d3839e19a642\"" Apr 16 01:21:19.733821 containerd[1479]: time="2026-04-16T01:21:19.729438224Z" level=info msg="StopPodSandbox for \"464e17545cf496fb669888aabb34a1a43a7a42ae4211455463950faf7373af64\"" Apr 16 01:21:19.733821 containerd[1479]: time="2026-04-16T01:21:19.729931482Z" level=info msg="StopPodSandbox for \"d0f609cf62ba2451b793e0f169919cf887db6d1e23320a4807fe5b50180f1640\"" Apr 16 01:21:20.061560 containerd[1479]: 2026-04-16 01:21:19.943 [INFO][4950] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="464e17545cf496fb669888aabb34a1a43a7a42ae4211455463950faf7373af64" Apr 16 01:21:20.061560 containerd[1479]: 2026-04-16 01:21:19.948 [INFO][4950] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="464e17545cf496fb669888aabb34a1a43a7a42ae4211455463950faf7373af64" iface="eth0" netns="/var/run/netns/cni-d7624b7f-aed4-1a41-74a5-8658f2bccca5" Apr 16 01:21:20.061560 containerd[1479]: 2026-04-16 01:21:19.950 [INFO][4950] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="464e17545cf496fb669888aabb34a1a43a7a42ae4211455463950faf7373af64" iface="eth0" netns="/var/run/netns/cni-d7624b7f-aed4-1a41-74a5-8658f2bccca5" Apr 16 01:21:20.061560 containerd[1479]: 2026-04-16 01:21:19.952 [INFO][4950] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="464e17545cf496fb669888aabb34a1a43a7a42ae4211455463950faf7373af64" iface="eth0" netns="/var/run/netns/cni-d7624b7f-aed4-1a41-74a5-8658f2bccca5" Apr 16 01:21:20.061560 containerd[1479]: 2026-04-16 01:21:19.953 [INFO][4950] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="464e17545cf496fb669888aabb34a1a43a7a42ae4211455463950faf7373af64" Apr 16 01:21:20.061560 containerd[1479]: 2026-04-16 01:21:19.953 [INFO][4950] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="464e17545cf496fb669888aabb34a1a43a7a42ae4211455463950faf7373af64" Apr 16 01:21:20.061560 containerd[1479]: 2026-04-16 01:21:20.018 [INFO][4983] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="464e17545cf496fb669888aabb34a1a43a7a42ae4211455463950faf7373af64" HandleID="k8s-pod-network.464e17545cf496fb669888aabb34a1a43a7a42ae4211455463950faf7373af64" Workload="localhost-k8s-coredns--7d764666f9--kv5bx-eth0" Apr 16 01:21:20.061560 containerd[1479]: 2026-04-16 01:21:20.022 [INFO][4983] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:21:20.061560 containerd[1479]: 2026-04-16 01:21:20.022 [INFO][4983] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 01:21:20.061560 containerd[1479]: 2026-04-16 01:21:20.035 [WARNING][4983] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="464e17545cf496fb669888aabb34a1a43a7a42ae4211455463950faf7373af64" HandleID="k8s-pod-network.464e17545cf496fb669888aabb34a1a43a7a42ae4211455463950faf7373af64" Workload="localhost-k8s-coredns--7d764666f9--kv5bx-eth0" Apr 16 01:21:20.061560 containerd[1479]: 2026-04-16 01:21:20.036 [INFO][4983] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="464e17545cf496fb669888aabb34a1a43a7a42ae4211455463950faf7373af64" HandleID="k8s-pod-network.464e17545cf496fb669888aabb34a1a43a7a42ae4211455463950faf7373af64" Workload="localhost-k8s-coredns--7d764666f9--kv5bx-eth0" Apr 16 01:21:20.061560 containerd[1479]: 2026-04-16 01:21:20.038 [INFO][4983] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 01:21:20.061560 containerd[1479]: 2026-04-16 01:21:20.041 [INFO][4950] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="464e17545cf496fb669888aabb34a1a43a7a42ae4211455463950faf7373af64" Apr 16 01:21:20.064190 containerd[1479]: time="2026-04-16T01:21:20.062186028Z" level=info msg="TearDown network for sandbox \"464e17545cf496fb669888aabb34a1a43a7a42ae4211455463950faf7373af64\" successfully" Apr 16 01:21:20.064190 containerd[1479]: time="2026-04-16T01:21:20.062219381Z" level=info msg="StopPodSandbox for \"464e17545cf496fb669888aabb34a1a43a7a42ae4211455463950faf7373af64\" returns successfully" Apr 16 01:21:20.064263 systemd[1]: run-netns-cni\x2dd7624b7f\x2daed4\x2d1a41\x2d74a5\x2d8658f2bccca5.mount: Deactivated successfully. Apr 16 01:21:20.072211 containerd[1479]: 2026-04-16 01:21:19.937 [INFO][4949] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="bb93fe724c151bea4284b0981e0457adce75bc215496568b9214d3839e19a642" Apr 16 01:21:20.072211 containerd[1479]: 2026-04-16 01:21:19.938 [INFO][4949] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="bb93fe724c151bea4284b0981e0457adce75bc215496568b9214d3839e19a642" iface="eth0" netns="/var/run/netns/cni-8fb26cd8-ea77-9b39-7faa-cc38beb4270c" Apr 16 01:21:20.072211 containerd[1479]: 2026-04-16 01:21:19.939 [INFO][4949] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bb93fe724c151bea4284b0981e0457adce75bc215496568b9214d3839e19a642" iface="eth0" netns="/var/run/netns/cni-8fb26cd8-ea77-9b39-7faa-cc38beb4270c" Apr 16 01:21:20.072211 containerd[1479]: 2026-04-16 01:21:19.939 [INFO][4949] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bb93fe724c151bea4284b0981e0457adce75bc215496568b9214d3839e19a642" iface="eth0" netns="/var/run/netns/cni-8fb26cd8-ea77-9b39-7faa-cc38beb4270c" Apr 16 01:21:20.072211 containerd[1479]: 2026-04-16 01:21:19.939 [INFO][4949] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="bb93fe724c151bea4284b0981e0457adce75bc215496568b9214d3839e19a642" Apr 16 01:21:20.072211 containerd[1479]: 2026-04-16 01:21:19.939 [INFO][4949] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="bb93fe724c151bea4284b0981e0457adce75bc215496568b9214d3839e19a642" Apr 16 01:21:20.072211 containerd[1479]: 2026-04-16 01:21:20.026 [INFO][4972] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="bb93fe724c151bea4284b0981e0457adce75bc215496568b9214d3839e19a642" HandleID="k8s-pod-network.bb93fe724c151bea4284b0981e0457adce75bc215496568b9214d3839e19a642" Workload="localhost-k8s-coredns--7d764666f9--ss6nt-eth0" Apr 16 01:21:20.072211 containerd[1479]: 2026-04-16 01:21:20.027 [INFO][4972] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:21:20.072211 containerd[1479]: 2026-04-16 01:21:20.039 [INFO][4972] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 01:21:20.072211 containerd[1479]: 2026-04-16 01:21:20.049 [WARNING][4972] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bb93fe724c151bea4284b0981e0457adce75bc215496568b9214d3839e19a642" HandleID="k8s-pod-network.bb93fe724c151bea4284b0981e0457adce75bc215496568b9214d3839e19a642" Workload="localhost-k8s-coredns--7d764666f9--ss6nt-eth0" Apr 16 01:21:20.072211 containerd[1479]: 2026-04-16 01:21:20.049 [INFO][4972] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="bb93fe724c151bea4284b0981e0457adce75bc215496568b9214d3839e19a642" HandleID="k8s-pod-network.bb93fe724c151bea4284b0981e0457adce75bc215496568b9214d3839e19a642" Workload="localhost-k8s-coredns--7d764666f9--ss6nt-eth0" Apr 16 01:21:20.072211 containerd[1479]: 2026-04-16 01:21:20.052 [INFO][4972] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 01:21:20.072211 containerd[1479]: 2026-04-16 01:21:20.054 [INFO][4949] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="bb93fe724c151bea4284b0981e0457adce75bc215496568b9214d3839e19a642" Apr 16 01:21:20.076021 kubelet[2548]: E0416 01:21:20.073395 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:21:20.077823 containerd[1479]: time="2026-04-16T01:21:20.074426952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-kv5bx,Uid:925d015b-3f96-4cb8-a9c1-64b9f1a67c52,Namespace:kube-system,Attempt:1,}" Apr 16 01:21:20.079822 containerd[1479]: time="2026-04-16T01:21:20.079026741Z" level=info msg="TearDown network for sandbox \"bb93fe724c151bea4284b0981e0457adce75bc215496568b9214d3839e19a642\" successfully" Apr 16 01:21:20.080119 containerd[1479]: time="2026-04-16T01:21:20.079969537Z" level=info msg="StopPodSandbox for \"bb93fe724c151bea4284b0981e0457adce75bc215496568b9214d3839e19a642\" returns successfully" Apr 16 01:21:20.081098 systemd[1]: run-netns-cni\x2d8fb26cd8\x2dea77\x2d9b39\x2d7faa\x2dcc38beb4270c.mount: Deactivated successfully. 
Apr 16 01:21:20.086208 kubelet[2548]: E0416 01:21:20.085915 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:21:20.088144 containerd[1479]: time="2026-04-16T01:21:20.088013286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-ss6nt,Uid:9471422e-1a11-4a68-a210-42dc7d4df58a,Namespace:kube-system,Attempt:1,}" Apr 16 01:21:20.100101 containerd[1479]: 2026-04-16 01:21:19.934 [INFO][4947] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d0f609cf62ba2451b793e0f169919cf887db6d1e23320a4807fe5b50180f1640" Apr 16 01:21:20.100101 containerd[1479]: 2026-04-16 01:21:19.935 [INFO][4947] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d0f609cf62ba2451b793e0f169919cf887db6d1e23320a4807fe5b50180f1640" iface="eth0" netns="/var/run/netns/cni-4b6aa26e-32a2-0331-215b-5801e1ad5716" Apr 16 01:21:20.100101 containerd[1479]: 2026-04-16 01:21:19.936 [INFO][4947] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d0f609cf62ba2451b793e0f169919cf887db6d1e23320a4807fe5b50180f1640" iface="eth0" netns="/var/run/netns/cni-4b6aa26e-32a2-0331-215b-5801e1ad5716" Apr 16 01:21:20.100101 containerd[1479]: 2026-04-16 01:21:19.937 [INFO][4947] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d0f609cf62ba2451b793e0f169919cf887db6d1e23320a4807fe5b50180f1640" iface="eth0" netns="/var/run/netns/cni-4b6aa26e-32a2-0331-215b-5801e1ad5716" Apr 16 01:21:20.100101 containerd[1479]: 2026-04-16 01:21:19.937 [INFO][4947] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d0f609cf62ba2451b793e0f169919cf887db6d1e23320a4807fe5b50180f1640" Apr 16 01:21:20.100101 containerd[1479]: 2026-04-16 01:21:19.937 [INFO][4947] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d0f609cf62ba2451b793e0f169919cf887db6d1e23320a4807fe5b50180f1640" Apr 16 01:21:20.100101 containerd[1479]: 2026-04-16 01:21:20.050 [INFO][4973] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d0f609cf62ba2451b793e0f169919cf887db6d1e23320a4807fe5b50180f1640" HandleID="k8s-pod-network.d0f609cf62ba2451b793e0f169919cf887db6d1e23320a4807fe5b50180f1640" Workload="localhost-k8s-goldmane--9f7667bb8--t5lq8-eth0" Apr 16 01:21:20.100101 containerd[1479]: 2026-04-16 01:21:20.050 [INFO][4973] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:21:20.100101 containerd[1479]: 2026-04-16 01:21:20.052 [INFO][4973] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 01:21:20.100101 containerd[1479]: 2026-04-16 01:21:20.065 [WARNING][4973] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d0f609cf62ba2451b793e0f169919cf887db6d1e23320a4807fe5b50180f1640" HandleID="k8s-pod-network.d0f609cf62ba2451b793e0f169919cf887db6d1e23320a4807fe5b50180f1640" Workload="localhost-k8s-goldmane--9f7667bb8--t5lq8-eth0" Apr 16 01:21:20.100101 containerd[1479]: 2026-04-16 01:21:20.067 [INFO][4973] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d0f609cf62ba2451b793e0f169919cf887db6d1e23320a4807fe5b50180f1640" HandleID="k8s-pod-network.d0f609cf62ba2451b793e0f169919cf887db6d1e23320a4807fe5b50180f1640" Workload="localhost-k8s-goldmane--9f7667bb8--t5lq8-eth0" Apr 16 01:21:20.100101 containerd[1479]: 2026-04-16 01:21:20.083 [INFO][4973] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 01:21:20.100101 containerd[1479]: 2026-04-16 01:21:20.093 [INFO][4947] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d0f609cf62ba2451b793e0f169919cf887db6d1e23320a4807fe5b50180f1640" Apr 16 01:21:20.100871 containerd[1479]: time="2026-04-16T01:21:20.100503303Z" level=info msg="TearDown network for sandbox \"d0f609cf62ba2451b793e0f169919cf887db6d1e23320a4807fe5b50180f1640\" successfully" Apr 16 01:21:20.100871 containerd[1479]: time="2026-04-16T01:21:20.100541550Z" level=info msg="StopPodSandbox for \"d0f609cf62ba2451b793e0f169919cf887db6d1e23320a4807fe5b50180f1640\" returns successfully" Apr 16 01:21:20.109237 containerd[1479]: time="2026-04-16T01:21:20.109007229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-t5lq8,Uid:394e94ea-67c5-464b-b3aa-188a9710b888,Namespace:calico-system,Attempt:1,}" Apr 16 01:21:20.155392 systemd[1]: run-netns-cni\x2d4b6aa26e\x2d32a2\x2d0331\x2d215b\x2d5801e1ad5716.mount: Deactivated successfully. 
Apr 16 01:21:20.484544 systemd-networkd[1385]: cali7d711210f9e: Link UP Apr 16 01:21:20.485929 systemd-networkd[1385]: cali7d711210f9e: Gained carrier Apr 16 01:21:20.564976 containerd[1479]: 2026-04-16 01:21:20.265 [INFO][5007] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7d764666f9--ss6nt-eth0 coredns-7d764666f9- kube-system 9471422e-1a11-4a68-a210-42dc7d4df58a 1065 0 2026-04-16 01:20:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7d764666f9-ss6nt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7d711210f9e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="63516408d49bad3e9914f641b69131f65df7b03476220a38fca7dc4767f65a73" Namespace="kube-system" Pod="coredns-7d764666f9-ss6nt" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--ss6nt-" Apr 16 01:21:20.564976 containerd[1479]: 2026-04-16 01:21:20.266 [INFO][5007] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="63516408d49bad3e9914f641b69131f65df7b03476220a38fca7dc4767f65a73" Namespace="kube-system" Pod="coredns-7d764666f9-ss6nt" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--ss6nt-eth0" Apr 16 01:21:20.564976 containerd[1479]: 2026-04-16 01:21:20.368 [INFO][5039] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="63516408d49bad3e9914f641b69131f65df7b03476220a38fca7dc4767f65a73" HandleID="k8s-pod-network.63516408d49bad3e9914f641b69131f65df7b03476220a38fca7dc4767f65a73" Workload="localhost-k8s-coredns--7d764666f9--ss6nt-eth0" Apr 16 01:21:20.564976 containerd[1479]: 2026-04-16 01:21:20.389 [INFO][5039] ipam/ipam_plugin.go 301: Auto assigning IP 
ContainerID="63516408d49bad3e9914f641b69131f65df7b03476220a38fca7dc4767f65a73" HandleID="k8s-pod-network.63516408d49bad3e9914f641b69131f65df7b03476220a38fca7dc4767f65a73" Workload="localhost-k8s-coredns--7d764666f9--ss6nt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005020f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7d764666f9-ss6nt", "timestamp":"2026-04-16 01:21:20.368809725 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001986e0)} Apr 16 01:21:20.564976 containerd[1479]: 2026-04-16 01:21:20.390 [INFO][5039] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:21:20.564976 containerd[1479]: 2026-04-16 01:21:20.390 [INFO][5039] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 01:21:20.564976 containerd[1479]: 2026-04-16 01:21:20.390 [INFO][5039] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 16 01:21:20.564976 containerd[1479]: 2026-04-16 01:21:20.397 [INFO][5039] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.63516408d49bad3e9914f641b69131f65df7b03476220a38fca7dc4767f65a73" host="localhost" Apr 16 01:21:20.564976 containerd[1479]: 2026-04-16 01:21:20.414 [INFO][5039] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 16 01:21:20.564976 containerd[1479]: 2026-04-16 01:21:20.429 [INFO][5039] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 16 01:21:20.564976 containerd[1479]: 2026-04-16 01:21:20.438 [INFO][5039] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 16 01:21:20.564976 containerd[1479]: 2026-04-16 01:21:20.446 [INFO][5039] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 16 01:21:20.564976 containerd[1479]: 2026-04-16 01:21:20.446 [INFO][5039] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.63516408d49bad3e9914f641b69131f65df7b03476220a38fca7dc4767f65a73" host="localhost" Apr 16 01:21:20.564976 containerd[1479]: 2026-04-16 01:21:20.451 [INFO][5039] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.63516408d49bad3e9914f641b69131f65df7b03476220a38fca7dc4767f65a73 Apr 16 01:21:20.564976 containerd[1479]: 2026-04-16 01:21:20.461 [INFO][5039] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.63516408d49bad3e9914f641b69131f65df7b03476220a38fca7dc4767f65a73" host="localhost" Apr 16 01:21:20.564976 containerd[1479]: 2026-04-16 01:21:20.474 [INFO][5039] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.63516408d49bad3e9914f641b69131f65df7b03476220a38fca7dc4767f65a73" host="localhost" Apr 16 01:21:20.564976 containerd[1479]: 2026-04-16 01:21:20.475 [INFO][5039] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.63516408d49bad3e9914f641b69131f65df7b03476220a38fca7dc4767f65a73" host="localhost" Apr 16 01:21:20.564976 containerd[1479]: 2026-04-16 01:21:20.475 [INFO][5039] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 01:21:20.564976 containerd[1479]: 2026-04-16 01:21:20.475 [INFO][5039] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="63516408d49bad3e9914f641b69131f65df7b03476220a38fca7dc4767f65a73" HandleID="k8s-pod-network.63516408d49bad3e9914f641b69131f65df7b03476220a38fca7dc4767f65a73" Workload="localhost-k8s-coredns--7d764666f9--ss6nt-eth0" Apr 16 01:21:20.566144 containerd[1479]: 2026-04-16 01:21:20.478 [INFO][5007] cni-plugin/k8s.go 418: Populated endpoint ContainerID="63516408d49bad3e9914f641b69131f65df7b03476220a38fca7dc4767f65a73" Namespace="kube-system" Pod="coredns-7d764666f9-ss6nt" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--ss6nt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--ss6nt-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"9471422e-1a11-4a68-a210-42dc7d4df58a", ResourceVersion:"1065", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 20, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7d764666f9-ss6nt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7d711210f9e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:21:20.566144 containerd[1479]: 2026-04-16 01:21:20.478 [INFO][5007] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="63516408d49bad3e9914f641b69131f65df7b03476220a38fca7dc4767f65a73" Namespace="kube-system" Pod="coredns-7d764666f9-ss6nt" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--ss6nt-eth0" Apr 16 01:21:20.566144 containerd[1479]: 2026-04-16 01:21:20.478 [INFO][5007] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7d711210f9e ContainerID="63516408d49bad3e9914f641b69131f65df7b03476220a38fca7dc4767f65a73" Namespace="kube-system" Pod="coredns-7d764666f9-ss6nt" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--ss6nt-eth0" Apr 16 
01:21:20.566144 containerd[1479]: 2026-04-16 01:21:20.489 [INFO][5007] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="63516408d49bad3e9914f641b69131f65df7b03476220a38fca7dc4767f65a73" Namespace="kube-system" Pod="coredns-7d764666f9-ss6nt" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--ss6nt-eth0" Apr 16 01:21:20.566144 containerd[1479]: 2026-04-16 01:21:20.492 [INFO][5007] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="63516408d49bad3e9914f641b69131f65df7b03476220a38fca7dc4767f65a73" Namespace="kube-system" Pod="coredns-7d764666f9-ss6nt" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--ss6nt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--ss6nt-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"9471422e-1a11-4a68-a210-42dc7d4df58a", ResourceVersion:"1065", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 20, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"63516408d49bad3e9914f641b69131f65df7b03476220a38fca7dc4767f65a73", Pod:"coredns-7d764666f9-ss6nt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7d711210f9e", 
MAC:"86:5b:1f:eb:c0:75", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:21:20.566144 containerd[1479]: 2026-04-16 01:21:20.525 [INFO][5007] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="63516408d49bad3e9914f641b69131f65df7b03476220a38fca7dc4767f65a73" Namespace="kube-system" Pod="coredns-7d764666f9-ss6nt" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--ss6nt-eth0" Apr 16 01:21:20.615218 systemd-networkd[1385]: cali727e0980222: Gained IPv6LL Apr 16 01:21:20.677245 containerd[1479]: time="2026-04-16T01:21:20.676890223Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 01:21:20.677245 containerd[1479]: time="2026-04-16T01:21:20.677020452Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 01:21:20.677245 containerd[1479]: time="2026-04-16T01:21:20.677038233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:21:20.677245 containerd[1479]: time="2026-04-16T01:21:20.677118939Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:21:20.696211 systemd-networkd[1385]: cali746823396a7: Link UP Apr 16 01:21:20.700879 systemd-networkd[1385]: cali746823396a7: Gained carrier Apr 16 01:21:20.858264 containerd[1479]: 2026-04-16 01:21:20.265 [INFO][4996] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7d764666f9--kv5bx-eth0 coredns-7d764666f9- kube-system 925d015b-3f96-4cb8-a9c1-64b9f1a67c52 1064 0 2026-04-16 01:20:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7d764666f9-kv5bx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali746823396a7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="efd398c196ff8eb6a1058717556b877cd4892b716fe6961ba746780c1f2fbbe8" Namespace="kube-system" Pod="coredns-7d764666f9-kv5bx" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--kv5bx-" Apr 16 01:21:20.858264 containerd[1479]: 2026-04-16 01:21:20.266 [INFO][4996] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="efd398c196ff8eb6a1058717556b877cd4892b716fe6961ba746780c1f2fbbe8" Namespace="kube-system" Pod="coredns-7d764666f9-kv5bx" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--kv5bx-eth0" Apr 16 01:21:20.858264 containerd[1479]: 2026-04-16 01:21:20.375 [INFO][5045] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="efd398c196ff8eb6a1058717556b877cd4892b716fe6961ba746780c1f2fbbe8" 
HandleID="k8s-pod-network.efd398c196ff8eb6a1058717556b877cd4892b716fe6961ba746780c1f2fbbe8" Workload="localhost-k8s-coredns--7d764666f9--kv5bx-eth0" Apr 16 01:21:20.858264 containerd[1479]: 2026-04-16 01:21:20.396 [INFO][5045] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="efd398c196ff8eb6a1058717556b877cd4892b716fe6961ba746780c1f2fbbe8" HandleID="k8s-pod-network.efd398c196ff8eb6a1058717556b877cd4892b716fe6961ba746780c1f2fbbe8" Workload="localhost-k8s-coredns--7d764666f9--kv5bx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000367cf0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7d764666f9-kv5bx", "timestamp":"2026-04-16 01:21:20.375095884 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000500dc0)} Apr 16 01:21:20.858264 containerd[1479]: 2026-04-16 01:21:20.396 [INFO][5045] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:21:20.858264 containerd[1479]: 2026-04-16 01:21:20.477 [INFO][5045] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 01:21:20.858264 containerd[1479]: 2026-04-16 01:21:20.477 [INFO][5045] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 16 01:21:20.858264 containerd[1479]: 2026-04-16 01:21:20.499 [INFO][5045] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.efd398c196ff8eb6a1058717556b877cd4892b716fe6961ba746780c1f2fbbe8" host="localhost" Apr 16 01:21:20.858264 containerd[1479]: 2026-04-16 01:21:20.524 [INFO][5045] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 16 01:21:20.858264 containerd[1479]: 2026-04-16 01:21:20.569 [INFO][5045] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 16 01:21:20.858264 containerd[1479]: 2026-04-16 01:21:20.584 [INFO][5045] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 16 01:21:20.858264 containerd[1479]: 2026-04-16 01:21:20.591 [INFO][5045] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 16 01:21:20.858264 containerd[1479]: 2026-04-16 01:21:20.591 [INFO][5045] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.efd398c196ff8eb6a1058717556b877cd4892b716fe6961ba746780c1f2fbbe8" host="localhost" Apr 16 01:21:20.858264 containerd[1479]: 2026-04-16 01:21:20.596 [INFO][5045] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.efd398c196ff8eb6a1058717556b877cd4892b716fe6961ba746780c1f2fbbe8 Apr 16 01:21:20.858264 containerd[1479]: 2026-04-16 01:21:20.621 [INFO][5045] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.efd398c196ff8eb6a1058717556b877cd4892b716fe6961ba746780c1f2fbbe8" host="localhost" Apr 16 01:21:20.858264 containerd[1479]: 2026-04-16 01:21:20.658 [INFO][5045] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.efd398c196ff8eb6a1058717556b877cd4892b716fe6961ba746780c1f2fbbe8" host="localhost" Apr 16 01:21:20.858264 containerd[1479]: 2026-04-16 01:21:20.662 [INFO][5045] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.efd398c196ff8eb6a1058717556b877cd4892b716fe6961ba746780c1f2fbbe8" host="localhost" Apr 16 01:21:20.858264 containerd[1479]: 2026-04-16 01:21:20.663 [INFO][5045] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 01:21:20.858264 containerd[1479]: 2026-04-16 01:21:20.663 [INFO][5045] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="efd398c196ff8eb6a1058717556b877cd4892b716fe6961ba746780c1f2fbbe8" HandleID="k8s-pod-network.efd398c196ff8eb6a1058717556b877cd4892b716fe6961ba746780c1f2fbbe8" Workload="localhost-k8s-coredns--7d764666f9--kv5bx-eth0" Apr 16 01:21:20.860614 containerd[1479]: 2026-04-16 01:21:20.673 [INFO][4996] cni-plugin/k8s.go 418: Populated endpoint ContainerID="efd398c196ff8eb6a1058717556b877cd4892b716fe6961ba746780c1f2fbbe8" Namespace="kube-system" Pod="coredns-7d764666f9-kv5bx" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--kv5bx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--kv5bx-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"925d015b-3f96-4cb8-a9c1-64b9f1a67c52", ResourceVersion:"1064", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 20, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7d764666f9-kv5bx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali746823396a7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:21:20.860614 containerd[1479]: 2026-04-16 01:21:20.674 [INFO][4996] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="efd398c196ff8eb6a1058717556b877cd4892b716fe6961ba746780c1f2fbbe8" Namespace="kube-system" Pod="coredns-7d764666f9-kv5bx" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--kv5bx-eth0" Apr 16 01:21:20.860614 containerd[1479]: 2026-04-16 01:21:20.674 [INFO][4996] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali746823396a7 ContainerID="efd398c196ff8eb6a1058717556b877cd4892b716fe6961ba746780c1f2fbbe8" Namespace="kube-system" Pod="coredns-7d764666f9-kv5bx" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--kv5bx-eth0" Apr 16 
01:21:20.860614 containerd[1479]: 2026-04-16 01:21:20.733 [INFO][4996] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="efd398c196ff8eb6a1058717556b877cd4892b716fe6961ba746780c1f2fbbe8" Namespace="kube-system" Pod="coredns-7d764666f9-kv5bx" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--kv5bx-eth0" Apr 16 01:21:20.860614 containerd[1479]: 2026-04-16 01:21:20.740 [INFO][4996] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="efd398c196ff8eb6a1058717556b877cd4892b716fe6961ba746780c1f2fbbe8" Namespace="kube-system" Pod="coredns-7d764666f9-kv5bx" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--kv5bx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--kv5bx-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"925d015b-3f96-4cb8-a9c1-64b9f1a67c52", ResourceVersion:"1064", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 20, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"efd398c196ff8eb6a1058717556b877cd4892b716fe6961ba746780c1f2fbbe8", Pod:"coredns-7d764666f9-kv5bx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali746823396a7", 
MAC:"da:ec:0c:d9:2c:79", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:21:20.860614 containerd[1479]: 2026-04-16 01:21:20.841 [INFO][4996] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="efd398c196ff8eb6a1058717556b877cd4892b716fe6961ba746780c1f2fbbe8" Namespace="kube-system" Pod="coredns-7d764666f9-kv5bx" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--kv5bx-eth0" Apr 16 01:21:20.910891 systemd[1]: Started cri-containerd-63516408d49bad3e9914f641b69131f65df7b03476220a38fca7dc4767f65a73.scope - libcontainer container 63516408d49bad3e9914f641b69131f65df7b03476220a38fca7dc4767f65a73. Apr 16 01:21:20.970466 systemd-resolved[1388]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 16 01:21:20.983661 systemd-networkd[1385]: cali8f28540cdf2: Link UP Apr 16 01:21:20.984242 systemd-networkd[1385]: cali8f28540cdf2: Gained carrier Apr 16 01:21:21.013191 containerd[1479]: time="2026-04-16T01:21:21.012127080Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 01:21:21.013191 containerd[1479]: time="2026-04-16T01:21:21.012172119Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 01:21:21.013191 containerd[1479]: time="2026-04-16T01:21:21.012180862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:21:21.013191 containerd[1479]: time="2026-04-16T01:21:21.012239424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:21:21.039995 containerd[1479]: 2026-04-16 01:21:20.334 [INFO][5021] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--9f7667bb8--t5lq8-eth0 goldmane-9f7667bb8- calico-system 394e94ea-67c5-464b-b3aa-188a9710b888 1063 0 2026-04-16 01:20:23 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:9f7667bb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-9f7667bb8-t5lq8 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali8f28540cdf2 [] [] }} ContainerID="30f697593804a8b63f8d851c88e78390e06b34b9a78f8391d0bf2e626c56f499" Namespace="calico-system" Pod="goldmane-9f7667bb8-t5lq8" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--t5lq8-" Apr 16 01:21:21.039995 containerd[1479]: 2026-04-16 01:21:20.335 [INFO][5021] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="30f697593804a8b63f8d851c88e78390e06b34b9a78f8391d0bf2e626c56f499" Namespace="calico-system" Pod="goldmane-9f7667bb8-t5lq8" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--t5lq8-eth0" Apr 16 01:21:21.039995 containerd[1479]: 2026-04-16 01:21:20.440 [INFO][5056] ipam/ipam_plugin.go 
235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="30f697593804a8b63f8d851c88e78390e06b34b9a78f8391d0bf2e626c56f499" HandleID="k8s-pod-network.30f697593804a8b63f8d851c88e78390e06b34b9a78f8391d0bf2e626c56f499" Workload="localhost-k8s-goldmane--9f7667bb8--t5lq8-eth0" Apr 16 01:21:21.039995 containerd[1479]: 2026-04-16 01:21:20.458 [INFO][5056] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="30f697593804a8b63f8d851c88e78390e06b34b9a78f8391d0bf2e626c56f499" HandleID="k8s-pod-network.30f697593804a8b63f8d851c88e78390e06b34b9a78f8391d0bf2e626c56f499" Workload="localhost-k8s-goldmane--9f7667bb8--t5lq8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00038f540), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-9f7667bb8-t5lq8", "timestamp":"2026-04-16 01:21:20.440863121 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003e4f20)} Apr 16 01:21:21.039995 containerd[1479]: 2026-04-16 01:21:20.459 [INFO][5056] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:21:21.039995 containerd[1479]: 2026-04-16 01:21:20.664 [INFO][5056] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 01:21:21.039995 containerd[1479]: 2026-04-16 01:21:20.664 [INFO][5056] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 16 01:21:21.039995 containerd[1479]: 2026-04-16 01:21:20.678 [INFO][5056] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.30f697593804a8b63f8d851c88e78390e06b34b9a78f8391d0bf2e626c56f499" host="localhost" Apr 16 01:21:21.039995 containerd[1479]: 2026-04-16 01:21:20.824 [INFO][5056] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 16 01:21:21.039995 containerd[1479]: 2026-04-16 01:21:20.864 [INFO][5056] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 16 01:21:21.039995 containerd[1479]: 2026-04-16 01:21:20.881 [INFO][5056] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 16 01:21:21.039995 containerd[1479]: 2026-04-16 01:21:20.891 [INFO][5056] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 16 01:21:21.039995 containerd[1479]: 2026-04-16 01:21:20.891 [INFO][5056] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.30f697593804a8b63f8d851c88e78390e06b34b9a78f8391d0bf2e626c56f499" host="localhost" Apr 16 01:21:21.039995 containerd[1479]: 2026-04-16 01:21:20.898 [INFO][5056] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.30f697593804a8b63f8d851c88e78390e06b34b9a78f8391d0bf2e626c56f499 Apr 16 01:21:21.039995 containerd[1479]: 2026-04-16 01:21:20.926 [INFO][5056] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.30f697593804a8b63f8d851c88e78390e06b34b9a78f8391d0bf2e626c56f499" host="localhost" Apr 16 01:21:21.039995 containerd[1479]: 2026-04-16 01:21:20.966 [INFO][5056] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.30f697593804a8b63f8d851c88e78390e06b34b9a78f8391d0bf2e626c56f499" host="localhost" Apr 16 01:21:21.039995 containerd[1479]: 2026-04-16 01:21:20.966 [INFO][5056] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.30f697593804a8b63f8d851c88e78390e06b34b9a78f8391d0bf2e626c56f499" host="localhost" Apr 16 01:21:21.039995 containerd[1479]: 2026-04-16 01:21:20.966 [INFO][5056] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 01:21:21.039995 containerd[1479]: 2026-04-16 01:21:20.966 [INFO][5056] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="30f697593804a8b63f8d851c88e78390e06b34b9a78f8391d0bf2e626c56f499" HandleID="k8s-pod-network.30f697593804a8b63f8d851c88e78390e06b34b9a78f8391d0bf2e626c56f499" Workload="localhost-k8s-goldmane--9f7667bb8--t5lq8-eth0" Apr 16 01:21:21.040828 containerd[1479]: 2026-04-16 01:21:20.977 [INFO][5021] cni-plugin/k8s.go 418: Populated endpoint ContainerID="30f697593804a8b63f8d851c88e78390e06b34b9a78f8391d0bf2e626c56f499" Namespace="calico-system" Pod="goldmane-9f7667bb8-t5lq8" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--t5lq8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9f7667bb8--t5lq8-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"394e94ea-67c5-464b-b3aa-188a9710b888", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 20, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-9f7667bb8-t5lq8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8f28540cdf2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:21:21.040828 containerd[1479]: 2026-04-16 01:21:20.977 [INFO][5021] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="30f697593804a8b63f8d851c88e78390e06b34b9a78f8391d0bf2e626c56f499" Namespace="calico-system" Pod="goldmane-9f7667bb8-t5lq8" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--t5lq8-eth0" Apr 16 01:21:21.040828 containerd[1479]: 2026-04-16 01:21:20.977 [INFO][5021] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8f28540cdf2 ContainerID="30f697593804a8b63f8d851c88e78390e06b34b9a78f8391d0bf2e626c56f499" Namespace="calico-system" Pod="goldmane-9f7667bb8-t5lq8" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--t5lq8-eth0" Apr 16 01:21:21.040828 containerd[1479]: 2026-04-16 01:21:20.984 [INFO][5021] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="30f697593804a8b63f8d851c88e78390e06b34b9a78f8391d0bf2e626c56f499" Namespace="calico-system" Pod="goldmane-9f7667bb8-t5lq8" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--t5lq8-eth0" Apr 16 01:21:21.040828 containerd[1479]: 2026-04-16 01:21:20.984 [INFO][5021] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="30f697593804a8b63f8d851c88e78390e06b34b9a78f8391d0bf2e626c56f499" Namespace="calico-system" Pod="goldmane-9f7667bb8-t5lq8" 
WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--t5lq8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9f7667bb8--t5lq8-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"394e94ea-67c5-464b-b3aa-188a9710b888", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 20, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"30f697593804a8b63f8d851c88e78390e06b34b9a78f8391d0bf2e626c56f499", Pod:"goldmane-9f7667bb8-t5lq8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8f28540cdf2", MAC:"5a:ab:32:c5:e4:bd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:21:21.040828 containerd[1479]: 2026-04-16 01:21:21.016 [INFO][5021] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="30f697593804a8b63f8d851c88e78390e06b34b9a78f8391d0bf2e626c56f499" Namespace="calico-system" Pod="goldmane-9f7667bb8-t5lq8" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--t5lq8-eth0" Apr 16 01:21:21.066887 containerd[1479]: time="2026-04-16T01:21:21.066176605Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7d764666f9-ss6nt,Uid:9471422e-1a11-4a68-a210-42dc7d4df58a,Namespace:kube-system,Attempt:1,} returns sandbox id \"63516408d49bad3e9914f641b69131f65df7b03476220a38fca7dc4767f65a73\"" Apr 16 01:21:21.070975 kubelet[2548]: E0416 01:21:21.070163 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:21:21.088805 containerd[1479]: time="2026-04-16T01:21:21.088653707Z" level=info msg="CreateContainer within sandbox \"63516408d49bad3e9914f641b69131f65df7b03476220a38fca7dc4767f65a73\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 16 01:21:21.093465 systemd[1]: Started cri-containerd-efd398c196ff8eb6a1058717556b877cd4892b716fe6961ba746780c1f2fbbe8.scope - libcontainer container efd398c196ff8eb6a1058717556b877cd4892b716fe6961ba746780c1f2fbbe8. Apr 16 01:21:21.121131 systemd-resolved[1388]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 16 01:21:21.163215 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3524409425.mount: Deactivated successfully. Apr 16 01:21:21.182436 containerd[1479]: time="2026-04-16T01:21:21.180009886Z" level=info msg="CreateContainer within sandbox \"63516408d49bad3e9914f641b69131f65df7b03476220a38fca7dc4767f65a73\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a931286be27855d36714006548f08fa752361ad0326c99117b05e66287a48cc0\"" Apr 16 01:21:21.189087 containerd[1479]: time="2026-04-16T01:21:21.185946177Z" level=info msg="StartContainer for \"a931286be27855d36714006548f08fa752361ad0326c99117b05e66287a48cc0\"" Apr 16 01:21:21.257607 containerd[1479]: time="2026-04-16T01:21:21.254483709Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 01:21:21.257607 containerd[1479]: time="2026-04-16T01:21:21.254552365Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 01:21:21.257607 containerd[1479]: time="2026-04-16T01:21:21.254569866Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:21:21.257607 containerd[1479]: time="2026-04-16T01:21:21.256601235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 01:21:21.346381 containerd[1479]: time="2026-04-16T01:21:21.346349000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-kv5bx,Uid:925d015b-3f96-4cb8-a9c1-64b9f1a67c52,Namespace:kube-system,Attempt:1,} returns sandbox id \"efd398c196ff8eb6a1058717556b877cd4892b716fe6961ba746780c1f2fbbe8\"" Apr 16 01:21:21.349786 kubelet[2548]: E0416 01:21:21.349422 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:21:21.368665 containerd[1479]: time="2026-04-16T01:21:21.368601311Z" level=info msg="CreateContainer within sandbox \"efd398c196ff8eb6a1058717556b877cd4892b716fe6961ba746780c1f2fbbe8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 16 01:21:21.409163 systemd[1]: Started cri-containerd-30f697593804a8b63f8d851c88e78390e06b34b9a78f8391d0bf2e626c56f499.scope - libcontainer container 30f697593804a8b63f8d851c88e78390e06b34b9a78f8391d0bf2e626c56f499. 
Apr 16 01:21:21.421035 containerd[1479]: time="2026-04-16T01:21:21.420887192Z" level=info msg="CreateContainer within sandbox \"efd398c196ff8eb6a1058717556b877cd4892b716fe6961ba746780c1f2fbbe8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8354d24c9e8f1e9ada4bbfe86ee7f95ad591f178efdfefb7049311c85070979d\"" Apr 16 01:21:21.424658 containerd[1479]: time="2026-04-16T01:21:21.424227683Z" level=info msg="StartContainer for \"8354d24c9e8f1e9ada4bbfe86ee7f95ad591f178efdfefb7049311c85070979d\"" Apr 16 01:21:21.453228 systemd[1]: Started cri-containerd-a931286be27855d36714006548f08fa752361ad0326c99117b05e66287a48cc0.scope - libcontainer container a931286be27855d36714006548f08fa752361ad0326c99117b05e66287a48cc0. Apr 16 01:21:21.514184 systemd-resolved[1388]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 16 01:21:21.538433 containerd[1479]: time="2026-04-16T01:21:21.538128082Z" level=info msg="StartContainer for \"a931286be27855d36714006548f08fa752361ad0326c99117b05e66287a48cc0\" returns successfully" Apr 16 01:21:21.574787 systemd[1]: Started cri-containerd-8354d24c9e8f1e9ada4bbfe86ee7f95ad591f178efdfefb7049311c85070979d.scope - libcontainer container 8354d24c9e8f1e9ada4bbfe86ee7f95ad591f178efdfefb7049311c85070979d. 
Apr 16 01:21:21.575492 systemd-networkd[1385]: cali7d711210f9e: Gained IPv6LL Apr 16 01:21:21.685181 containerd[1479]: time="2026-04-16T01:21:21.684152091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-t5lq8,Uid:394e94ea-67c5-464b-b3aa-188a9710b888,Namespace:calico-system,Attempt:1,} returns sandbox id \"30f697593804a8b63f8d851c88e78390e06b34b9a78f8391d0bf2e626c56f499\"" Apr 16 01:21:21.703977 containerd[1479]: time="2026-04-16T01:21:21.702890831Z" level=info msg="StartContainer for \"8354d24c9e8f1e9ada4bbfe86ee7f95ad591f178efdfefb7049311c85070979d\" returns successfully" Apr 16 01:21:22.407376 systemd-networkd[1385]: cali746823396a7: Gained IPv6LL Apr 16 01:21:22.417568 kubelet[2548]: E0416 01:21:22.417408 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:21:22.441627 kubelet[2548]: E0416 01:21:22.440845 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:21:22.452633 kubelet[2548]: I0416 01:21:22.452526 2548 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-ss6nt" podStartSLOduration=79.452516564 podStartE2EDuration="1m19.452516564s" podCreationTimestamp="2026-04-16 01:20:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 01:21:22.449578668 +0000 UTC m=+84.141712956" watchObservedRunningTime="2026-04-16 01:21:22.452516564 +0000 UTC m=+84.144650857" Apr 16 01:21:22.500812 kubelet[2548]: I0416 01:21:22.499616 2548 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-kv5bx" podStartSLOduration=79.499582056 podStartE2EDuration="1m19.499582056s" 
podCreationTimestamp="2026-04-16 01:20:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 01:21:22.494632592 +0000 UTC m=+84.186766880" watchObservedRunningTime="2026-04-16 01:21:22.499582056 +0000 UTC m=+84.191716345" Apr 16 01:21:22.591968 containerd[1479]: time="2026-04-16T01:21:22.591484286Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:21:22.594178 containerd[1479]: time="2026-04-16T01:21:22.593866822Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Apr 16 01:21:22.597410 containerd[1479]: time="2026-04-16T01:21:22.597119729Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:21:22.604606 containerd[1479]: time="2026-04-16T01:21:22.604432815Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:21:22.605624 containerd[1479]: time="2026-04-16T01:21:22.605411036Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 6.624498235s" Apr 16 01:21:22.605624 containerd[1479]: time="2026-04-16T01:21:22.605438461Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 16 
01:21:22.610892 containerd[1479]: time="2026-04-16T01:21:22.609666023Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 16 01:21:22.619039 containerd[1479]: time="2026-04-16T01:21:22.618605093Z" level=info msg="CreateContainer within sandbox \"94110af7b9c85f3522b2cad3bb9e9b317bece1a792cf194ea522693485409d4e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 16 01:21:22.722031 containerd[1479]: time="2026-04-16T01:21:22.697371858Z" level=info msg="CreateContainer within sandbox \"94110af7b9c85f3522b2cad3bb9e9b317bece1a792cf194ea522693485409d4e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"bf1a975220fb54f2a620520b14696e8febb377b0c9a56ecaebd83af5300d979c\"" Apr 16 01:21:22.723990 containerd[1479]: time="2026-04-16T01:21:22.723650585Z" level=info msg="StartContainer for \"bf1a975220fb54f2a620520b14696e8febb377b0c9a56ecaebd83af5300d979c\"" Apr 16 01:21:22.794647 systemd[1]: run-containerd-runc-k8s.io-bf1a975220fb54f2a620520b14696e8febb377b0c9a56ecaebd83af5300d979c-runc.dh8lam.mount: Deactivated successfully. Apr 16 01:21:22.806990 systemd[1]: Started cri-containerd-bf1a975220fb54f2a620520b14696e8febb377b0c9a56ecaebd83af5300d979c.scope - libcontainer container bf1a975220fb54f2a620520b14696e8febb377b0c9a56ecaebd83af5300d979c. 
Apr 16 01:21:22.929985 containerd[1479]: time="2026-04-16T01:21:22.929876349Z" level=info msg="StartContainer for \"bf1a975220fb54f2a620520b14696e8febb377b0c9a56ecaebd83af5300d979c\" returns successfully" Apr 16 01:21:22.983929 systemd-networkd[1385]: cali8f28540cdf2: Gained IPv6LL Apr 16 01:21:23.452639 kubelet[2548]: E0416 01:21:23.452349 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:21:23.453406 kubelet[2548]: E0416 01:21:23.452970 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:21:24.466780 kubelet[2548]: E0416 01:21:24.466559 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:21:24.468486 kubelet[2548]: E0416 01:21:24.466461 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:21:24.729498 kubelet[2548]: E0416 01:21:24.728836 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:21:25.028395 containerd[1479]: time="2026-04-16T01:21:25.027199915Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:21:25.030485 containerd[1479]: time="2026-04-16T01:21:25.029995459Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Apr 16 01:21:25.031531 containerd[1479]: time="2026-04-16T01:21:25.031341770Z" level=info msg="ImageCreate event 
name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:21:25.035592 containerd[1479]: time="2026-04-16T01:21:25.035545521Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:21:25.036232 containerd[1479]: time="2026-04-16T01:21:25.036198210Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 2.425351624s" Apr 16 01:21:25.036348 containerd[1479]: time="2026-04-16T01:21:25.036231014Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Apr 16 01:21:25.041555 containerd[1479]: time="2026-04-16T01:21:25.041157671Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 16 01:21:25.059492 containerd[1479]: time="2026-04-16T01:21:25.059191088Z" level=info msg="CreateContainer within sandbox \"efb5872695abcbc92e456b79354ea473ba0e4eb2df861d651ea31ddb1e616942\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 16 01:21:25.180781 containerd[1479]: time="2026-04-16T01:21:25.180619074Z" level=info msg="CreateContainer within sandbox \"efb5872695abcbc92e456b79354ea473ba0e4eb2df861d651ea31ddb1e616942\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"e138c3437aadcc49da94b88fc7c97b4f1da4e53c9766efcf911bea5958de8080\"" Apr 16 01:21:25.182810 containerd[1479]: time="2026-04-16T01:21:25.182524629Z" level=info msg="StartContainer for 
\"e138c3437aadcc49da94b88fc7c97b4f1da4e53c9766efcf911bea5958de8080\"" Apr 16 01:21:25.307606 systemd[1]: Started cri-containerd-e138c3437aadcc49da94b88fc7c97b4f1da4e53c9766efcf911bea5958de8080.scope - libcontainer container e138c3437aadcc49da94b88fc7c97b4f1da4e53c9766efcf911bea5958de8080. Apr 16 01:21:25.444055 containerd[1479]: time="2026-04-16T01:21:25.443645297Z" level=info msg="StartContainer for \"e138c3437aadcc49da94b88fc7c97b4f1da4e53c9766efcf911bea5958de8080\" returns successfully" Apr 16 01:21:25.499049 kubelet[2548]: I0416 01:21:25.497382 2548 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 16 01:21:25.848967 kubelet[2548]: I0416 01:21:25.848518 2548 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-8584db774c-nfctt" podStartSLOduration=57.218436446 podStartE2EDuration="1m3.848486994s" podCreationTimestamp="2026-04-16 01:20:22 +0000 UTC" firstStartedPulling="2026-04-16 01:21:15.977459873 +0000 UTC m=+77.669594154" lastFinishedPulling="2026-04-16 01:21:22.607510407 +0000 UTC m=+84.299644702" observedRunningTime="2026-04-16 01:21:23.488085923 +0000 UTC m=+85.180220221" watchObservedRunningTime="2026-04-16 01:21:25.848486994 +0000 UTC m=+87.540621275" Apr 16 01:21:29.726150 kubelet[2548]: E0416 01:21:29.725994 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:21:32.105608 containerd[1479]: time="2026-04-16T01:21:32.105378185Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:21:32.107266 containerd[1479]: time="2026-04-16T01:21:32.107076023Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Apr 16 01:21:32.111405 containerd[1479]: 
time="2026-04-16T01:21:32.111158801Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:21:32.121866 containerd[1479]: time="2026-04-16T01:21:32.121103794Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:21:32.123462 containerd[1479]: time="2026-04-16T01:21:32.123300500Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 7.081980728s" Apr 16 01:21:32.123462 containerd[1479]: time="2026-04-16T01:21:32.123417523Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Apr 16 01:21:32.133076 containerd[1479]: time="2026-04-16T01:21:32.132536591Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 16 01:21:32.284357 containerd[1479]: time="2026-04-16T01:21:32.284160783Z" level=info msg="CreateContainer within sandbox \"96f3f9db0b3647685b3de2259854bd6f99e7e790e2fc21e772a569c85126385c\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 16 01:21:32.330359 containerd[1479]: time="2026-04-16T01:21:32.330306335Z" level=info msg="CreateContainer within sandbox \"96f3f9db0b3647685b3de2259854bd6f99e7e790e2fc21e772a569c85126385c\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id 
\"407adcfeb85f2ea58cc3cb0afdb7fe6b6fc4ac3074e5ae01309abf26a5edf351\"" Apr 16 01:21:32.339897 containerd[1479]: time="2026-04-16T01:21:32.338353220Z" level=info msg="StartContainer for \"407adcfeb85f2ea58cc3cb0afdb7fe6b6fc4ac3074e5ae01309abf26a5edf351\"" Apr 16 01:21:32.572094 systemd[1]: Started cri-containerd-407adcfeb85f2ea58cc3cb0afdb7fe6b6fc4ac3074e5ae01309abf26a5edf351.scope - libcontainer container 407adcfeb85f2ea58cc3cb0afdb7fe6b6fc4ac3074e5ae01309abf26a5edf351. Apr 16 01:21:32.769320 containerd[1479]: time="2026-04-16T01:21:32.769186861Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:21:32.774011 containerd[1479]: time="2026-04-16T01:21:32.771402530Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 16 01:21:32.776605 containerd[1479]: time="2026-04-16T01:21:32.776520317Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 643.631714ms" Apr 16 01:21:32.776605 containerd[1479]: time="2026-04-16T01:21:32.776551415Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 16 01:21:32.784867 containerd[1479]: time="2026-04-16T01:21:32.783384532Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 16 01:21:32.806340 containerd[1479]: time="2026-04-16T01:21:32.806272236Z" level=info msg="CreateContainer within sandbox \"bebdc4a3dd026d34b0ab1bad0cc1d6a39714fbd3072cfb70f3d0a0e0882b732b\" for container 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 16 01:21:32.850273 containerd[1479]: time="2026-04-16T01:21:32.849620962Z" level=info msg="CreateContainer within sandbox \"bebdc4a3dd026d34b0ab1bad0cc1d6a39714fbd3072cfb70f3d0a0e0882b732b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f28d075fb8a722257824209d7fb2c71cf16c054f15f7c343116637a6b830cf7c\"" Apr 16 01:21:32.853301 containerd[1479]: time="2026-04-16T01:21:32.852944879Z" level=info msg="StartContainer for \"f28d075fb8a722257824209d7fb2c71cf16c054f15f7c343116637a6b830cf7c\"" Apr 16 01:21:32.922443 containerd[1479]: time="2026-04-16T01:21:32.920957922Z" level=info msg="StartContainer for \"407adcfeb85f2ea58cc3cb0afdb7fe6b6fc4ac3074e5ae01309abf26a5edf351\" returns successfully" Apr 16 01:21:32.988300 systemd[1]: Started cri-containerd-f28d075fb8a722257824209d7fb2c71cf16c054f15f7c343116637a6b830cf7c.scope - libcontainer container f28d075fb8a722257824209d7fb2c71cf16c054f15f7c343116637a6b830cf7c. 
Apr 16 01:21:33.100172 containerd[1479]: time="2026-04-16T01:21:33.099839487Z" level=info msg="StartContainer for \"f28d075fb8a722257824209d7fb2c71cf16c054f15f7c343116637a6b830cf7c\" returns successfully" Apr 16 01:21:33.672935 kubelet[2548]: I0416 01:21:33.671912 2548 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-8584db774c-7qbkw" podStartSLOduration=58.19813742 podStartE2EDuration="1m11.671899105s" podCreationTimestamp="2026-04-16 01:20:22 +0000 UTC" firstStartedPulling="2026-04-16 01:21:19.307367218 +0000 UTC m=+80.999501511" lastFinishedPulling="2026-04-16 01:21:32.781128911 +0000 UTC m=+94.473263196" observedRunningTime="2026-04-16 01:21:33.670427425 +0000 UTC m=+95.362561711" watchObservedRunningTime="2026-04-16 01:21:33.671899105 +0000 UTC m=+95.364033398" Apr 16 01:21:33.846270 systemd[1]: run-containerd-runc-k8s.io-407adcfeb85f2ea58cc3cb0afdb7fe6b6fc4ac3074e5ae01309abf26a5edf351-runc.LET1Cj.mount: Deactivated successfully. Apr 16 01:21:34.010366 kubelet[2548]: I0416 01:21:34.008666 2548 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-67f84cb8bf-5s4sf" podStartSLOduration=56.055250989 podStartE2EDuration="1m10.008649929s" podCreationTimestamp="2026-04-16 01:20:24 +0000 UTC" firstStartedPulling="2026-04-16 01:21:18.176244133 +0000 UTC m=+79.868378428" lastFinishedPulling="2026-04-16 01:21:32.129643086 +0000 UTC m=+93.821777368" observedRunningTime="2026-04-16 01:21:33.840592613 +0000 UTC m=+95.532726922" watchObservedRunningTime="2026-04-16 01:21:34.008649929 +0000 UTC m=+95.700784224" Apr 16 01:21:37.767572 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount553649879.mount: Deactivated successfully. 
Apr 16 01:21:39.391925 containerd[1479]: time="2026-04-16T01:21:39.391567125Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:21:39.393620 containerd[1479]: time="2026-04-16T01:21:39.393341593Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Apr 16 01:21:39.395625 containerd[1479]: time="2026-04-16T01:21:39.395524872Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:21:39.398304 containerd[1479]: time="2026-04-16T01:21:39.398154943Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:21:39.398961 containerd[1479]: time="2026-04-16T01:21:39.398878098Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 6.615461016s" Apr 16 01:21:39.399004 containerd[1479]: time="2026-04-16T01:21:39.398962749Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Apr 16 01:21:39.403143 containerd[1479]: time="2026-04-16T01:21:39.402894478Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 16 01:21:39.414651 containerd[1479]: time="2026-04-16T01:21:39.414155110Z" level=info msg="CreateContainer within sandbox 
\"30f697593804a8b63f8d851c88e78390e06b34b9a78f8391d0bf2e626c56f499\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 16 01:21:39.444915 containerd[1479]: time="2026-04-16T01:21:39.444341834Z" level=info msg="CreateContainer within sandbox \"30f697593804a8b63f8d851c88e78390e06b34b9a78f8391d0bf2e626c56f499\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"2fb7a90a7d3892c694c77b2c85fd0450372c54fa7f4c56bffedc6bca89c85d93\"" Apr 16 01:21:39.444915 containerd[1479]: time="2026-04-16T01:21:39.445176948Z" level=info msg="StartContainer for \"2fb7a90a7d3892c694c77b2c85fd0450372c54fa7f4c56bffedc6bca89c85d93\"" Apr 16 01:21:39.541086 systemd[1]: Started cri-containerd-2fb7a90a7d3892c694c77b2c85fd0450372c54fa7f4c56bffedc6bca89c85d93.scope - libcontainer container 2fb7a90a7d3892c694c77b2c85fd0450372c54fa7f4c56bffedc6bca89c85d93. Apr 16 01:21:39.745145 containerd[1479]: time="2026-04-16T01:21:39.744897192Z" level=info msg="StartContainer for \"2fb7a90a7d3892c694c77b2c85fd0450372c54fa7f4c56bffedc6bca89c85d93\" returns successfully" Apr 16 01:21:41.009181 systemd[1]: run-containerd-runc-k8s.io-2fb7a90a7d3892c694c77b2c85fd0450372c54fa7f4c56bffedc6bca89c85d93-runc.GWnGfk.mount: Deactivated successfully. 
Apr 16 01:21:41.737948 kubelet[2548]: E0416 01:21:41.737479 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:21:43.721496 containerd[1479]: time="2026-04-16T01:21:43.721388302Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:21:43.724848 containerd[1479]: time="2026-04-16T01:21:43.724788684Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Apr 16 01:21:43.730550 containerd[1479]: time="2026-04-16T01:21:43.730498386Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:21:43.741929 containerd[1479]: time="2026-04-16T01:21:43.741037756Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 01:21:43.742489 containerd[1479]: time="2026-04-16T01:21:43.742402894Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 4.33947166s" Apr 16 01:21:43.742489 containerd[1479]: time="2026-04-16T01:21:43.742486969Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Apr 16 
01:21:43.853342 containerd[1479]: time="2026-04-16T01:21:43.852516433Z" level=info msg="CreateContainer within sandbox \"efb5872695abcbc92e456b79354ea473ba0e4eb2df861d651ea31ddb1e616942\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 16 01:21:43.909549 containerd[1479]: time="2026-04-16T01:21:43.909300866Z" level=info msg="CreateContainer within sandbox \"efb5872695abcbc92e456b79354ea473ba0e4eb2df861d651ea31ddb1e616942\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"5a6db0f9cfd3a88fd80f563f147e0891272bc0e8365521db932af4d31b671ce1\"" Apr 16 01:21:43.918100 containerd[1479]: time="2026-04-16T01:21:43.917446017Z" level=info msg="StartContainer for \"5a6db0f9cfd3a88fd80f563f147e0891272bc0e8365521db932af4d31b671ce1\"" Apr 16 01:21:44.106443 systemd[1]: Started cri-containerd-5a6db0f9cfd3a88fd80f563f147e0891272bc0e8365521db932af4d31b671ce1.scope - libcontainer container 5a6db0f9cfd3a88fd80f563f147e0891272bc0e8365521db932af4d31b671ce1. 
Apr 16 01:21:44.331967 containerd[1479]: time="2026-04-16T01:21:44.331929562Z" level=info msg="StartContainer for \"5a6db0f9cfd3a88fd80f563f147e0891272bc0e8365521db932af4d31b671ce1\" returns successfully" Apr 16 01:21:45.151015 kubelet[2548]: I0416 01:21:45.149847 2548 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/csi-node-driver-b2mtc" podStartSLOduration=55.534529317 podStartE2EDuration="1m22.149608739s" podCreationTimestamp="2026-04-16 01:20:23 +0000 UTC" firstStartedPulling="2026-04-16 01:21:17.130889012 +0000 UTC m=+78.823023294" lastFinishedPulling="2026-04-16 01:21:43.745968428 +0000 UTC m=+105.438102716" observedRunningTime="2026-04-16 01:21:45.144195379 +0000 UTC m=+106.836329671" watchObservedRunningTime="2026-04-16 01:21:45.149608739 +0000 UTC m=+106.841743033" Apr 16 01:21:45.151015 kubelet[2548]: I0416 01:21:45.150455 2548 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/goldmane-9f7667bb8-t5lq8" podStartSLOduration=64.44137562 podStartE2EDuration="1m22.150442944s" podCreationTimestamp="2026-04-16 01:20:23 +0000 UTC" firstStartedPulling="2026-04-16 01:21:21.693392177 +0000 UTC m=+83.385526462" lastFinishedPulling="2026-04-16 01:21:39.402459504 +0000 UTC m=+101.094593786" observedRunningTime="2026-04-16 01:21:39.920573024 +0000 UTC m=+101.612707307" watchObservedRunningTime="2026-04-16 01:21:45.150442944 +0000 UTC m=+106.842577241" Apr 16 01:21:45.621890 kubelet[2548]: I0416 01:21:45.621468 2548 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 16 01:21:45.623070 kubelet[2548]: I0416 01:21:45.622914 2548 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 16 01:21:45.729572 kubelet[2548]: E0416 01:21:45.729431 2548 dns.go:154] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:21:58.614346 containerd[1479]: time="2026-04-16T01:21:58.613583308Z" level=info msg="StopPodSandbox for \"d0f609cf62ba2451b793e0f169919cf887db6d1e23320a4807fe5b50180f1640\"" Apr 16 01:21:59.310985 containerd[1479]: 2026-04-16 01:21:58.848 [WARNING][5783] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d0f609cf62ba2451b793e0f169919cf887db6d1e23320a4807fe5b50180f1640" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9f7667bb8--t5lq8-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"394e94ea-67c5-464b-b3aa-188a9710b888", ResourceVersion:"1203", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 20, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"30f697593804a8b63f8d851c88e78390e06b34b9a78f8391d0bf2e626c56f499", Pod:"goldmane-9f7667bb8-t5lq8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8f28540cdf2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:21:59.310985 containerd[1479]: 2026-04-16 01:21:58.852 [INFO][5783] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d0f609cf62ba2451b793e0f169919cf887db6d1e23320a4807fe5b50180f1640" Apr 16 01:21:59.310985 containerd[1479]: 2026-04-16 01:21:58.852 [INFO][5783] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d0f609cf62ba2451b793e0f169919cf887db6d1e23320a4807fe5b50180f1640" iface="eth0" netns="" Apr 16 01:21:59.310985 containerd[1479]: 2026-04-16 01:21:58.852 [INFO][5783] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d0f609cf62ba2451b793e0f169919cf887db6d1e23320a4807fe5b50180f1640" Apr 16 01:21:59.310985 containerd[1479]: 2026-04-16 01:21:58.852 [INFO][5783] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d0f609cf62ba2451b793e0f169919cf887db6d1e23320a4807fe5b50180f1640" Apr 16 01:21:59.310985 containerd[1479]: 2026-04-16 01:21:59.240 [INFO][5794] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d0f609cf62ba2451b793e0f169919cf887db6d1e23320a4807fe5b50180f1640" HandleID="k8s-pod-network.d0f609cf62ba2451b793e0f169919cf887db6d1e23320a4807fe5b50180f1640" Workload="localhost-k8s-goldmane--9f7667bb8--t5lq8-eth0" Apr 16 01:21:59.310985 containerd[1479]: 2026-04-16 01:21:59.242 [INFO][5794] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:21:59.310985 containerd[1479]: 2026-04-16 01:21:59.243 [INFO][5794] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 01:21:59.310985 containerd[1479]: 2026-04-16 01:21:59.287 [WARNING][5794] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d0f609cf62ba2451b793e0f169919cf887db6d1e23320a4807fe5b50180f1640" HandleID="k8s-pod-network.d0f609cf62ba2451b793e0f169919cf887db6d1e23320a4807fe5b50180f1640" Workload="localhost-k8s-goldmane--9f7667bb8--t5lq8-eth0" Apr 16 01:21:59.310985 containerd[1479]: 2026-04-16 01:21:59.287 [INFO][5794] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d0f609cf62ba2451b793e0f169919cf887db6d1e23320a4807fe5b50180f1640" HandleID="k8s-pod-network.d0f609cf62ba2451b793e0f169919cf887db6d1e23320a4807fe5b50180f1640" Workload="localhost-k8s-goldmane--9f7667bb8--t5lq8-eth0" Apr 16 01:21:59.310985 containerd[1479]: 2026-04-16 01:21:59.297 [INFO][5794] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 01:21:59.310985 containerd[1479]: 2026-04-16 01:21:59.307 [INFO][5783] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d0f609cf62ba2451b793e0f169919cf887db6d1e23320a4807fe5b50180f1640" Apr 16 01:21:59.326033 containerd[1479]: time="2026-04-16T01:21:59.325245040Z" level=info msg="TearDown network for sandbox \"d0f609cf62ba2451b793e0f169919cf887db6d1e23320a4807fe5b50180f1640\" successfully" Apr 16 01:21:59.326033 containerd[1479]: time="2026-04-16T01:21:59.325378516Z" level=info msg="StopPodSandbox for \"d0f609cf62ba2451b793e0f169919cf887db6d1e23320a4807fe5b50180f1640\" returns successfully" Apr 16 01:21:59.475509 containerd[1479]: time="2026-04-16T01:21:59.474963899Z" level=info msg="RemovePodSandbox for \"d0f609cf62ba2451b793e0f169919cf887db6d1e23320a4807fe5b50180f1640\"" Apr 16 01:21:59.490192 containerd[1479]: time="2026-04-16T01:21:59.489119427Z" level=info msg="Forcibly stopping sandbox \"d0f609cf62ba2451b793e0f169919cf887db6d1e23320a4807fe5b50180f1640\"" Apr 16 01:22:00.135155 containerd[1479]: 2026-04-16 01:21:59.843 [WARNING][5813] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d0f609cf62ba2451b793e0f169919cf887db6d1e23320a4807fe5b50180f1640" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9f7667bb8--t5lq8-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"394e94ea-67c5-464b-b3aa-188a9710b888", ResourceVersion:"1203", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 20, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"30f697593804a8b63f8d851c88e78390e06b34b9a78f8391d0bf2e626c56f499", Pod:"goldmane-9f7667bb8-t5lq8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8f28540cdf2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:22:00.135155 containerd[1479]: 2026-04-16 01:21:59.849 [INFO][5813] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d0f609cf62ba2451b793e0f169919cf887db6d1e23320a4807fe5b50180f1640" Apr 16 01:22:00.135155 containerd[1479]: 2026-04-16 01:21:59.849 [INFO][5813] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d0f609cf62ba2451b793e0f169919cf887db6d1e23320a4807fe5b50180f1640" iface="eth0" netns="" Apr 16 01:22:00.135155 containerd[1479]: 2026-04-16 01:21:59.849 [INFO][5813] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d0f609cf62ba2451b793e0f169919cf887db6d1e23320a4807fe5b50180f1640" Apr 16 01:22:00.135155 containerd[1479]: 2026-04-16 01:21:59.849 [INFO][5813] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d0f609cf62ba2451b793e0f169919cf887db6d1e23320a4807fe5b50180f1640" Apr 16 01:22:00.135155 containerd[1479]: 2026-04-16 01:22:00.063 [INFO][5822] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d0f609cf62ba2451b793e0f169919cf887db6d1e23320a4807fe5b50180f1640" HandleID="k8s-pod-network.d0f609cf62ba2451b793e0f169919cf887db6d1e23320a4807fe5b50180f1640" Workload="localhost-k8s-goldmane--9f7667bb8--t5lq8-eth0" Apr 16 01:22:00.135155 containerd[1479]: 2026-04-16 01:22:00.068 [INFO][5822] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:22:00.135155 containerd[1479]: 2026-04-16 01:22:00.068 [INFO][5822] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 01:22:00.135155 containerd[1479]: 2026-04-16 01:22:00.097 [WARNING][5822] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d0f609cf62ba2451b793e0f169919cf887db6d1e23320a4807fe5b50180f1640" HandleID="k8s-pod-network.d0f609cf62ba2451b793e0f169919cf887db6d1e23320a4807fe5b50180f1640" Workload="localhost-k8s-goldmane--9f7667bb8--t5lq8-eth0" Apr 16 01:22:00.135155 containerd[1479]: 2026-04-16 01:22:00.097 [INFO][5822] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d0f609cf62ba2451b793e0f169919cf887db6d1e23320a4807fe5b50180f1640" HandleID="k8s-pod-network.d0f609cf62ba2451b793e0f169919cf887db6d1e23320a4807fe5b50180f1640" Workload="localhost-k8s-goldmane--9f7667bb8--t5lq8-eth0" Apr 16 01:22:00.135155 containerd[1479]: 2026-04-16 01:22:00.111 [INFO][5822] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 01:22:00.135155 containerd[1479]: 2026-04-16 01:22:00.127 [INFO][5813] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d0f609cf62ba2451b793e0f169919cf887db6d1e23320a4807fe5b50180f1640" Apr 16 01:22:00.145563 containerd[1479]: time="2026-04-16T01:22:00.135431768Z" level=info msg="TearDown network for sandbox \"d0f609cf62ba2451b793e0f169919cf887db6d1e23320a4807fe5b50180f1640\" successfully" Apr 16 01:22:00.247807 containerd[1479]: time="2026-04-16T01:22:00.247389184Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d0f609cf62ba2451b793e0f169919cf887db6d1e23320a4807fe5b50180f1640\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 16 01:22:00.248834 containerd[1479]: time="2026-04-16T01:22:00.248062682Z" level=info msg="RemovePodSandbox \"d0f609cf62ba2451b793e0f169919cf887db6d1e23320a4807fe5b50180f1640\" returns successfully" Apr 16 01:22:00.272589 containerd[1479]: time="2026-04-16T01:22:00.272272558Z" level=info msg="StopPodSandbox for \"a59e0e2c5772fb35384b79900afb83bb329d28e9084a0357b45aa02a43a113e8\"" Apr 16 01:22:00.662923 containerd[1479]: 2026-04-16 01:22:00.435 [WARNING][5840] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a59e0e2c5772fb35384b79900afb83bb329d28e9084a0357b45aa02a43a113e8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--67f84cb8bf--5s4sf-eth0", GenerateName:"calico-kube-controllers-67f84cb8bf-", Namespace:"calico-system", SelfLink:"", UID:"822d1a89-65ea-4c5d-be46-0a8d4c12be69", ResourceVersion:"1171", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 20, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67f84cb8bf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"96f3f9db0b3647685b3de2259854bd6f99e7e790e2fc21e772a569c85126385c", Pod:"calico-kube-controllers-67f84cb8bf-5s4sf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliefc81431c55", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:22:00.662923 containerd[1479]: 2026-04-16 01:22:00.437 [INFO][5840] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a59e0e2c5772fb35384b79900afb83bb329d28e9084a0357b45aa02a43a113e8" Apr 16 01:22:00.662923 containerd[1479]: 2026-04-16 01:22:00.437 [INFO][5840] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a59e0e2c5772fb35384b79900afb83bb329d28e9084a0357b45aa02a43a113e8" iface="eth0" netns="" Apr 16 01:22:00.662923 containerd[1479]: 2026-04-16 01:22:00.437 [INFO][5840] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a59e0e2c5772fb35384b79900afb83bb329d28e9084a0357b45aa02a43a113e8" Apr 16 01:22:00.662923 containerd[1479]: 2026-04-16 01:22:00.437 [INFO][5840] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a59e0e2c5772fb35384b79900afb83bb329d28e9084a0357b45aa02a43a113e8" Apr 16 01:22:00.662923 containerd[1479]: 2026-04-16 01:22:00.610 [INFO][5848] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a59e0e2c5772fb35384b79900afb83bb329d28e9084a0357b45aa02a43a113e8" HandleID="k8s-pod-network.a59e0e2c5772fb35384b79900afb83bb329d28e9084a0357b45aa02a43a113e8" Workload="localhost-k8s-calico--kube--controllers--67f84cb8bf--5s4sf-eth0" Apr 16 01:22:00.662923 containerd[1479]: 2026-04-16 01:22:00.611 [INFO][5848] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:22:00.662923 containerd[1479]: 2026-04-16 01:22:00.611 [INFO][5848] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 01:22:00.662923 containerd[1479]: 2026-04-16 01:22:00.636 [WARNING][5848] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a59e0e2c5772fb35384b79900afb83bb329d28e9084a0357b45aa02a43a113e8" HandleID="k8s-pod-network.a59e0e2c5772fb35384b79900afb83bb329d28e9084a0357b45aa02a43a113e8" Workload="localhost-k8s-calico--kube--controllers--67f84cb8bf--5s4sf-eth0" Apr 16 01:22:00.662923 containerd[1479]: 2026-04-16 01:22:00.640 [INFO][5848] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a59e0e2c5772fb35384b79900afb83bb329d28e9084a0357b45aa02a43a113e8" HandleID="k8s-pod-network.a59e0e2c5772fb35384b79900afb83bb329d28e9084a0357b45aa02a43a113e8" Workload="localhost-k8s-calico--kube--controllers--67f84cb8bf--5s4sf-eth0" Apr 16 01:22:00.662923 containerd[1479]: 2026-04-16 01:22:00.649 [INFO][5848] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 01:22:00.662923 containerd[1479]: 2026-04-16 01:22:00.658 [INFO][5840] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a59e0e2c5772fb35384b79900afb83bb329d28e9084a0357b45aa02a43a113e8" Apr 16 01:22:00.664088 containerd[1479]: time="2026-04-16T01:22:00.663082602Z" level=info msg="TearDown network for sandbox \"a59e0e2c5772fb35384b79900afb83bb329d28e9084a0357b45aa02a43a113e8\" successfully" Apr 16 01:22:00.664088 containerd[1479]: time="2026-04-16T01:22:00.663105518Z" level=info msg="StopPodSandbox for \"a59e0e2c5772fb35384b79900afb83bb329d28e9084a0357b45aa02a43a113e8\" returns successfully" Apr 16 01:22:00.664278 containerd[1479]: time="2026-04-16T01:22:00.664177198Z" level=info msg="RemovePodSandbox for \"a59e0e2c5772fb35384b79900afb83bb329d28e9084a0357b45aa02a43a113e8\"" Apr 16 01:22:00.664420 containerd[1479]: time="2026-04-16T01:22:00.664280201Z" level=info msg="Forcibly stopping sandbox \"a59e0e2c5772fb35384b79900afb83bb329d28e9084a0357b45aa02a43a113e8\"" Apr 16 01:22:01.041218 containerd[1479]: 2026-04-16 01:22:00.887 [WARNING][5866] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a59e0e2c5772fb35384b79900afb83bb329d28e9084a0357b45aa02a43a113e8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--67f84cb8bf--5s4sf-eth0", GenerateName:"calico-kube-controllers-67f84cb8bf-", Namespace:"calico-system", SelfLink:"", UID:"822d1a89-65ea-4c5d-be46-0a8d4c12be69", ResourceVersion:"1171", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 20, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67f84cb8bf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"96f3f9db0b3647685b3de2259854bd6f99e7e790e2fc21e772a569c85126385c", Pod:"calico-kube-controllers-67f84cb8bf-5s4sf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliefc81431c55", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:22:01.041218 containerd[1479]: 2026-04-16 01:22:00.889 [INFO][5866] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a59e0e2c5772fb35384b79900afb83bb329d28e9084a0357b45aa02a43a113e8" Apr 16 01:22:01.041218 containerd[1479]: 2026-04-16 01:22:00.889 [INFO][5866] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="a59e0e2c5772fb35384b79900afb83bb329d28e9084a0357b45aa02a43a113e8" iface="eth0" netns="" Apr 16 01:22:01.041218 containerd[1479]: 2026-04-16 01:22:00.889 [INFO][5866] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a59e0e2c5772fb35384b79900afb83bb329d28e9084a0357b45aa02a43a113e8" Apr 16 01:22:01.041218 containerd[1479]: 2026-04-16 01:22:00.889 [INFO][5866] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a59e0e2c5772fb35384b79900afb83bb329d28e9084a0357b45aa02a43a113e8" Apr 16 01:22:01.041218 containerd[1479]: 2026-04-16 01:22:00.995 [INFO][5875] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a59e0e2c5772fb35384b79900afb83bb329d28e9084a0357b45aa02a43a113e8" HandleID="k8s-pod-network.a59e0e2c5772fb35384b79900afb83bb329d28e9084a0357b45aa02a43a113e8" Workload="localhost-k8s-calico--kube--controllers--67f84cb8bf--5s4sf-eth0" Apr 16 01:22:01.041218 containerd[1479]: 2026-04-16 01:22:00.996 [INFO][5875] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:22:01.041218 containerd[1479]: 2026-04-16 01:22:00.996 [INFO][5875] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 01:22:01.041218 containerd[1479]: 2026-04-16 01:22:01.018 [WARNING][5875] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a59e0e2c5772fb35384b79900afb83bb329d28e9084a0357b45aa02a43a113e8" HandleID="k8s-pod-network.a59e0e2c5772fb35384b79900afb83bb329d28e9084a0357b45aa02a43a113e8" Workload="localhost-k8s-calico--kube--controllers--67f84cb8bf--5s4sf-eth0" Apr 16 01:22:01.041218 containerd[1479]: 2026-04-16 01:22:01.018 [INFO][5875] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a59e0e2c5772fb35384b79900afb83bb329d28e9084a0357b45aa02a43a113e8" HandleID="k8s-pod-network.a59e0e2c5772fb35384b79900afb83bb329d28e9084a0357b45aa02a43a113e8" Workload="localhost-k8s-calico--kube--controllers--67f84cb8bf--5s4sf-eth0" Apr 16 01:22:01.041218 containerd[1479]: 2026-04-16 01:22:01.029 [INFO][5875] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 01:22:01.041218 containerd[1479]: 2026-04-16 01:22:01.033 [INFO][5866] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a59e0e2c5772fb35384b79900afb83bb329d28e9084a0357b45aa02a43a113e8" Apr 16 01:22:01.041218 containerd[1479]: time="2026-04-16T01:22:01.040654077Z" level=info msg="TearDown network for sandbox \"a59e0e2c5772fb35384b79900afb83bb329d28e9084a0357b45aa02a43a113e8\" successfully" Apr 16 01:22:01.054632 containerd[1479]: time="2026-04-16T01:22:01.054433695Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a59e0e2c5772fb35384b79900afb83bb329d28e9084a0357b45aa02a43a113e8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 16 01:22:01.054632 containerd[1479]: time="2026-04-16T01:22:01.054820849Z" level=info msg="RemovePodSandbox \"a59e0e2c5772fb35384b79900afb83bb329d28e9084a0357b45aa02a43a113e8\" returns successfully" Apr 16 01:22:01.056121 containerd[1479]: time="2026-04-16T01:22:01.055967719Z" level=info msg="StopPodSandbox for \"21e8ff37ca63ead3bf838e9da5da31fa764cfb74c3028b6482f0b847ac03c49e\"" Apr 16 01:22:01.551609 containerd[1479]: 2026-04-16 01:22:01.288 [WARNING][5893] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="21e8ff37ca63ead3bf838e9da5da31fa764cfb74c3028b6482f0b847ac03c49e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--b2mtc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"36e0b1c3-35d3-4f7d-a631-c6ac0e723311", ResourceVersion:"1235", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 20, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"efb5872695abcbc92e456b79354ea473ba0e4eb2df861d651ea31ddb1e616942", Pod:"csi-node-driver-b2mtc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0d98d68a14f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:22:01.551609 containerd[1479]: 2026-04-16 01:22:01.289 [INFO][5893] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="21e8ff37ca63ead3bf838e9da5da31fa764cfb74c3028b6482f0b847ac03c49e" Apr 16 01:22:01.551609 containerd[1479]: 2026-04-16 01:22:01.289 [INFO][5893] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="21e8ff37ca63ead3bf838e9da5da31fa764cfb74c3028b6482f0b847ac03c49e" iface="eth0" netns="" Apr 16 01:22:01.551609 containerd[1479]: 2026-04-16 01:22:01.289 [INFO][5893] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="21e8ff37ca63ead3bf838e9da5da31fa764cfb74c3028b6482f0b847ac03c49e" Apr 16 01:22:01.551609 containerd[1479]: 2026-04-16 01:22:01.289 [INFO][5893] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="21e8ff37ca63ead3bf838e9da5da31fa764cfb74c3028b6482f0b847ac03c49e" Apr 16 01:22:01.551609 containerd[1479]: 2026-04-16 01:22:01.493 [INFO][5901] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="21e8ff37ca63ead3bf838e9da5da31fa764cfb74c3028b6482f0b847ac03c49e" HandleID="k8s-pod-network.21e8ff37ca63ead3bf838e9da5da31fa764cfb74c3028b6482f0b847ac03c49e" Workload="localhost-k8s-csi--node--driver--b2mtc-eth0" Apr 16 01:22:01.551609 containerd[1479]: 2026-04-16 01:22:01.494 [INFO][5901] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:22:01.551609 containerd[1479]: 2026-04-16 01:22:01.494 [INFO][5901] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 01:22:01.551609 containerd[1479]: 2026-04-16 01:22:01.521 [WARNING][5901] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="21e8ff37ca63ead3bf838e9da5da31fa764cfb74c3028b6482f0b847ac03c49e" HandleID="k8s-pod-network.21e8ff37ca63ead3bf838e9da5da31fa764cfb74c3028b6482f0b847ac03c49e" Workload="localhost-k8s-csi--node--driver--b2mtc-eth0" Apr 16 01:22:01.551609 containerd[1479]: 2026-04-16 01:22:01.521 [INFO][5901] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="21e8ff37ca63ead3bf838e9da5da31fa764cfb74c3028b6482f0b847ac03c49e" HandleID="k8s-pod-network.21e8ff37ca63ead3bf838e9da5da31fa764cfb74c3028b6482f0b847ac03c49e" Workload="localhost-k8s-csi--node--driver--b2mtc-eth0" Apr 16 01:22:01.551609 containerd[1479]: 2026-04-16 01:22:01.535 [INFO][5901] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 01:22:01.551609 containerd[1479]: 2026-04-16 01:22:01.542 [INFO][5893] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="21e8ff37ca63ead3bf838e9da5da31fa764cfb74c3028b6482f0b847ac03c49e" Apr 16 01:22:01.559617 containerd[1479]: time="2026-04-16T01:22:01.551825998Z" level=info msg="TearDown network for sandbox \"21e8ff37ca63ead3bf838e9da5da31fa764cfb74c3028b6482f0b847ac03c49e\" successfully" Apr 16 01:22:01.559617 containerd[1479]: time="2026-04-16T01:22:01.551870983Z" level=info msg="StopPodSandbox for \"21e8ff37ca63ead3bf838e9da5da31fa764cfb74c3028b6482f0b847ac03c49e\" returns successfully" Apr 16 01:22:01.559617 containerd[1479]: time="2026-04-16T01:22:01.556118917Z" level=info msg="RemovePodSandbox for \"21e8ff37ca63ead3bf838e9da5da31fa764cfb74c3028b6482f0b847ac03c49e\"" Apr 16 01:22:01.559617 containerd[1479]: time="2026-04-16T01:22:01.556154753Z" level=info msg="Forcibly stopping sandbox \"21e8ff37ca63ead3bf838e9da5da31fa764cfb74c3028b6482f0b847ac03c49e\"" Apr 16 01:22:01.890844 containerd[1479]: 2026-04-16 01:22:01.727 [WARNING][5923] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="21e8ff37ca63ead3bf838e9da5da31fa764cfb74c3028b6482f0b847ac03c49e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--b2mtc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"36e0b1c3-35d3-4f7d-a631-c6ac0e723311", ResourceVersion:"1235", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 20, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"efb5872695abcbc92e456b79354ea473ba0e4eb2df861d651ea31ddb1e616942", Pod:"csi-node-driver-b2mtc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0d98d68a14f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:22:01.890844 containerd[1479]: 2026-04-16 01:22:01.729 [INFO][5923] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="21e8ff37ca63ead3bf838e9da5da31fa764cfb74c3028b6482f0b847ac03c49e" Apr 16 01:22:01.890844 containerd[1479]: 2026-04-16 01:22:01.729 [INFO][5923] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="21e8ff37ca63ead3bf838e9da5da31fa764cfb74c3028b6482f0b847ac03c49e" iface="eth0" netns="" Apr 16 01:22:01.890844 containerd[1479]: 2026-04-16 01:22:01.729 [INFO][5923] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="21e8ff37ca63ead3bf838e9da5da31fa764cfb74c3028b6482f0b847ac03c49e" Apr 16 01:22:01.890844 containerd[1479]: 2026-04-16 01:22:01.729 [INFO][5923] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="21e8ff37ca63ead3bf838e9da5da31fa764cfb74c3028b6482f0b847ac03c49e" Apr 16 01:22:01.890844 containerd[1479]: 2026-04-16 01:22:01.852 [INFO][5934] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="21e8ff37ca63ead3bf838e9da5da31fa764cfb74c3028b6482f0b847ac03c49e" HandleID="k8s-pod-network.21e8ff37ca63ead3bf838e9da5da31fa764cfb74c3028b6482f0b847ac03c49e" Workload="localhost-k8s-csi--node--driver--b2mtc-eth0" Apr 16 01:22:01.890844 containerd[1479]: 2026-04-16 01:22:01.853 [INFO][5934] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:22:01.890844 containerd[1479]: 2026-04-16 01:22:01.853 [INFO][5934] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 01:22:01.890844 containerd[1479]: 2026-04-16 01:22:01.869 [WARNING][5934] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="21e8ff37ca63ead3bf838e9da5da31fa764cfb74c3028b6482f0b847ac03c49e" HandleID="k8s-pod-network.21e8ff37ca63ead3bf838e9da5da31fa764cfb74c3028b6482f0b847ac03c49e" Workload="localhost-k8s-csi--node--driver--b2mtc-eth0" Apr 16 01:22:01.890844 containerd[1479]: 2026-04-16 01:22:01.869 [INFO][5934] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="21e8ff37ca63ead3bf838e9da5da31fa764cfb74c3028b6482f0b847ac03c49e" HandleID="k8s-pod-network.21e8ff37ca63ead3bf838e9da5da31fa764cfb74c3028b6482f0b847ac03c49e" Workload="localhost-k8s-csi--node--driver--b2mtc-eth0" Apr 16 01:22:01.890844 containerd[1479]: 2026-04-16 01:22:01.881 [INFO][5934] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 01:22:01.890844 containerd[1479]: 2026-04-16 01:22:01.886 [INFO][5923] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="21e8ff37ca63ead3bf838e9da5da31fa764cfb74c3028b6482f0b847ac03c49e" Apr 16 01:22:01.897405 containerd[1479]: time="2026-04-16T01:22:01.891664989Z" level=info msg="TearDown network for sandbox \"21e8ff37ca63ead3bf838e9da5da31fa764cfb74c3028b6482f0b847ac03c49e\" successfully" Apr 16 01:22:01.913257 containerd[1479]: time="2026-04-16T01:22:01.912975297Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"21e8ff37ca63ead3bf838e9da5da31fa764cfb74c3028b6482f0b847ac03c49e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 16 01:22:01.913257 containerd[1479]: time="2026-04-16T01:22:01.913220010Z" level=info msg="RemovePodSandbox \"21e8ff37ca63ead3bf838e9da5da31fa764cfb74c3028b6482f0b847ac03c49e\" returns successfully" Apr 16 01:22:01.914619 containerd[1479]: time="2026-04-16T01:22:01.914201373Z" level=info msg="StopPodSandbox for \"dc960b7b761a84e0f5415d6c72b400955d208003b44ac8de1a397a445a12a53d\"" Apr 16 01:22:02.319514 containerd[1479]: 2026-04-16 01:22:02.166 [WARNING][5952] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="dc960b7b761a84e0f5415d6c72b400955d208003b44ac8de1a397a445a12a53d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8584db774c--nfctt-eth0", GenerateName:"calico-apiserver-8584db774c-", Namespace:"calico-system", SelfLink:"", UID:"794b6563-cebd-4102-a96c-e20a75901f97", ResourceVersion:"1130", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 20, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8584db774c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"94110af7b9c85f3522b2cad3bb9e9b317bece1a792cf194ea522693485409d4e", Pod:"calico-apiserver-8584db774c-nfctt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calie67cbe68a78", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:22:02.319514 containerd[1479]: 2026-04-16 01:22:02.169 [INFO][5952] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="dc960b7b761a84e0f5415d6c72b400955d208003b44ac8de1a397a445a12a53d" Apr 16 01:22:02.319514 containerd[1479]: 2026-04-16 01:22:02.169 [INFO][5952] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dc960b7b761a84e0f5415d6c72b400955d208003b44ac8de1a397a445a12a53d" iface="eth0" netns="" Apr 16 01:22:02.319514 containerd[1479]: 2026-04-16 01:22:02.169 [INFO][5952] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="dc960b7b761a84e0f5415d6c72b400955d208003b44ac8de1a397a445a12a53d" Apr 16 01:22:02.319514 containerd[1479]: 2026-04-16 01:22:02.169 [INFO][5952] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="dc960b7b761a84e0f5415d6c72b400955d208003b44ac8de1a397a445a12a53d" Apr 16 01:22:02.319514 containerd[1479]: 2026-04-16 01:22:02.274 [INFO][5971] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="dc960b7b761a84e0f5415d6c72b400955d208003b44ac8de1a397a445a12a53d" HandleID="k8s-pod-network.dc960b7b761a84e0f5415d6c72b400955d208003b44ac8de1a397a445a12a53d" Workload="localhost-k8s-calico--apiserver--8584db774c--nfctt-eth0" Apr 16 01:22:02.319514 containerd[1479]: 2026-04-16 01:22:02.275 [INFO][5971] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:22:02.319514 containerd[1479]: 2026-04-16 01:22:02.275 [INFO][5971] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 01:22:02.319514 containerd[1479]: 2026-04-16 01:22:02.297 [WARNING][5971] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dc960b7b761a84e0f5415d6c72b400955d208003b44ac8de1a397a445a12a53d" HandleID="k8s-pod-network.dc960b7b761a84e0f5415d6c72b400955d208003b44ac8de1a397a445a12a53d" Workload="localhost-k8s-calico--apiserver--8584db774c--nfctt-eth0" Apr 16 01:22:02.319514 containerd[1479]: 2026-04-16 01:22:02.297 [INFO][5971] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="dc960b7b761a84e0f5415d6c72b400955d208003b44ac8de1a397a445a12a53d" HandleID="k8s-pod-network.dc960b7b761a84e0f5415d6c72b400955d208003b44ac8de1a397a445a12a53d" Workload="localhost-k8s-calico--apiserver--8584db774c--nfctt-eth0" Apr 16 01:22:02.319514 containerd[1479]: 2026-04-16 01:22:02.308 [INFO][5971] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 01:22:02.319514 containerd[1479]: 2026-04-16 01:22:02.314 [INFO][5952] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="dc960b7b761a84e0f5415d6c72b400955d208003b44ac8de1a397a445a12a53d" Apr 16 01:22:02.319514 containerd[1479]: time="2026-04-16T01:22:02.318959690Z" level=info msg="TearDown network for sandbox \"dc960b7b761a84e0f5415d6c72b400955d208003b44ac8de1a397a445a12a53d\" successfully" Apr 16 01:22:02.319514 containerd[1479]: time="2026-04-16T01:22:02.318980935Z" level=info msg="StopPodSandbox for \"dc960b7b761a84e0f5415d6c72b400955d208003b44ac8de1a397a445a12a53d\" returns successfully" Apr 16 01:22:02.322434 containerd[1479]: time="2026-04-16T01:22:02.322196080Z" level=info msg="RemovePodSandbox for \"dc960b7b761a84e0f5415d6c72b400955d208003b44ac8de1a397a445a12a53d\"" Apr 16 01:22:02.322434 containerd[1479]: time="2026-04-16T01:22:02.322220170Z" level=info msg="Forcibly stopping sandbox \"dc960b7b761a84e0f5415d6c72b400955d208003b44ac8de1a397a445a12a53d\"" Apr 16 01:22:02.807606 containerd[1479]: 2026-04-16 01:22:02.494 [WARNING][6021] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dc960b7b761a84e0f5415d6c72b400955d208003b44ac8de1a397a445a12a53d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8584db774c--nfctt-eth0", GenerateName:"calico-apiserver-8584db774c-", Namespace:"calico-system", SelfLink:"", UID:"794b6563-cebd-4102-a96c-e20a75901f97", ResourceVersion:"1130", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 20, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8584db774c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"94110af7b9c85f3522b2cad3bb9e9b317bece1a792cf194ea522693485409d4e", Pod:"calico-apiserver-8584db774c-nfctt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calie67cbe68a78", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:22:02.807606 containerd[1479]: 2026-04-16 01:22:02.496 [INFO][6021] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="dc960b7b761a84e0f5415d6c72b400955d208003b44ac8de1a397a445a12a53d" Apr 16 01:22:02.807606 containerd[1479]: 2026-04-16 01:22:02.497 [INFO][6021] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="dc960b7b761a84e0f5415d6c72b400955d208003b44ac8de1a397a445a12a53d" iface="eth0" netns="" Apr 16 01:22:02.807606 containerd[1479]: 2026-04-16 01:22:02.497 [INFO][6021] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="dc960b7b761a84e0f5415d6c72b400955d208003b44ac8de1a397a445a12a53d" Apr 16 01:22:02.807606 containerd[1479]: 2026-04-16 01:22:02.497 [INFO][6021] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="dc960b7b761a84e0f5415d6c72b400955d208003b44ac8de1a397a445a12a53d" Apr 16 01:22:02.807606 containerd[1479]: 2026-04-16 01:22:02.618 [INFO][6031] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="dc960b7b761a84e0f5415d6c72b400955d208003b44ac8de1a397a445a12a53d" HandleID="k8s-pod-network.dc960b7b761a84e0f5415d6c72b400955d208003b44ac8de1a397a445a12a53d" Workload="localhost-k8s-calico--apiserver--8584db774c--nfctt-eth0" Apr 16 01:22:02.807606 containerd[1479]: 2026-04-16 01:22:02.618 [INFO][6031] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:22:02.807606 containerd[1479]: 2026-04-16 01:22:02.618 [INFO][6031] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 01:22:02.807606 containerd[1479]: 2026-04-16 01:22:02.647 [WARNING][6031] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dc960b7b761a84e0f5415d6c72b400955d208003b44ac8de1a397a445a12a53d" HandleID="k8s-pod-network.dc960b7b761a84e0f5415d6c72b400955d208003b44ac8de1a397a445a12a53d" Workload="localhost-k8s-calico--apiserver--8584db774c--nfctt-eth0" Apr 16 01:22:02.807606 containerd[1479]: 2026-04-16 01:22:02.648 [INFO][6031] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="dc960b7b761a84e0f5415d6c72b400955d208003b44ac8de1a397a445a12a53d" HandleID="k8s-pod-network.dc960b7b761a84e0f5415d6c72b400955d208003b44ac8de1a397a445a12a53d" Workload="localhost-k8s-calico--apiserver--8584db774c--nfctt-eth0" Apr 16 01:22:02.807606 containerd[1479]: 2026-04-16 01:22:02.660 [INFO][6031] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 01:22:02.807606 containerd[1479]: 2026-04-16 01:22:02.746 [INFO][6021] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="dc960b7b761a84e0f5415d6c72b400955d208003b44ac8de1a397a445a12a53d" Apr 16 01:22:02.813253 containerd[1479]: time="2026-04-16T01:22:02.807915684Z" level=info msg="TearDown network for sandbox \"dc960b7b761a84e0f5415d6c72b400955d208003b44ac8de1a397a445a12a53d\" successfully" Apr 16 01:22:02.831926 containerd[1479]: time="2026-04-16T01:22:02.830880242Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dc960b7b761a84e0f5415d6c72b400955d208003b44ac8de1a397a445a12a53d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 16 01:22:02.831926 containerd[1479]: time="2026-04-16T01:22:02.831465293Z" level=info msg="RemovePodSandbox \"dc960b7b761a84e0f5415d6c72b400955d208003b44ac8de1a397a445a12a53d\" returns successfully" Apr 16 01:22:02.835101 containerd[1479]: time="2026-04-16T01:22:02.834951885Z" level=info msg="StopPodSandbox for \"111469dd43a8e2e72a5279cc7f65b1f9f8492045f8f2fce24cbc2f3376245b3b\"" Apr 16 01:22:03.098025 containerd[1479]: 2026-04-16 01:22:02.970 [WARNING][6048] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="111469dd43a8e2e72a5279cc7f65b1f9f8492045f8f2fce24cbc2f3376245b3b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8584db774c--7qbkw-eth0", GenerateName:"calico-apiserver-8584db774c-", Namespace:"calico-system", SelfLink:"", UID:"b5113220-c80b-4e1b-afea-3fb7f3d652bc", ResourceVersion:"1182", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 20, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8584db774c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bebdc4a3dd026d34b0ab1bad0cc1d6a39714fbd3072cfb70f3d0a0e0882b732b", Pod:"calico-apiserver-8584db774c-7qbkw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali727e0980222", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:22:03.098025 containerd[1479]: 2026-04-16 01:22:02.971 [INFO][6048] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="111469dd43a8e2e72a5279cc7f65b1f9f8492045f8f2fce24cbc2f3376245b3b" Apr 16 01:22:03.098025 containerd[1479]: 2026-04-16 01:22:02.971 [INFO][6048] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="111469dd43a8e2e72a5279cc7f65b1f9f8492045f8f2fce24cbc2f3376245b3b" iface="eth0" netns="" Apr 16 01:22:03.098025 containerd[1479]: 2026-04-16 01:22:02.971 [INFO][6048] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="111469dd43a8e2e72a5279cc7f65b1f9f8492045f8f2fce24cbc2f3376245b3b" Apr 16 01:22:03.098025 containerd[1479]: 2026-04-16 01:22:02.971 [INFO][6048] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="111469dd43a8e2e72a5279cc7f65b1f9f8492045f8f2fce24cbc2f3376245b3b" Apr 16 01:22:03.098025 containerd[1479]: 2026-04-16 01:22:03.069 [INFO][6057] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="111469dd43a8e2e72a5279cc7f65b1f9f8492045f8f2fce24cbc2f3376245b3b" HandleID="k8s-pod-network.111469dd43a8e2e72a5279cc7f65b1f9f8492045f8f2fce24cbc2f3376245b3b" Workload="localhost-k8s-calico--apiserver--8584db774c--7qbkw-eth0" Apr 16 01:22:03.098025 containerd[1479]: 2026-04-16 01:22:03.069 [INFO][6057] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:22:03.098025 containerd[1479]: 2026-04-16 01:22:03.069 [INFO][6057] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 01:22:03.098025 containerd[1479]: 2026-04-16 01:22:03.084 [WARNING][6057] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="111469dd43a8e2e72a5279cc7f65b1f9f8492045f8f2fce24cbc2f3376245b3b" HandleID="k8s-pod-network.111469dd43a8e2e72a5279cc7f65b1f9f8492045f8f2fce24cbc2f3376245b3b" Workload="localhost-k8s-calico--apiserver--8584db774c--7qbkw-eth0" Apr 16 01:22:03.098025 containerd[1479]: 2026-04-16 01:22:03.085 [INFO][6057] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="111469dd43a8e2e72a5279cc7f65b1f9f8492045f8f2fce24cbc2f3376245b3b" HandleID="k8s-pod-network.111469dd43a8e2e72a5279cc7f65b1f9f8492045f8f2fce24cbc2f3376245b3b" Workload="localhost-k8s-calico--apiserver--8584db774c--7qbkw-eth0" Apr 16 01:22:03.098025 containerd[1479]: 2026-04-16 01:22:03.092 [INFO][6057] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 01:22:03.098025 containerd[1479]: 2026-04-16 01:22:03.094 [INFO][6048] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="111469dd43a8e2e72a5279cc7f65b1f9f8492045f8f2fce24cbc2f3376245b3b" Apr 16 01:22:03.098604 containerd[1479]: time="2026-04-16T01:22:03.098106877Z" level=info msg="TearDown network for sandbox \"111469dd43a8e2e72a5279cc7f65b1f9f8492045f8f2fce24cbc2f3376245b3b\" successfully" Apr 16 01:22:03.098604 containerd[1479]: time="2026-04-16T01:22:03.098130496Z" level=info msg="StopPodSandbox for \"111469dd43a8e2e72a5279cc7f65b1f9f8492045f8f2fce24cbc2f3376245b3b\" returns successfully" Apr 16 01:22:03.100164 containerd[1479]: time="2026-04-16T01:22:03.099911314Z" level=info msg="RemovePodSandbox for \"111469dd43a8e2e72a5279cc7f65b1f9f8492045f8f2fce24cbc2f3376245b3b\"" Apr 16 01:22:03.100164 containerd[1479]: time="2026-04-16T01:22:03.099944815Z" level=info msg="Forcibly stopping sandbox \"111469dd43a8e2e72a5279cc7f65b1f9f8492045f8f2fce24cbc2f3376245b3b\"" Apr 16 01:22:03.419796 containerd[1479]: 2026-04-16 01:22:03.314 [WARNING][6075] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="111469dd43a8e2e72a5279cc7f65b1f9f8492045f8f2fce24cbc2f3376245b3b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8584db774c--7qbkw-eth0", GenerateName:"calico-apiserver-8584db774c-", Namespace:"calico-system", SelfLink:"", UID:"b5113220-c80b-4e1b-afea-3fb7f3d652bc", ResourceVersion:"1182", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 20, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8584db774c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bebdc4a3dd026d34b0ab1bad0cc1d6a39714fbd3072cfb70f3d0a0e0882b732b", Pod:"calico-apiserver-8584db774c-7qbkw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali727e0980222", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:22:03.419796 containerd[1479]: 2026-04-16 01:22:03.317 [INFO][6075] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="111469dd43a8e2e72a5279cc7f65b1f9f8492045f8f2fce24cbc2f3376245b3b" Apr 16 01:22:03.419796 containerd[1479]: 2026-04-16 01:22:03.317 [INFO][6075] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="111469dd43a8e2e72a5279cc7f65b1f9f8492045f8f2fce24cbc2f3376245b3b" iface="eth0" netns="" Apr 16 01:22:03.419796 containerd[1479]: 2026-04-16 01:22:03.317 [INFO][6075] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="111469dd43a8e2e72a5279cc7f65b1f9f8492045f8f2fce24cbc2f3376245b3b" Apr 16 01:22:03.419796 containerd[1479]: 2026-04-16 01:22:03.317 [INFO][6075] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="111469dd43a8e2e72a5279cc7f65b1f9f8492045f8f2fce24cbc2f3376245b3b" Apr 16 01:22:03.419796 containerd[1479]: 2026-04-16 01:22:03.385 [INFO][6084] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="111469dd43a8e2e72a5279cc7f65b1f9f8492045f8f2fce24cbc2f3376245b3b" HandleID="k8s-pod-network.111469dd43a8e2e72a5279cc7f65b1f9f8492045f8f2fce24cbc2f3376245b3b" Workload="localhost-k8s-calico--apiserver--8584db774c--7qbkw-eth0" Apr 16 01:22:03.419796 containerd[1479]: 2026-04-16 01:22:03.386 [INFO][6084] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:22:03.419796 containerd[1479]: 2026-04-16 01:22:03.386 [INFO][6084] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 01:22:03.419796 containerd[1479]: 2026-04-16 01:22:03.405 [WARNING][6084] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="111469dd43a8e2e72a5279cc7f65b1f9f8492045f8f2fce24cbc2f3376245b3b" HandleID="k8s-pod-network.111469dd43a8e2e72a5279cc7f65b1f9f8492045f8f2fce24cbc2f3376245b3b" Workload="localhost-k8s-calico--apiserver--8584db774c--7qbkw-eth0" Apr 16 01:22:03.419796 containerd[1479]: 2026-04-16 01:22:03.406 [INFO][6084] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="111469dd43a8e2e72a5279cc7f65b1f9f8492045f8f2fce24cbc2f3376245b3b" HandleID="k8s-pod-network.111469dd43a8e2e72a5279cc7f65b1f9f8492045f8f2fce24cbc2f3376245b3b" Workload="localhost-k8s-calico--apiserver--8584db774c--7qbkw-eth0" Apr 16 01:22:03.419796 containerd[1479]: 2026-04-16 01:22:03.412 [INFO][6084] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 01:22:03.419796 containerd[1479]: 2026-04-16 01:22:03.416 [INFO][6075] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="111469dd43a8e2e72a5279cc7f65b1f9f8492045f8f2fce24cbc2f3376245b3b" Apr 16 01:22:03.419796 containerd[1479]: time="2026-04-16T01:22:03.419448980Z" level=info msg="TearDown network for sandbox \"111469dd43a8e2e72a5279cc7f65b1f9f8492045f8f2fce24cbc2f3376245b3b\" successfully" Apr 16 01:22:03.457399 containerd[1479]: time="2026-04-16T01:22:03.457150525Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"111469dd43a8e2e72a5279cc7f65b1f9f8492045f8f2fce24cbc2f3376245b3b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 16 01:22:03.457399 containerd[1479]: time="2026-04-16T01:22:03.457388227Z" level=info msg="RemovePodSandbox \"111469dd43a8e2e72a5279cc7f65b1f9f8492045f8f2fce24cbc2f3376245b3b\" returns successfully" Apr 16 01:22:03.463965 containerd[1479]: time="2026-04-16T01:22:03.460197022Z" level=info msg="StopPodSandbox for \"464e17545cf496fb669888aabb34a1a43a7a42ae4211455463950faf7373af64\"" Apr 16 01:22:03.768987 containerd[1479]: 2026-04-16 01:22:03.589 [WARNING][6101] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="464e17545cf496fb669888aabb34a1a43a7a42ae4211455463950faf7373af64" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--kv5bx-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"925d015b-3f96-4cb8-a9c1-64b9f1a67c52", ResourceVersion:"1105", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 20, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"efd398c196ff8eb6a1058717556b877cd4892b716fe6961ba746780c1f2fbbe8", Pod:"coredns-7d764666f9-kv5bx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali746823396a7", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:22:03.768987 containerd[1479]: 2026-04-16 01:22:03.591 [INFO][6101] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="464e17545cf496fb669888aabb34a1a43a7a42ae4211455463950faf7373af64" Apr 16 01:22:03.768987 containerd[1479]: 2026-04-16 01:22:03.591 [INFO][6101] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="464e17545cf496fb669888aabb34a1a43a7a42ae4211455463950faf7373af64" iface="eth0" netns="" Apr 16 01:22:03.768987 containerd[1479]: 2026-04-16 01:22:03.591 [INFO][6101] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="464e17545cf496fb669888aabb34a1a43a7a42ae4211455463950faf7373af64" Apr 16 01:22:03.768987 containerd[1479]: 2026-04-16 01:22:03.591 [INFO][6101] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="464e17545cf496fb669888aabb34a1a43a7a42ae4211455463950faf7373af64" Apr 16 01:22:03.768987 containerd[1479]: 2026-04-16 01:22:03.717 [INFO][6109] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="464e17545cf496fb669888aabb34a1a43a7a42ae4211455463950faf7373af64" HandleID="k8s-pod-network.464e17545cf496fb669888aabb34a1a43a7a42ae4211455463950faf7373af64" Workload="localhost-k8s-coredns--7d764666f9--kv5bx-eth0" Apr 16 01:22:03.768987 containerd[1479]: 2026-04-16 01:22:03.717 [INFO][6109] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:22:03.768987 containerd[1479]: 2026-04-16 01:22:03.717 [INFO][6109] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 01:22:03.768987 containerd[1479]: 2026-04-16 01:22:03.734 [WARNING][6109] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="464e17545cf496fb669888aabb34a1a43a7a42ae4211455463950faf7373af64" HandleID="k8s-pod-network.464e17545cf496fb669888aabb34a1a43a7a42ae4211455463950faf7373af64" Workload="localhost-k8s-coredns--7d764666f9--kv5bx-eth0" Apr 16 01:22:03.768987 containerd[1479]: 2026-04-16 01:22:03.735 [INFO][6109] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="464e17545cf496fb669888aabb34a1a43a7a42ae4211455463950faf7373af64" HandleID="k8s-pod-network.464e17545cf496fb669888aabb34a1a43a7a42ae4211455463950faf7373af64" Workload="localhost-k8s-coredns--7d764666f9--kv5bx-eth0" Apr 16 01:22:03.768987 containerd[1479]: 2026-04-16 01:22:03.749 [INFO][6109] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 01:22:03.768987 containerd[1479]: 2026-04-16 01:22:03.760 [INFO][6101] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="464e17545cf496fb669888aabb34a1a43a7a42ae4211455463950faf7373af64" Apr 16 01:22:03.768987 containerd[1479]: time="2026-04-16T01:22:03.768561329Z" level=info msg="TearDown network for sandbox \"464e17545cf496fb669888aabb34a1a43a7a42ae4211455463950faf7373af64\" successfully" Apr 16 01:22:03.768987 containerd[1479]: time="2026-04-16T01:22:03.768602641Z" level=info msg="StopPodSandbox for \"464e17545cf496fb669888aabb34a1a43a7a42ae4211455463950faf7373af64\" returns successfully" Apr 16 01:22:03.777052 containerd[1479]: time="2026-04-16T01:22:03.774434627Z" level=info msg="RemovePodSandbox for \"464e17545cf496fb669888aabb34a1a43a7a42ae4211455463950faf7373af64\"" Apr 16 01:22:03.777052 containerd[1479]: time="2026-04-16T01:22:03.774467844Z" level=info msg="Forcibly stopping sandbox \"464e17545cf496fb669888aabb34a1a43a7a42ae4211455463950faf7373af64\"" Apr 16 01:22:04.125939 containerd[1479]: 2026-04-16 01:22:03.984 [WARNING][6133] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="464e17545cf496fb669888aabb34a1a43a7a42ae4211455463950faf7373af64" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--kv5bx-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"925d015b-3f96-4cb8-a9c1-64b9f1a67c52", ResourceVersion:"1105", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 20, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"efd398c196ff8eb6a1058717556b877cd4892b716fe6961ba746780c1f2fbbe8", Pod:"coredns-7d764666f9-kv5bx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali746823396a7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:22:04.125939 containerd[1479]: 2026-04-16 01:22:03.985 [INFO][6133] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="464e17545cf496fb669888aabb34a1a43a7a42ae4211455463950faf7373af64" Apr 16 01:22:04.125939 containerd[1479]: 2026-04-16 01:22:03.985 [INFO][6133] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="464e17545cf496fb669888aabb34a1a43a7a42ae4211455463950faf7373af64" iface="eth0" netns="" Apr 16 01:22:04.125939 containerd[1479]: 2026-04-16 01:22:03.985 [INFO][6133] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="464e17545cf496fb669888aabb34a1a43a7a42ae4211455463950faf7373af64" Apr 16 01:22:04.125939 containerd[1479]: 2026-04-16 01:22:03.985 [INFO][6133] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="464e17545cf496fb669888aabb34a1a43a7a42ae4211455463950faf7373af64" Apr 16 01:22:04.125939 containerd[1479]: 2026-04-16 01:22:04.080 [INFO][6158] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="464e17545cf496fb669888aabb34a1a43a7a42ae4211455463950faf7373af64" HandleID="k8s-pod-network.464e17545cf496fb669888aabb34a1a43a7a42ae4211455463950faf7373af64" Workload="localhost-k8s-coredns--7d764666f9--kv5bx-eth0" Apr 16 01:22:04.125939 containerd[1479]: 2026-04-16 01:22:04.081 [INFO][6158] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:22:04.125939 containerd[1479]: 2026-04-16 01:22:04.081 [INFO][6158] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 01:22:04.125939 containerd[1479]: 2026-04-16 01:22:04.109 [WARNING][6158] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="464e17545cf496fb669888aabb34a1a43a7a42ae4211455463950faf7373af64" HandleID="k8s-pod-network.464e17545cf496fb669888aabb34a1a43a7a42ae4211455463950faf7373af64" Workload="localhost-k8s-coredns--7d764666f9--kv5bx-eth0" Apr 16 01:22:04.125939 containerd[1479]: 2026-04-16 01:22:04.109 [INFO][6158] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="464e17545cf496fb669888aabb34a1a43a7a42ae4211455463950faf7373af64" HandleID="k8s-pod-network.464e17545cf496fb669888aabb34a1a43a7a42ae4211455463950faf7373af64" Workload="localhost-k8s-coredns--7d764666f9--kv5bx-eth0" Apr 16 01:22:04.125939 containerd[1479]: 2026-04-16 01:22:04.116 [INFO][6158] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 01:22:04.125939 containerd[1479]: 2026-04-16 01:22:04.121 [INFO][6133] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="464e17545cf496fb669888aabb34a1a43a7a42ae4211455463950faf7373af64" Apr 16 01:22:04.127095 containerd[1479]: time="2026-04-16T01:22:04.126187574Z" level=info msg="TearDown network for sandbox \"464e17545cf496fb669888aabb34a1a43a7a42ae4211455463950faf7373af64\" successfully" Apr 16 01:22:04.141413 containerd[1479]: time="2026-04-16T01:22:04.140881926Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"464e17545cf496fb669888aabb34a1a43a7a42ae4211455463950faf7373af64\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 16 01:22:04.141413 containerd[1479]: time="2026-04-16T01:22:04.141075813Z" level=info msg="RemovePodSandbox \"464e17545cf496fb669888aabb34a1a43a7a42ae4211455463950faf7373af64\" returns successfully" Apr 16 01:22:04.142102 containerd[1479]: time="2026-04-16T01:22:04.141882045Z" level=info msg="StopPodSandbox for \"bb93fe724c151bea4284b0981e0457adce75bc215496568b9214d3839e19a642\"" Apr 16 01:22:04.519223 containerd[1479]: 2026-04-16 01:22:04.391 [WARNING][6176] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="bb93fe724c151bea4284b0981e0457adce75bc215496568b9214d3839e19a642" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--ss6nt-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"9471422e-1a11-4a68-a210-42dc7d4df58a", ResourceVersion:"1100", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 20, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"63516408d49bad3e9914f641b69131f65df7b03476220a38fca7dc4767f65a73", Pod:"coredns-7d764666f9-ss6nt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7d711210f9e", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:22:04.519223 containerd[1479]: 2026-04-16 01:22:04.394 [INFO][6176] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="bb93fe724c151bea4284b0981e0457adce75bc215496568b9214d3839e19a642" Apr 16 01:22:04.519223 containerd[1479]: 2026-04-16 01:22:04.394 [INFO][6176] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="bb93fe724c151bea4284b0981e0457adce75bc215496568b9214d3839e19a642" iface="eth0" netns="" Apr 16 01:22:04.519223 containerd[1479]: 2026-04-16 01:22:04.394 [INFO][6176] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="bb93fe724c151bea4284b0981e0457adce75bc215496568b9214d3839e19a642" Apr 16 01:22:04.519223 containerd[1479]: 2026-04-16 01:22:04.394 [INFO][6176] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="bb93fe724c151bea4284b0981e0457adce75bc215496568b9214d3839e19a642" Apr 16 01:22:04.519223 containerd[1479]: 2026-04-16 01:22:04.489 [INFO][6185] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="bb93fe724c151bea4284b0981e0457adce75bc215496568b9214d3839e19a642" HandleID="k8s-pod-network.bb93fe724c151bea4284b0981e0457adce75bc215496568b9214d3839e19a642" Workload="localhost-k8s-coredns--7d764666f9--ss6nt-eth0" Apr 16 01:22:04.519223 containerd[1479]: 2026-04-16 01:22:04.490 [INFO][6185] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:22:04.519223 containerd[1479]: 2026-04-16 01:22:04.490 [INFO][6185] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 01:22:04.519223 containerd[1479]: 2026-04-16 01:22:04.507 [WARNING][6185] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bb93fe724c151bea4284b0981e0457adce75bc215496568b9214d3839e19a642" HandleID="k8s-pod-network.bb93fe724c151bea4284b0981e0457adce75bc215496568b9214d3839e19a642" Workload="localhost-k8s-coredns--7d764666f9--ss6nt-eth0" Apr 16 01:22:04.519223 containerd[1479]: 2026-04-16 01:22:04.507 [INFO][6185] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="bb93fe724c151bea4284b0981e0457adce75bc215496568b9214d3839e19a642" HandleID="k8s-pod-network.bb93fe724c151bea4284b0981e0457adce75bc215496568b9214d3839e19a642" Workload="localhost-k8s-coredns--7d764666f9--ss6nt-eth0" Apr 16 01:22:04.519223 containerd[1479]: 2026-04-16 01:22:04.512 [INFO][6185] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 01:22:04.519223 containerd[1479]: 2026-04-16 01:22:04.516 [INFO][6176] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="bb93fe724c151bea4284b0981e0457adce75bc215496568b9214d3839e19a642" Apr 16 01:22:04.519223 containerd[1479]: time="2026-04-16T01:22:04.519127792Z" level=info msg="TearDown network for sandbox \"bb93fe724c151bea4284b0981e0457adce75bc215496568b9214d3839e19a642\" successfully" Apr 16 01:22:04.519223 containerd[1479]: time="2026-04-16T01:22:04.519146337Z" level=info msg="StopPodSandbox for \"bb93fe724c151bea4284b0981e0457adce75bc215496568b9214d3839e19a642\" returns successfully" Apr 16 01:22:04.526146 containerd[1479]: time="2026-04-16T01:22:04.522630797Z" level=info msg="RemovePodSandbox for \"bb93fe724c151bea4284b0981e0457adce75bc215496568b9214d3839e19a642\"" Apr 16 01:22:04.526146 containerd[1479]: time="2026-04-16T01:22:04.522861349Z" level=info msg="Forcibly stopping sandbox \"bb93fe724c151bea4284b0981e0457adce75bc215496568b9214d3839e19a642\"" Apr 16 01:22:04.873063 containerd[1479]: 2026-04-16 01:22:04.647 [WARNING][6203] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bb93fe724c151bea4284b0981e0457adce75bc215496568b9214d3839e19a642" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--ss6nt-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"9471422e-1a11-4a68-a210-42dc7d4df58a", ResourceVersion:"1100", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 1, 20, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"63516408d49bad3e9914f641b69131f65df7b03476220a38fca7dc4767f65a73", Pod:"coredns-7d764666f9-ss6nt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7d711210f9e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 01:22:04.873063 containerd[1479]: 2026-04-16 01:22:04.648 [INFO][6203] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="bb93fe724c151bea4284b0981e0457adce75bc215496568b9214d3839e19a642" Apr 16 01:22:04.873063 containerd[1479]: 2026-04-16 01:22:04.648 [INFO][6203] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bb93fe724c151bea4284b0981e0457adce75bc215496568b9214d3839e19a642" iface="eth0" netns="" Apr 16 01:22:04.873063 containerd[1479]: 2026-04-16 01:22:04.648 [INFO][6203] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="bb93fe724c151bea4284b0981e0457adce75bc215496568b9214d3839e19a642" Apr 16 01:22:04.873063 containerd[1479]: 2026-04-16 01:22:04.648 [INFO][6203] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="bb93fe724c151bea4284b0981e0457adce75bc215496568b9214d3839e19a642" Apr 16 01:22:04.873063 containerd[1479]: 2026-04-16 01:22:04.764 [INFO][6212] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="bb93fe724c151bea4284b0981e0457adce75bc215496568b9214d3839e19a642" HandleID="k8s-pod-network.bb93fe724c151bea4284b0981e0457adce75bc215496568b9214d3839e19a642" Workload="localhost-k8s-coredns--7d764666f9--ss6nt-eth0" Apr 16 01:22:04.873063 containerd[1479]: 2026-04-16 01:22:04.817 [INFO][6212] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:22:04.873063 containerd[1479]: 2026-04-16 01:22:04.817 [INFO][6212] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 01:22:04.873063 containerd[1479]: 2026-04-16 01:22:04.846 [WARNING][6212] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bb93fe724c151bea4284b0981e0457adce75bc215496568b9214d3839e19a642" HandleID="k8s-pod-network.bb93fe724c151bea4284b0981e0457adce75bc215496568b9214d3839e19a642" Workload="localhost-k8s-coredns--7d764666f9--ss6nt-eth0" Apr 16 01:22:04.873063 containerd[1479]: 2026-04-16 01:22:04.846 [INFO][6212] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="bb93fe724c151bea4284b0981e0457adce75bc215496568b9214d3839e19a642" HandleID="k8s-pod-network.bb93fe724c151bea4284b0981e0457adce75bc215496568b9214d3839e19a642" Workload="localhost-k8s-coredns--7d764666f9--ss6nt-eth0" Apr 16 01:22:04.873063 containerd[1479]: 2026-04-16 01:22:04.855 [INFO][6212] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 01:22:04.873063 containerd[1479]: 2026-04-16 01:22:04.859 [INFO][6203] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="bb93fe724c151bea4284b0981e0457adce75bc215496568b9214d3839e19a642" Apr 16 01:22:04.873063 containerd[1479]: time="2026-04-16T01:22:04.871295872Z" level=info msg="TearDown network for sandbox \"bb93fe724c151bea4284b0981e0457adce75bc215496568b9214d3839e19a642\" successfully" Apr 16 01:22:04.886232 containerd[1479]: time="2026-04-16T01:22:04.886061317Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bb93fe724c151bea4284b0981e0457adce75bc215496568b9214d3839e19a642\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 16 01:22:04.886232 containerd[1479]: time="2026-04-16T01:22:04.886255571Z" level=info msg="RemovePodSandbox \"bb93fe724c151bea4284b0981e0457adce75bc215496568b9214d3839e19a642\" returns successfully" Apr 16 01:22:04.888268 containerd[1479]: time="2026-04-16T01:22:04.887197359Z" level=info msg="StopPodSandbox for \"21724c0444c3fa8f853d78fa70a0c51b65afd1017ab2d62e583920b630791d2c\"" Apr 16 01:22:05.128402 containerd[1479]: 2026-04-16 01:22:05.042 [WARNING][6230] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="21724c0444c3fa8f853d78fa70a0c51b65afd1017ab2d62e583920b630791d2c" WorkloadEndpoint="localhost-k8s-whisker--7cdb99654c--h52cj-eth0" Apr 16 01:22:05.128402 containerd[1479]: 2026-04-16 01:22:05.043 [INFO][6230] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="21724c0444c3fa8f853d78fa70a0c51b65afd1017ab2d62e583920b630791d2c" Apr 16 01:22:05.128402 containerd[1479]: 2026-04-16 01:22:05.043 [INFO][6230] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="21724c0444c3fa8f853d78fa70a0c51b65afd1017ab2d62e583920b630791d2c" iface="eth0" netns="" Apr 16 01:22:05.128402 containerd[1479]: 2026-04-16 01:22:05.043 [INFO][6230] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="21724c0444c3fa8f853d78fa70a0c51b65afd1017ab2d62e583920b630791d2c" Apr 16 01:22:05.128402 containerd[1479]: 2026-04-16 01:22:05.043 [INFO][6230] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="21724c0444c3fa8f853d78fa70a0c51b65afd1017ab2d62e583920b630791d2c" Apr 16 01:22:05.128402 containerd[1479]: 2026-04-16 01:22:05.094 [INFO][6239] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="21724c0444c3fa8f853d78fa70a0c51b65afd1017ab2d62e583920b630791d2c" HandleID="k8s-pod-network.21724c0444c3fa8f853d78fa70a0c51b65afd1017ab2d62e583920b630791d2c" Workload="localhost-k8s-whisker--7cdb99654c--h52cj-eth0" Apr 16 01:22:05.128402 containerd[1479]: 2026-04-16 01:22:05.094 [INFO][6239] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:22:05.128402 containerd[1479]: 2026-04-16 01:22:05.094 [INFO][6239] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 01:22:05.128402 containerd[1479]: 2026-04-16 01:22:05.107 [WARNING][6239] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="21724c0444c3fa8f853d78fa70a0c51b65afd1017ab2d62e583920b630791d2c" HandleID="k8s-pod-network.21724c0444c3fa8f853d78fa70a0c51b65afd1017ab2d62e583920b630791d2c" Workload="localhost-k8s-whisker--7cdb99654c--h52cj-eth0" Apr 16 01:22:05.128402 containerd[1479]: 2026-04-16 01:22:05.107 [INFO][6239] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="21724c0444c3fa8f853d78fa70a0c51b65afd1017ab2d62e583920b630791d2c" HandleID="k8s-pod-network.21724c0444c3fa8f853d78fa70a0c51b65afd1017ab2d62e583920b630791d2c" Workload="localhost-k8s-whisker--7cdb99654c--h52cj-eth0" Apr 16 01:22:05.128402 containerd[1479]: 2026-04-16 01:22:05.114 [INFO][6239] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 01:22:05.128402 containerd[1479]: 2026-04-16 01:22:05.120 [INFO][6230] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="21724c0444c3fa8f853d78fa70a0c51b65afd1017ab2d62e583920b630791d2c" Apr 16 01:22:05.128402 containerd[1479]: time="2026-04-16T01:22:05.128110875Z" level=info msg="TearDown network for sandbox \"21724c0444c3fa8f853d78fa70a0c51b65afd1017ab2d62e583920b630791d2c\" successfully" Apr 16 01:22:05.128402 containerd[1479]: time="2026-04-16T01:22:05.128138384Z" level=info msg="StopPodSandbox for \"21724c0444c3fa8f853d78fa70a0c51b65afd1017ab2d62e583920b630791d2c\" returns successfully" Apr 16 01:22:05.137419 containerd[1479]: time="2026-04-16T01:22:05.136443188Z" level=info msg="RemovePodSandbox for \"21724c0444c3fa8f853d78fa70a0c51b65afd1017ab2d62e583920b630791d2c\"" Apr 16 01:22:05.137419 containerd[1479]: time="2026-04-16T01:22:05.136488505Z" level=info msg="Forcibly stopping sandbox \"21724c0444c3fa8f853d78fa70a0c51b65afd1017ab2d62e583920b630791d2c\"" Apr 16 01:22:05.164963 systemd[1]: run-containerd-runc-k8s.io-d5e02013b3c46f1e812134a7b7a4843b0ad370c9fa7e8a9681876e6ad9ab5c7a-runc.pifbqu.mount: Deactivated successfully. 
Apr 16 01:22:05.566153 containerd[1479]: 2026-04-16 01:22:05.354 [WARNING][6265] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="21724c0444c3fa8f853d78fa70a0c51b65afd1017ab2d62e583920b630791d2c" WorkloadEndpoint="localhost-k8s-whisker--7cdb99654c--h52cj-eth0" Apr 16 01:22:05.566153 containerd[1479]: 2026-04-16 01:22:05.355 [INFO][6265] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="21724c0444c3fa8f853d78fa70a0c51b65afd1017ab2d62e583920b630791d2c" Apr 16 01:22:05.566153 containerd[1479]: 2026-04-16 01:22:05.355 [INFO][6265] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="21724c0444c3fa8f853d78fa70a0c51b65afd1017ab2d62e583920b630791d2c" iface="eth0" netns="" Apr 16 01:22:05.566153 containerd[1479]: 2026-04-16 01:22:05.356 [INFO][6265] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="21724c0444c3fa8f853d78fa70a0c51b65afd1017ab2d62e583920b630791d2c" Apr 16 01:22:05.566153 containerd[1479]: 2026-04-16 01:22:05.356 [INFO][6265] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="21724c0444c3fa8f853d78fa70a0c51b65afd1017ab2d62e583920b630791d2c" Apr 16 01:22:05.566153 containerd[1479]: 2026-04-16 01:22:05.539 [INFO][6287] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="21724c0444c3fa8f853d78fa70a0c51b65afd1017ab2d62e583920b630791d2c" HandleID="k8s-pod-network.21724c0444c3fa8f853d78fa70a0c51b65afd1017ab2d62e583920b630791d2c" Workload="localhost-k8s-whisker--7cdb99654c--h52cj-eth0" Apr 16 01:22:05.566153 containerd[1479]: 2026-04-16 01:22:05.540 [INFO][6287] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 01:22:05.566153 containerd[1479]: 2026-04-16 01:22:05.540 [INFO][6287] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 01:22:05.566153 containerd[1479]: 2026-04-16 01:22:05.555 [WARNING][6287] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="21724c0444c3fa8f853d78fa70a0c51b65afd1017ab2d62e583920b630791d2c" HandleID="k8s-pod-network.21724c0444c3fa8f853d78fa70a0c51b65afd1017ab2d62e583920b630791d2c" Workload="localhost-k8s-whisker--7cdb99654c--h52cj-eth0" Apr 16 01:22:05.566153 containerd[1479]: 2026-04-16 01:22:05.555 [INFO][6287] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="21724c0444c3fa8f853d78fa70a0c51b65afd1017ab2d62e583920b630791d2c" HandleID="k8s-pod-network.21724c0444c3fa8f853d78fa70a0c51b65afd1017ab2d62e583920b630791d2c" Workload="localhost-k8s-whisker--7cdb99654c--h52cj-eth0" Apr 16 01:22:05.566153 containerd[1479]: 2026-04-16 01:22:05.559 [INFO][6287] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 01:22:05.566153 containerd[1479]: 2026-04-16 01:22:05.563 [INFO][6265] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="21724c0444c3fa8f853d78fa70a0c51b65afd1017ab2d62e583920b630791d2c" Apr 16 01:22:05.566153 containerd[1479]: time="2026-04-16T01:22:05.565953808Z" level=info msg="TearDown network for sandbox \"21724c0444c3fa8f853d78fa70a0c51b65afd1017ab2d62e583920b630791d2c\" successfully" Apr 16 01:22:05.584731 containerd[1479]: time="2026-04-16T01:22:05.584329518Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"21724c0444c3fa8f853d78fa70a0c51b65afd1017ab2d62e583920b630791d2c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 16 01:22:05.587411 containerd[1479]: time="2026-04-16T01:22:05.584899226Z" level=info msg="RemovePodSandbox \"21724c0444c3fa8f853d78fa70a0c51b65afd1017ab2d62e583920b630791d2c\" returns successfully" Apr 16 01:22:10.972359 systemd[1]: run-containerd-runc-k8s.io-2fb7a90a7d3892c694c77b2c85fd0450372c54fa7f4c56bffedc6bca89c85d93-runc.VTkiWp.mount: Deactivated successfully. Apr 16 01:22:27.731540 kubelet[2548]: E0416 01:22:27.729171 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:22:29.729474 kubelet[2548]: E0416 01:22:29.729264 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:22:34.567281 systemd[1]: Started sshd@9-10.0.0.84:22-10.0.0.1:40710.service - OpenSSH per-connection server daemon (10.0.0.1:40710). Apr 16 01:22:35.100520 sshd[6348]: Accepted publickey for core from 10.0.0.1 port 40710 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:22:35.116986 sshd[6348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:22:35.141421 systemd-logind[1454]: New session 10 of user core. Apr 16 01:22:35.150342 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 16 01:22:37.311929 sshd[6348]: pam_unix(sshd:session): session closed for user core Apr 16 01:22:37.325119 systemd[1]: sshd@9-10.0.0.84:22-10.0.0.1:40710.service: Deactivated successfully. Apr 16 01:22:37.331830 systemd[1]: session-10.scope: Deactivated successfully. Apr 16 01:22:37.332839 systemd[1]: session-10.scope: Consumed 1.063s CPU time. Apr 16 01:22:37.336056 systemd-logind[1454]: Session 10 logged out. Waiting for processes to exit. Apr 16 01:22:37.339837 systemd-logind[1454]: Removed session 10. 
Apr 16 01:22:39.731396 kubelet[2548]: E0416 01:22:39.729025 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:22:42.358652 systemd[1]: Started sshd@10-10.0.0.84:22-10.0.0.1:58412.service - OpenSSH per-connection server daemon (10.0.0.1:58412). Apr 16 01:22:42.540145 sshd[6427]: Accepted publickey for core from 10.0.0.1 port 58412 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:22:42.545209 sshd[6427]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:22:42.609980 systemd-logind[1454]: New session 11 of user core. Apr 16 01:22:42.632067 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 16 01:22:43.379359 sshd[6427]: pam_unix(sshd:session): session closed for user core Apr 16 01:22:43.387163 systemd[1]: sshd@10-10.0.0.84:22-10.0.0.1:58412.service: Deactivated successfully. Apr 16 01:22:43.401254 systemd[1]: session-11.scope: Deactivated successfully. Apr 16 01:22:43.409302 systemd-logind[1454]: Session 11 logged out. Waiting for processes to exit. Apr 16 01:22:43.419073 systemd-logind[1454]: Removed session 11. Apr 16 01:22:48.177480 update_engine[1462]: I20260416 01:22:48.176612 1462 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Apr 16 01:22:48.178347 update_engine[1462]: I20260416 01:22:48.178241 1462 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Apr 16 01:22:48.194189 update_engine[1462]: I20260416 01:22:48.193147 1462 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Apr 16 01:22:48.196473 update_engine[1462]: I20260416 01:22:48.196314 1462 omaha_request_params.cc:62] Current group set to lts Apr 16 01:22:48.197188 update_engine[1462]: I20260416 01:22:48.196617 1462 update_attempter.cc:499] Already updated boot flags. Skipping. 
Apr 16 01:22:48.197188 update_engine[1462]: I20260416 01:22:48.196630 1462 update_attempter.cc:643] Scheduling an action processor start. Apr 16 01:22:48.197188 update_engine[1462]: I20260416 01:22:48.196651 1462 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 16 01:22:48.197188 update_engine[1462]: I20260416 01:22:48.196933 1462 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Apr 16 01:22:48.197188 update_engine[1462]: I20260416 01:22:48.196995 1462 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 16 01:22:48.197188 update_engine[1462]: I20260416 01:22:48.197000 1462 omaha_request_action.cc:272] Request: Apr 16 01:22:48.197188 update_engine[1462]: Apr 16 01:22:48.197188 update_engine[1462]: Apr 16 01:22:48.197188 update_engine[1462]: Apr 16 01:22:48.197188 update_engine[1462]: Apr 16 01:22:48.197188 update_engine[1462]: Apr 16 01:22:48.197188 update_engine[1462]: Apr 16 01:22:48.197188 update_engine[1462]: Apr 16 01:22:48.197188 update_engine[1462]: Apr 16 01:22:48.197188 update_engine[1462]: I20260416 01:22:48.197006 1462 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 16 01:22:48.217572 update_engine[1462]: I20260416 01:22:48.214178 1462 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 16 01:22:48.217572 update_engine[1462]: I20260416 01:22:48.214954 1462 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 16 01:22:48.227560 update_engine[1462]: E20260416 01:22:48.227283 1462 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 16 01:22:48.227560 update_engine[1462]: I20260416 01:22:48.227530 1462 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Apr 16 01:22:48.230985 locksmithd[1484]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Apr 16 01:22:48.424999 systemd[1]: Started sshd@11-10.0.0.84:22-10.0.0.1:58422.service - OpenSSH per-connection server daemon (10.0.0.1:58422). Apr 16 01:22:48.726199 sshd[6476]: Accepted publickey for core from 10.0.0.1 port 58422 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:22:48.735566 sshd[6476]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:22:48.824469 systemd-logind[1454]: New session 12 of user core. Apr 16 01:22:48.921346 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 16 01:22:50.509450 sshd[6476]: pam_unix(sshd:session): session closed for user core Apr 16 01:22:50.597287 systemd[1]: sshd@11-10.0.0.84:22-10.0.0.1:58422.service: Deactivated successfully. Apr 16 01:22:50.623606 systemd[1]: session-12.scope: Deactivated successfully. Apr 16 01:22:50.629659 systemd-logind[1454]: Session 12 logged out. Waiting for processes to exit. Apr 16 01:22:50.648189 systemd-logind[1454]: Removed session 12. 
Apr 16 01:22:54.729637 kubelet[2548]: E0416 01:22:54.729440 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:22:54.729637 kubelet[2548]: E0416 01:22:54.729621 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:22:55.549182 systemd[1]: Started sshd@12-10.0.0.84:22-10.0.0.1:49230.service - OpenSSH per-connection server daemon (10.0.0.1:49230). Apr 16 01:22:55.796240 sshd[6492]: Accepted publickey for core from 10.0.0.1 port 49230 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:22:55.805262 sshd[6492]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:22:55.834289 systemd-logind[1454]: New session 13 of user core. Apr 16 01:22:55.855971 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 16 01:22:56.486434 sshd[6492]: pam_unix(sshd:session): session closed for user core Apr 16 01:22:56.503551 systemd[1]: sshd@12-10.0.0.84:22-10.0.0.1:49230.service: Deactivated successfully. Apr 16 01:22:56.531163 systemd[1]: session-13.scope: Deactivated successfully. Apr 16 01:22:56.537117 systemd-logind[1454]: Session 13 logged out. Waiting for processes to exit. Apr 16 01:22:56.552366 systemd-logind[1454]: Removed session 13. 
Apr 16 01:22:56.735513 kubelet[2548]: E0416 01:22:56.734981 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:22:58.924883 update_engine[1462]: I20260416 01:22:58.922637 1462 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 16 01:22:58.926081 update_engine[1462]: I20260416 01:22:58.925658 1462 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 16 01:22:58.927137 update_engine[1462]: I20260416 01:22:58.926630 1462 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 16 01:22:58.943911 update_engine[1462]: E20260416 01:22:58.942498 1462 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 16 01:22:58.943911 update_engine[1462]: I20260416 01:22:58.943121 1462 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Apr 16 01:23:01.565060 systemd[1]: Started sshd@13-10.0.0.84:22-10.0.0.1:48626.service - OpenSSH per-connection server daemon (10.0.0.1:48626). Apr 16 01:23:01.750301 kubelet[2548]: E0416 01:23:01.746479 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:23:01.981479 sshd[6509]: Accepted publickey for core from 10.0.0.1 port 48626 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:23:01.991178 sshd[6509]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:23:02.026300 systemd-logind[1454]: New session 14 of user core. Apr 16 01:23:02.041944 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 16 01:23:02.335429 systemd[1]: run-containerd-runc-k8s.io-407adcfeb85f2ea58cc3cb0afdb7fe6b6fc4ac3074e5ae01309abf26a5edf351-runc.FXjxBC.mount: Deactivated successfully. 
Apr 16 01:23:03.247280 sshd[6509]: pam_unix(sshd:session): session closed for user core Apr 16 01:23:03.326611 systemd[1]: sshd@13-10.0.0.84:22-10.0.0.1:48626.service: Deactivated successfully. Apr 16 01:23:03.348332 systemd[1]: session-14.scope: Deactivated successfully. Apr 16 01:23:03.356542 systemd-logind[1454]: Session 14 logged out. Waiting for processes to exit. Apr 16 01:23:03.376192 systemd-logind[1454]: Removed session 14. Apr 16 01:23:03.942968 systemd[1]: run-containerd-runc-k8s.io-407adcfeb85f2ea58cc3cb0afdb7fe6b6fc4ac3074e5ae01309abf26a5edf351-runc.qiS2JI.mount: Deactivated successfully. Apr 16 01:23:08.303274 systemd[1]: Started sshd@14-10.0.0.84:22-10.0.0.1:48638.service - OpenSSH per-connection server daemon (10.0.0.1:48638). Apr 16 01:23:08.531364 sshd[6601]: Accepted publickey for core from 10.0.0.1 port 48638 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:23:08.536932 sshd[6601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:23:08.559625 systemd-logind[1454]: New session 15 of user core. Apr 16 01:23:08.574227 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 16 01:23:08.912605 update_engine[1462]: I20260416 01:23:08.908999 1462 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 16 01:23:08.914027 update_engine[1462]: I20260416 01:23:08.913458 1462 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 16 01:23:08.914027 update_engine[1462]: I20260416 01:23:08.913990 1462 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 16 01:23:08.921475 update_engine[1462]: E20260416 01:23:08.921402 1462 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 16 01:23:08.921475 update_engine[1462]: I20260416 01:23:08.921456 1462 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Apr 16 01:23:09.060989 sshd[6601]: pam_unix(sshd:session): session closed for user core Apr 16 01:23:09.070204 systemd[1]: sshd@14-10.0.0.84:22-10.0.0.1:48638.service: Deactivated successfully. Apr 16 01:23:09.090470 systemd[1]: session-15.scope: Deactivated successfully. Apr 16 01:23:09.095285 systemd-logind[1454]: Session 15 logged out. Waiting for processes to exit. Apr 16 01:23:09.103310 systemd-logind[1454]: Removed session 15. Apr 16 01:23:14.102592 systemd[1]: Started sshd@15-10.0.0.84:22-10.0.0.1:57698.service - OpenSSH per-connection server daemon (10.0.0.1:57698). Apr 16 01:23:14.156792 sshd[6641]: Accepted publickey for core from 10.0.0.1 port 57698 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:23:14.160539 sshd[6641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:23:14.169885 systemd-logind[1454]: New session 16 of user core. Apr 16 01:23:14.182657 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 16 01:23:14.738602 sshd[6641]: pam_unix(sshd:session): session closed for user core Apr 16 01:23:14.753774 systemd[1]: sshd@15-10.0.0.84:22-10.0.0.1:57698.service: Deactivated successfully. Apr 16 01:23:14.775840 systemd[1]: session-16.scope: Deactivated successfully. Apr 16 01:23:14.780572 systemd-logind[1454]: Session 16 logged out. Waiting for processes to exit. Apr 16 01:23:14.788345 systemd-logind[1454]: Removed session 16. 
Apr 16 01:23:18.909492 update_engine[1462]: I20260416 01:23:18.908996 1462 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 16 01:23:18.911381 update_engine[1462]: I20260416 01:23:18.909947 1462 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 16 01:23:18.911618 update_engine[1462]: I20260416 01:23:18.911444 1462 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 16 01:23:18.921575 update_engine[1462]: E20260416 01:23:18.921258 1462 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 16 01:23:18.921575 update_engine[1462]: I20260416 01:23:18.921586 1462 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 16 01:23:18.929797 update_engine[1462]: I20260416 01:23:18.929298 1462 omaha_request_action.cc:617] Omaha request response: Apr 16 01:23:18.930269 update_engine[1462]: E20260416 01:23:18.930047 1462 omaha_request_action.cc:636] Omaha request network transfer failed. Apr 16 01:23:18.932848 update_engine[1462]: I20260416 01:23:18.932096 1462 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Apr 16 01:23:18.932848 update_engine[1462]: I20260416 01:23:18.932507 1462 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 16 01:23:18.932848 update_engine[1462]: I20260416 01:23:18.932793 1462 update_attempter.cc:306] Processing Done. Apr 16 01:23:18.932848 update_engine[1462]: E20260416 01:23:18.932825 1462 update_attempter.cc:619] Update failed. 
Apr 16 01:23:18.932848 update_engine[1462]: I20260416 01:23:18.932837 1462 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Apr 16 01:23:18.932848 update_engine[1462]: I20260416 01:23:18.932842 1462 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Apr 16 01:23:18.932848 update_engine[1462]: I20260416 01:23:18.932850 1462 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Apr 16 01:23:18.934537 update_engine[1462]: I20260416 01:23:18.932951 1462 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 16 01:23:18.934537 update_engine[1462]: I20260416 01:23:18.933002 1462 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 16 01:23:18.934537 update_engine[1462]: I20260416 01:23:18.933007 1462 omaha_request_action.cc:272] Request: Apr 16 01:23:18.934537 update_engine[1462]: Apr 16 01:23:18.934537 update_engine[1462]: Apr 16 01:23:18.934537 update_engine[1462]: Apr 16 01:23:18.934537 update_engine[1462]: Apr 16 01:23:18.934537 update_engine[1462]: Apr 16 01:23:18.934537 update_engine[1462]: Apr 16 01:23:18.934537 update_engine[1462]: I20260416 01:23:18.933013 1462 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 16 01:23:18.934537 update_engine[1462]: I20260416 01:23:18.933315 1462 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 16 01:23:18.934537 update_engine[1462]: I20260416 01:23:18.933621 1462 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 16 01:23:18.934961 locksmithd[1484]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Apr 16 01:23:18.942629 update_engine[1462]: E20260416 01:23:18.942314 1462 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 16 01:23:18.942629 update_engine[1462]: I20260416 01:23:18.942610 1462 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 16 01:23:18.942629 update_engine[1462]: I20260416 01:23:18.942623 1462 omaha_request_action.cc:617] Omaha request response: Apr 16 01:23:18.942629 update_engine[1462]: I20260416 01:23:18.942632 1462 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 16 01:23:18.942629 update_engine[1462]: I20260416 01:23:18.942636 1462 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 16 01:23:18.942629 update_engine[1462]: I20260416 01:23:18.942641 1462 update_attempter.cc:306] Processing Done. Apr 16 01:23:18.942629 update_engine[1462]: I20260416 01:23:18.942647 1462 update_attempter.cc:310] Error event sent. Apr 16 01:23:18.942629 update_engine[1462]: I20260416 01:23:18.942659 1462 update_check_scheduler.cc:74] Next update check in 46m4s Apr 16 01:23:18.945122 locksmithd[1484]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Apr 16 01:23:19.795296 systemd[1]: Started sshd@16-10.0.0.84:22-10.0.0.1:46914.service - OpenSSH per-connection server daemon (10.0.0.1:46914). Apr 16 01:23:19.849291 sshd[6656]: Accepted publickey for core from 10.0.0.1 port 46914 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:23:19.851504 sshd[6656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:23:19.863476 systemd-logind[1454]: New session 17 of user core. 
Apr 16 01:23:19.869008 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 16 01:23:20.611809 sshd[6656]: pam_unix(sshd:session): session closed for user core Apr 16 01:23:20.655171 systemd[1]: sshd@16-10.0.0.84:22-10.0.0.1:46914.service: Deactivated successfully. Apr 16 01:23:20.733980 systemd[1]: session-17.scope: Deactivated successfully. Apr 16 01:23:20.735488 systemd-logind[1454]: Session 17 logged out. Waiting for processes to exit. Apr 16 01:23:20.745560 systemd-logind[1454]: Removed session 17. Apr 16 01:23:25.650023 systemd[1]: Started sshd@17-10.0.0.84:22-10.0.0.1:46926.service - OpenSSH per-connection server daemon (10.0.0.1:46926). Apr 16 01:23:25.890416 sshd[6671]: Accepted publickey for core from 10.0.0.1 port 46926 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:23:25.894450 sshd[6671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:23:25.910518 systemd-logind[1454]: New session 18 of user core. Apr 16 01:23:25.918965 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 16 01:23:26.364000 sshd[6671]: pam_unix(sshd:session): session closed for user core Apr 16 01:23:26.368091 systemd[1]: sshd@17-10.0.0.84:22-10.0.0.1:46926.service: Deactivated successfully. Apr 16 01:23:26.374396 systemd[1]: session-18.scope: Deactivated successfully. Apr 16 01:23:26.375663 systemd-logind[1454]: Session 18 logged out. Waiting for processes to exit. Apr 16 01:23:26.377465 systemd-logind[1454]: Removed session 18. Apr 16 01:23:31.399847 systemd[1]: Started sshd@18-10.0.0.84:22-10.0.0.1:51250.service - OpenSSH per-connection server daemon (10.0.0.1:51250). 
Apr 16 01:23:31.512112 sshd[6686]: Accepted publickey for core from 10.0.0.1 port 51250 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:23:31.520010 sshd[6686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:23:31.531500 systemd-logind[1454]: New session 19 of user core. Apr 16 01:23:31.549657 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 16 01:23:31.928630 sshd[6686]: pam_unix(sshd:session): session closed for user core Apr 16 01:23:31.952515 systemd[1]: sshd@18-10.0.0.84:22-10.0.0.1:51250.service: Deactivated successfully. Apr 16 01:23:31.960922 systemd[1]: session-19.scope: Deactivated successfully. Apr 16 01:23:32.004519 systemd-logind[1454]: Session 19 logged out. Waiting for processes to exit. Apr 16 01:23:32.011934 systemd-logind[1454]: Removed session 19. Apr 16 01:23:36.988103 systemd[1]: Started sshd@19-10.0.0.84:22-10.0.0.1:51262.service - OpenSSH per-connection server daemon (10.0.0.1:51262). Apr 16 01:23:37.306050 sshd[6744]: Accepted publickey for core from 10.0.0.1 port 51262 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:23:37.311915 sshd[6744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:23:37.399306 systemd-logind[1454]: New session 20 of user core. Apr 16 01:23:37.428982 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 16 01:23:38.444854 sshd[6744]: pam_unix(sshd:session): session closed for user core Apr 16 01:23:38.464546 systemd[1]: sshd@19-10.0.0.84:22-10.0.0.1:51262.service: Deactivated successfully. Apr 16 01:23:38.489622 systemd[1]: session-20.scope: Deactivated successfully. Apr 16 01:23:38.498817 systemd-logind[1454]: Session 20 logged out. Waiting for processes to exit. Apr 16 01:23:38.511765 systemd-logind[1454]: Removed session 20. 
Apr 16 01:23:39.729057 kubelet[2548]: E0416 01:23:39.728350 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:23:43.417997 systemd[1]: Started sshd@20-10.0.0.84:22-10.0.0.1:44136.service - OpenSSH per-connection server daemon (10.0.0.1:44136). Apr 16 01:23:43.577882 sshd[6783]: Accepted publickey for core from 10.0.0.1 port 44136 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:23:43.580171 sshd[6783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:23:43.594290 systemd-logind[1454]: New session 21 of user core. Apr 16 01:23:43.606143 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 16 01:23:44.646001 sshd[6783]: pam_unix(sshd:session): session closed for user core Apr 16 01:23:44.669161 systemd[1]: sshd@20-10.0.0.84:22-10.0.0.1:44136.service: Deactivated successfully. Apr 16 01:23:44.697081 systemd[1]: session-21.scope: Deactivated successfully. Apr 16 01:23:44.719336 systemd-logind[1454]: Session 21 logged out. Waiting for processes to exit. Apr 16 01:23:44.722927 systemd-logind[1454]: Removed session 21. Apr 16 01:23:48.734271 kubelet[2548]: E0416 01:23:48.733003 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:23:49.748841 systemd[1]: Started sshd@21-10.0.0.84:22-10.0.0.1:59860.service - OpenSSH per-connection server daemon (10.0.0.1:59860). Apr 16 01:23:49.863434 sshd[6799]: Accepted publickey for core from 10.0.0.1 port 59860 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:23:49.921487 sshd[6799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:23:49.940969 systemd-logind[1454]: New session 22 of user core. 
Apr 16 01:23:49.961375 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 16 01:23:50.664513 sshd[6799]: pam_unix(sshd:session): session closed for user core Apr 16 01:23:50.679271 systemd[1]: sshd@21-10.0.0.84:22-10.0.0.1:59860.service: Deactivated successfully. Apr 16 01:23:50.688912 systemd[1]: session-22.scope: Deactivated successfully. Apr 16 01:23:50.693260 systemd-logind[1454]: Session 22 logged out. Waiting for processes to exit. Apr 16 01:23:50.699380 systemd-logind[1454]: Removed session 22. Apr 16 01:23:55.732369 kubelet[2548]: E0416 01:23:55.732332 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:23:55.780188 systemd[1]: Started sshd@22-10.0.0.84:22-10.0.0.1:59874.service - OpenSSH per-connection server daemon (10.0.0.1:59874). Apr 16 01:23:56.014621 sshd[6824]: Accepted publickey for core from 10.0.0.1 port 59874 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:23:56.018447 sshd[6824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:23:56.033471 systemd-logind[1454]: New session 23 of user core. Apr 16 01:23:56.062897 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 16 01:23:56.605154 sshd[6824]: pam_unix(sshd:session): session closed for user core Apr 16 01:23:56.615602 systemd[1]: sshd@22-10.0.0.84:22-10.0.0.1:59874.service: Deactivated successfully. Apr 16 01:23:56.620087 systemd[1]: session-23.scope: Deactivated successfully. Apr 16 01:23:56.622357 systemd-logind[1454]: Session 23 logged out. Waiting for processes to exit. Apr 16 01:23:56.628345 systemd-logind[1454]: Removed session 23. 
Apr 16 01:23:56.822283 kubelet[2548]: E0416 01:23:56.822155 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:23:57.729572 kubelet[2548]: E0416 01:23:57.729407 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:24:01.637888 systemd[1]: Started sshd@23-10.0.0.84:22-10.0.0.1:37218.service - OpenSSH per-connection server daemon (10.0.0.1:37218). Apr 16 01:24:01.697441 sshd[6849]: Accepted publickey for core from 10.0.0.1 port 37218 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:24:01.700939 sshd[6849]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:24:01.711346 systemd-logind[1454]: New session 24 of user core. Apr 16 01:24:01.724140 systemd[1]: Started session-24.scope - Session 24 of User core. Apr 16 01:24:02.029281 sshd[6849]: pam_unix(sshd:session): session closed for user core Apr 16 01:24:02.042310 systemd[1]: sshd@23-10.0.0.84:22-10.0.0.1:37218.service: Deactivated successfully. Apr 16 01:24:02.051120 systemd[1]: session-24.scope: Deactivated successfully. Apr 16 01:24:02.053293 systemd-logind[1454]: Session 24 logged out. Waiting for processes to exit. Apr 16 01:24:02.061325 systemd-logind[1454]: Removed session 24. Apr 16 01:24:02.220461 systemd[1]: run-containerd-runc-k8s.io-2fb7a90a7d3892c694c77b2c85fd0450372c54fa7f4c56bffedc6bca89c85d93-runc.CewLM7.mount: Deactivated successfully. Apr 16 01:24:07.065613 systemd[1]: Started sshd@24-10.0.0.84:22-10.0.0.1:37226.service - OpenSSH per-connection server daemon (10.0.0.1:37226). 
Apr 16 01:24:07.128218 sshd[6948]: Accepted publickey for core from 10.0.0.1 port 37226 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:24:07.130031 sshd[6948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:24:07.162211 systemd-logind[1454]: New session 25 of user core. Apr 16 01:24:07.231477 systemd[1]: Started session-25.scope - Session 25 of User core. Apr 16 01:24:07.484401 sshd[6948]: pam_unix(sshd:session): session closed for user core Apr 16 01:24:07.491390 systemd[1]: sshd@24-10.0.0.84:22-10.0.0.1:37226.service: Deactivated successfully. Apr 16 01:24:07.496189 systemd[1]: session-25.scope: Deactivated successfully. Apr 16 01:24:07.499086 systemd-logind[1454]: Session 25 logged out. Waiting for processes to exit. Apr 16 01:24:07.501915 systemd-logind[1454]: Removed session 25. Apr 16 01:24:12.579227 systemd[1]: Started sshd@25-10.0.0.84:22-10.0.0.1:51356.service - OpenSSH per-connection server daemon (10.0.0.1:51356). Apr 16 01:24:12.764416 sshd[6988]: Accepted publickey for core from 10.0.0.1 port 51356 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:24:12.792990 sshd[6988]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:24:12.815445 systemd-logind[1454]: New session 26 of user core. Apr 16 01:24:12.832245 systemd[1]: Started session-26.scope - Session 26 of User core. Apr 16 01:24:13.419534 sshd[6988]: pam_unix(sshd:session): session closed for user core Apr 16 01:24:13.430603 systemd[1]: sshd@25-10.0.0.84:22-10.0.0.1:51356.service: Deactivated successfully. Apr 16 01:24:13.436834 systemd[1]: session-26.scope: Deactivated successfully. Apr 16 01:24:13.438348 systemd-logind[1454]: Session 26 logged out. Waiting for processes to exit. Apr 16 01:24:13.448402 systemd-logind[1454]: Removed session 26. 
Apr 16 01:24:16.735323 kubelet[2548]: E0416 01:24:16.735205 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:24:18.543911 systemd[1]: Started sshd@26-10.0.0.84:22-10.0.0.1:51360.service - OpenSSH per-connection server daemon (10.0.0.1:51360). Apr 16 01:24:18.717878 sshd[7017]: Accepted publickey for core from 10.0.0.1 port 51360 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:24:18.722490 sshd[7017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:24:18.747324 systemd-logind[1454]: New session 27 of user core. Apr 16 01:24:18.762820 systemd[1]: Started session-27.scope - Session 27 of User core. Apr 16 01:24:19.442459 sshd[7017]: pam_unix(sshd:session): session closed for user core Apr 16 01:24:19.502461 systemd[1]: sshd@26-10.0.0.84:22-10.0.0.1:51360.service: Deactivated successfully. Apr 16 01:24:19.514500 systemd[1]: session-27.scope: Deactivated successfully. Apr 16 01:24:19.518943 systemd-logind[1454]: Session 27 logged out. Waiting for processes to exit. Apr 16 01:24:19.528526 systemd-logind[1454]: Removed session 27. Apr 16 01:24:19.734489 kubelet[2548]: E0416 01:24:19.729855 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:24:24.643013 systemd[1]: Started sshd@27-10.0.0.84:22-10.0.0.1:47984.service - OpenSSH per-connection server daemon (10.0.0.1:47984). Apr 16 01:24:25.081502 sshd[7055]: Accepted publickey for core from 10.0.0.1 port 47984 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:24:25.096537 sshd[7055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:24:25.235898 systemd-logind[1454]: New session 28 of user core. 
Apr 16 01:24:25.250558 systemd[1]: Started session-28.scope - Session 28 of User core. Apr 16 01:24:26.275832 sshd[7055]: pam_unix(sshd:session): session closed for user core Apr 16 01:24:26.300090 systemd[1]: sshd@27-10.0.0.84:22-10.0.0.1:47984.service: Deactivated successfully. Apr 16 01:24:26.304447 systemd-logind[1454]: Session 28 logged out. Waiting for processes to exit. Apr 16 01:24:26.317067 systemd[1]: session-28.scope: Deactivated successfully. Apr 16 01:24:26.365910 systemd-logind[1454]: Removed session 28. Apr 16 01:24:31.454643 systemd[1]: Started sshd@28-10.0.0.84:22-10.0.0.1:57776.service - OpenSSH per-connection server daemon (10.0.0.1:57776). Apr 16 01:24:31.711063 sshd[7076]: Accepted publickey for core from 10.0.0.1 port 57776 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:24:31.823260 sshd[7076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:24:31.932134 systemd-logind[1454]: New session 29 of user core. Apr 16 01:24:32.056516 systemd[1]: Started session-29.scope - Session 29 of User core. Apr 16 01:24:33.456372 sshd[7076]: pam_unix(sshd:session): session closed for user core Apr 16 01:24:33.480854 systemd[1]: sshd@28-10.0.0.84:22-10.0.0.1:57776.service: Deactivated successfully. Apr 16 01:24:33.496649 systemd[1]: session-29.scope: Deactivated successfully. Apr 16 01:24:33.508409 systemd-logind[1454]: Session 29 logged out. Waiting for processes to exit. Apr 16 01:24:33.519934 systemd-logind[1454]: Removed session 29. Apr 16 01:24:38.624015 systemd[1]: Started sshd@29-10.0.0.84:22-10.0.0.1:57782.service - OpenSSH per-connection server daemon (10.0.0.1:57782). 
Apr 16 01:24:39.369333 sshd[7138]: Accepted publickey for core from 10.0.0.1 port 57782 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:24:39.414179 sshd[7138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:24:39.544046 systemd-logind[1454]: New session 30 of user core. Apr 16 01:24:39.579531 systemd[1]: Started session-30.scope - Session 30 of User core. Apr 16 01:24:41.781995 sshd[7138]: pam_unix(sshd:session): session closed for user core Apr 16 01:24:41.947142 systemd[1]: sshd@29-10.0.0.84:22-10.0.0.1:57782.service: Deactivated successfully. Apr 16 01:24:42.015017 systemd[1]: session-30.scope: Deactivated successfully. Apr 16 01:24:42.021196 systemd[1]: session-30.scope: Consumed 1.205s CPU time. Apr 16 01:24:42.028843 systemd-logind[1454]: Session 30 logged out. Waiting for processes to exit. Apr 16 01:24:42.138457 systemd-logind[1454]: Removed session 30. Apr 16 01:24:47.030293 systemd[1]: Started sshd@30-10.0.0.84:22-10.0.0.1:54068.service - OpenSSH per-connection server daemon (10.0.0.1:54068). Apr 16 01:24:47.536379 sshd[7174]: Accepted publickey for core from 10.0.0.1 port 54068 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:24:47.562119 sshd[7174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:24:47.692140 systemd-logind[1454]: New session 31 of user core. Apr 16 01:24:47.744555 systemd[1]: Started session-31.scope - Session 31 of User core. Apr 16 01:24:50.061998 sshd[7174]: pam_unix(sshd:session): session closed for user core Apr 16 01:24:50.120423 systemd-logind[1454]: Session 31 logged out. Waiting for processes to exit. Apr 16 01:24:50.123627 systemd[1]: sshd@30-10.0.0.84:22-10.0.0.1:54068.service: Deactivated successfully. Apr 16 01:24:50.246106 systemd[1]: session-31.scope: Deactivated successfully. Apr 16 01:24:50.246599 systemd[1]: session-31.scope: Consumed 1.370s CPU time. 
Apr 16 01:24:50.253240 systemd-logind[1454]: Removed session 31. Apr 16 01:24:50.745303 kubelet[2548]: E0416 01:24:50.743433 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:24:55.117080 systemd[1]: Started sshd@31-10.0.0.84:22-10.0.0.1:37696.service - OpenSSH per-connection server daemon (10.0.0.1:37696). Apr 16 01:24:55.435808 sshd[7194]: Accepted publickey for core from 10.0.0.1 port 37696 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:24:55.441973 sshd[7194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:24:55.520064 systemd-logind[1454]: New session 32 of user core. Apr 16 01:24:55.542945 systemd[1]: Started session-32.scope - Session 32 of User core. Apr 16 01:24:55.788772 kubelet[2548]: E0416 01:24:55.786217 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:24:57.107492 sshd[7194]: pam_unix(sshd:session): session closed for user core Apr 16 01:24:57.153217 systemd-logind[1454]: Session 32 logged out. Waiting for processes to exit. Apr 16 01:24:57.153904 systemd[1]: sshd@31-10.0.0.84:22-10.0.0.1:37696.service: Deactivated successfully. Apr 16 01:24:57.160041 systemd[1]: session-32.scope: Deactivated successfully. Apr 16 01:24:57.214445 systemd-logind[1454]: Removed session 32. Apr 16 01:25:01.743894 kubelet[2548]: E0416 01:25:01.743209 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:25:02.227951 systemd[1]: Started sshd@32-10.0.0.84:22-10.0.0.1:58180.service - OpenSSH per-connection server daemon (10.0.0.1:58180). 
Apr 16 01:25:02.525307 sshd[7214]: Accepted publickey for core from 10.0.0.1 port 58180 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:25:02.528885 sshd[7214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:25:02.657479 systemd-logind[1454]: New session 33 of user core. Apr 16 01:25:02.680303 systemd[1]: Started session-33.scope - Session 33 of User core. Apr 16 01:25:03.745935 sshd[7214]: pam_unix(sshd:session): session closed for user core Apr 16 01:25:03.843046 systemd[1]: sshd@32-10.0.0.84:22-10.0.0.1:58180.service: Deactivated successfully. Apr 16 01:25:03.878337 systemd[1]: session-33.scope: Deactivated successfully. Apr 16 01:25:03.893238 systemd-logind[1454]: Session 33 logged out. Waiting for processes to exit. Apr 16 01:25:03.913231 systemd-logind[1454]: Removed session 33. Apr 16 01:25:04.035496 systemd[1]: run-containerd-runc-k8s.io-407adcfeb85f2ea58cc3cb0afdb7fe6b6fc4ac3074e5ae01309abf26a5edf351-runc.oo5jG8.mount: Deactivated successfully. Apr 16 01:25:08.859300 systemd[1]: Started sshd@33-10.0.0.84:22-10.0.0.1:58196.service - OpenSSH per-connection server daemon (10.0.0.1:58196). Apr 16 01:25:09.203225 sshd[7317]: Accepted publickey for core from 10.0.0.1 port 58196 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:25:09.227986 sshd[7317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:25:09.347232 systemd-logind[1454]: New session 34 of user core. Apr 16 01:25:09.361075 systemd[1]: Started session-34.scope - Session 34 of User core. 
Apr 16 01:25:09.735508 kubelet[2548]: E0416 01:25:09.735083 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 01:25:10.611649 sshd[7317]: pam_unix(sshd:session): session closed for user core
Apr 16 01:25:10.626110 systemd[1]: sshd@33-10.0.0.84:22-10.0.0.1:58196.service: Deactivated successfully.
Apr 16 01:25:10.632827 systemd[1]: session-34.scope: Deactivated successfully.
Apr 16 01:25:10.633962 systemd-logind[1454]: Session 34 logged out. Waiting for processes to exit.
Apr 16 01:25:10.638636 systemd-logind[1454]: Removed session 34.
Apr 16 01:25:15.674490 systemd[1]: Started sshd@34-10.0.0.84:22-10.0.0.1:34396.service - OpenSSH per-connection server daemon (10.0.0.1:34396).
Apr 16 01:25:15.837464 sshd[7354]: Accepted publickey for core from 10.0.0.1 port 34396 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4
Apr 16 01:25:15.841236 sshd[7354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 01:25:15.890650 systemd-logind[1454]: New session 35 of user core.
Apr 16 01:25:15.911521 systemd[1]: Started session-35.scope - Session 35 of User core.
Apr 16 01:25:16.366658 sshd[7354]: pam_unix(sshd:session): session closed for user core
Apr 16 01:25:16.410855 systemd[1]: sshd@34-10.0.0.84:22-10.0.0.1:34396.service: Deactivated successfully.
Apr 16 01:25:16.413666 systemd[1]: session-35.scope: Deactivated successfully.
Apr 16 01:25:16.420541 systemd-logind[1454]: Session 35 logged out. Waiting for processes to exit.
Apr 16 01:25:16.423101 systemd-logind[1454]: Removed session 35.
Apr 16 01:25:19.741736 kubelet[2548]: E0416 01:25:19.741569 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 01:25:21.477986 systemd[1]: Started sshd@35-10.0.0.84:22-10.0.0.1:53718.service - OpenSSH per-connection server daemon (10.0.0.1:53718).
Apr 16 01:25:21.707473 sshd[7369]: Accepted publickey for core from 10.0.0.1 port 53718 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4
Apr 16 01:25:21.728202 sshd[7369]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 01:25:21.756410 systemd-logind[1454]: New session 36 of user core.
Apr 16 01:25:21.767202 systemd[1]: Started session-36.scope - Session 36 of User core.
Apr 16 01:25:22.495075 sshd[7369]: pam_unix(sshd:session): session closed for user core
Apr 16 01:25:22.511660 systemd[1]: sshd@35-10.0.0.84:22-10.0.0.1:53718.service: Deactivated successfully.
Apr 16 01:25:22.514511 systemd[1]: session-36.scope: Deactivated successfully.
Apr 16 01:25:22.515448 systemd-logind[1454]: Session 36 logged out. Waiting for processes to exit.
Apr 16 01:25:22.521106 systemd-logind[1454]: Removed session 36.
Apr 16 01:25:24.740879 kubelet[2548]: E0416 01:25:24.739764 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 01:25:27.564346 systemd[1]: Started sshd@36-10.0.0.84:22-10.0.0.1:53720.service - OpenSSH per-connection server daemon (10.0.0.1:53720).
Apr 16 01:25:27.895823 sshd[7384]: Accepted publickey for core from 10.0.0.1 port 53720 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4
Apr 16 01:25:27.902542 sshd[7384]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 01:25:28.009930 systemd-logind[1454]: New session 37 of user core.
Apr 16 01:25:28.041523 systemd[1]: Started session-37.scope - Session 37 of User core.
Apr 16 01:25:29.056177 sshd[7384]: pam_unix(sshd:session): session closed for user core
Apr 16 01:25:29.103647 systemd[1]: sshd@36-10.0.0.84:22-10.0.0.1:53720.service: Deactivated successfully.
Apr 16 01:25:29.184107 systemd[1]: session-37.scope: Deactivated successfully.
Apr 16 01:25:29.190325 systemd-logind[1454]: Session 37 logged out. Waiting for processes to exit.
Apr 16 01:25:29.195429 systemd-logind[1454]: Removed session 37.
Apr 16 01:25:32.728218 kubelet[2548]: E0416 01:25:32.727388 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 01:25:34.118367 systemd[1]: Started sshd@37-10.0.0.84:22-10.0.0.1:55016.service - OpenSSH per-connection server daemon (10.0.0.1:55016).
Apr 16 01:25:34.473332 sshd[7419]: Accepted publickey for core from 10.0.0.1 port 55016 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4
Apr 16 01:25:34.481406 sshd[7419]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 01:25:34.500871 systemd-logind[1454]: New session 38 of user core.
Apr 16 01:25:34.526515 systemd[1]: Started session-38.scope - Session 38 of User core.
Apr 16 01:25:35.272543 sshd[7419]: pam_unix(sshd:session): session closed for user core
Apr 16 01:25:35.298606 systemd[1]: sshd@37-10.0.0.84:22-10.0.0.1:55016.service: Deactivated successfully.
Apr 16 01:25:35.309162 systemd[1]: session-38.scope: Deactivated successfully.
Apr 16 01:25:35.310620 systemd-logind[1454]: Session 38 logged out. Waiting for processes to exit.
Apr 16 01:25:35.345119 systemd-logind[1454]: Removed session 38.
Apr 16 01:25:40.344389 systemd[1]: Started sshd@38-10.0.0.84:22-10.0.0.1:42598.service - OpenSSH per-connection server daemon (10.0.0.1:42598).
Apr 16 01:25:40.723449 sshd[7458]: Accepted publickey for core from 10.0.0.1 port 42598 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4
Apr 16 01:25:40.727946 sshd[7458]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 01:25:40.749120 systemd-logind[1454]: New session 39 of user core.
Apr 16 01:25:40.759843 systemd[1]: Started session-39.scope - Session 39 of User core.
Apr 16 01:25:41.587181 sshd[7458]: pam_unix(sshd:session): session closed for user core
Apr 16 01:25:41.596270 systemd[1]: sshd@38-10.0.0.84:22-10.0.0.1:42598.service: Deactivated successfully.
Apr 16 01:25:41.601086 systemd[1]: session-39.scope: Deactivated successfully.
Apr 16 01:25:41.607637 systemd-logind[1454]: Session 39 logged out. Waiting for processes to exit.
Apr 16 01:25:41.611452 systemd-logind[1454]: Removed session 39.
Apr 16 01:25:46.690590 systemd[1]: Started sshd@39-10.0.0.84:22-10.0.0.1:42600.service - OpenSSH per-connection server daemon (10.0.0.1:42600).
Apr 16 01:25:46.813438 sshd[7494]: Accepted publickey for core from 10.0.0.1 port 42600 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4
Apr 16 01:25:46.818127 sshd[7494]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 01:25:46.834987 systemd-logind[1454]: New session 40 of user core.
Apr 16 01:25:46.849641 systemd[1]: Started session-40.scope - Session 40 of User core.
Apr 16 01:25:47.761594 sshd[7494]: pam_unix(sshd:session): session closed for user core
Apr 16 01:25:47.777467 systemd[1]: sshd@39-10.0.0.84:22-10.0.0.1:42600.service: Deactivated successfully.
Apr 16 01:25:47.790317 systemd[1]: session-40.scope: Deactivated successfully.
Apr 16 01:25:47.800764 systemd-logind[1454]: Session 40 logged out. Waiting for processes to exit.
Apr 16 01:25:47.806399 systemd-logind[1454]: Removed session 40.
Apr 16 01:25:52.903293 systemd[1]: Started sshd@40-10.0.0.84:22-10.0.0.1:37002.service - OpenSSH per-connection server daemon (10.0.0.1:37002).
Apr 16 01:25:53.298886 sshd[7510]: Accepted publickey for core from 10.0.0.1 port 37002 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4
Apr 16 01:25:53.301290 sshd[7510]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 01:25:53.328055 systemd-logind[1454]: New session 41 of user core.
Apr 16 01:25:53.343939 systemd[1]: Started session-41.scope - Session 41 of User core.
Apr 16 01:25:54.095266 sshd[7510]: pam_unix(sshd:session): session closed for user core
Apr 16 01:25:54.106429 systemd[1]: sshd@40-10.0.0.84:22-10.0.0.1:37002.service: Deactivated successfully.
Apr 16 01:25:54.120141 systemd[1]: session-41.scope: Deactivated successfully.
Apr 16 01:25:54.121802 systemd-logind[1454]: Session 41 logged out. Waiting for processes to exit.
Apr 16 01:25:54.135444 systemd-logind[1454]: Removed session 41.
Apr 16 01:25:59.222065 systemd[1]: Started sshd@41-10.0.0.84:22-10.0.0.1:37014.service - OpenSSH per-connection server daemon (10.0.0.1:37014).
Apr 16 01:25:59.425452 sshd[7562]: Accepted publickey for core from 10.0.0.1 port 37014 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4
Apr 16 01:25:59.443361 sshd[7562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 01:25:59.513246 systemd-logind[1454]: New session 42 of user core.
Apr 16 01:25:59.558752 systemd[1]: Started session-42.scope - Session 42 of User core.
Apr 16 01:25:59.727829 kubelet[2548]: E0416 01:25:59.727660 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 01:26:00.456158 sshd[7562]: pam_unix(sshd:session): session closed for user core
Apr 16 01:26:00.496811 systemd[1]: sshd@41-10.0.0.84:22-10.0.0.1:37014.service: Deactivated successfully.
Apr 16 01:26:00.501581 systemd[1]: session-42.scope: Deactivated successfully.
Apr 16 01:26:00.506872 systemd-logind[1454]: Session 42 logged out. Waiting for processes to exit.
Apr 16 01:26:00.512162 systemd-logind[1454]: Removed session 42.
Apr 16 01:26:05.622020 systemd[1]: Started sshd@42-10.0.0.84:22-10.0.0.1:33746.service - OpenSSH per-connection server daemon (10.0.0.1:33746).
Apr 16 01:26:05.956888 sshd[7661]: Accepted publickey for core from 10.0.0.1 port 33746 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4
Apr 16 01:26:05.966795 sshd[7661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 01:26:06.045449 systemd-logind[1454]: New session 43 of user core.
Apr 16 01:26:06.104385 systemd[1]: Started session-43.scope - Session 43 of User core.
Apr 16 01:26:07.230960 sshd[7661]: pam_unix(sshd:session): session closed for user core
Apr 16 01:26:07.301499 systemd[1]: sshd@42-10.0.0.84:22-10.0.0.1:33746.service: Deactivated successfully.
Apr 16 01:26:07.319557 systemd[1]: session-43.scope: Deactivated successfully.
Apr 16 01:26:07.345441 systemd-logind[1454]: Session 43 logged out. Waiting for processes to exit.
Apr 16 01:26:07.397334 systemd-logind[1454]: Removed session 43.
Apr 16 01:26:08.729600 kubelet[2548]: E0416 01:26:08.729407 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 01:26:11.117761 systemd[1]: run-containerd-runc-k8s.io-2fb7a90a7d3892c694c77b2c85fd0450372c54fa7f4c56bffedc6bca89c85d93-runc.k4KMny.mount: Deactivated successfully.
Apr 16 01:26:12.340258 systemd[1]: Started sshd@43-10.0.0.84:22-10.0.0.1:49034.service - OpenSSH per-connection server daemon (10.0.0.1:49034).
Apr 16 01:26:12.606348 sshd[7701]: Accepted publickey for core from 10.0.0.1 port 49034 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4
Apr 16 01:26:12.613844 sshd[7701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 01:26:12.671199 systemd-logind[1454]: New session 44 of user core.
Apr 16 01:26:12.756834 systemd[1]: Started session-44.scope - Session 44 of User core.
Apr 16 01:26:13.452752 sshd[7701]: pam_unix(sshd:session): session closed for user core
Apr 16 01:26:13.502141 systemd[1]: sshd@43-10.0.0.84:22-10.0.0.1:49034.service: Deactivated successfully.
Apr 16 01:26:13.516548 systemd[1]: session-44.scope: Deactivated successfully.
Apr 16 01:26:13.541522 systemd-logind[1454]: Session 44 logged out. Waiting for processes to exit.
Apr 16 01:26:13.548630 systemd-logind[1454]: Removed session 44.
Apr 16 01:26:15.794863 kubelet[2548]: E0416 01:26:15.793540 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 01:26:18.580952 systemd[1]: Started sshd@44-10.0.0.84:22-10.0.0.1:49044.service - OpenSSH per-connection server daemon (10.0.0.1:49044).
Apr 16 01:26:18.778766 sshd[7717]: Accepted publickey for core from 10.0.0.1 port 49044 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4
Apr 16 01:26:18.784419 sshd[7717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 01:26:18.800403 systemd-logind[1454]: New session 45 of user core.
Apr 16 01:26:18.823189 systemd[1]: Started session-45.scope - Session 45 of User core.
Apr 16 01:26:19.931350 sshd[7717]: pam_unix(sshd:session): session closed for user core
Apr 16 01:26:19.965206 systemd[1]: sshd@44-10.0.0.84:22-10.0.0.1:49044.service: Deactivated successfully.
Apr 16 01:26:19.978073 systemd[1]: session-45.scope: Deactivated successfully.
Apr 16 01:26:19.984627 systemd-logind[1454]: Session 45 logged out. Waiting for processes to exit.
Apr 16 01:26:20.028869 systemd[1]: Started sshd@45-10.0.0.84:22-10.0.0.1:35590.service - OpenSSH per-connection server daemon (10.0.0.1:35590).
Apr 16 01:26:20.049550 systemd-logind[1454]: Removed session 45.
Apr 16 01:26:20.212223 sshd[7732]: Accepted publickey for core from 10.0.0.1 port 35590 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4
Apr 16 01:26:20.217030 sshd[7732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 01:26:20.251496 systemd-logind[1454]: New session 46 of user core.
Apr 16 01:26:20.259629 systemd[1]: Started session-46.scope - Session 46 of User core.
Apr 16 01:26:21.733948 kubelet[2548]: E0416 01:26:21.733868 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 01:26:21.910641 sshd[7732]: pam_unix(sshd:session): session closed for user core
Apr 16 01:26:22.067581 systemd[1]: Started sshd@46-10.0.0.84:22-10.0.0.1:35594.service - OpenSSH per-connection server daemon (10.0.0.1:35594).
Apr 16 01:26:22.098925 systemd[1]: sshd@45-10.0.0.84:22-10.0.0.1:35590.service: Deactivated successfully.
Apr 16 01:26:22.155618 systemd[1]: session-46.scope: Deactivated successfully.
Apr 16 01:26:22.164490 systemd[1]: session-46.scope: Consumed 1.013s CPU time.
Apr 16 01:26:22.197193 systemd-logind[1454]: Session 46 logged out. Waiting for processes to exit.
Apr 16 01:26:22.212834 systemd-logind[1454]: Removed session 46.
Apr 16 01:26:22.416198 sshd[7742]: Accepted publickey for core from 10.0.0.1 port 35594 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4
Apr 16 01:26:22.425184 sshd[7742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 01:26:22.449306 systemd-logind[1454]: New session 47 of user core.
Apr 16 01:26:22.519572 systemd[1]: Started session-47.scope - Session 47 of User core.
Apr 16 01:26:23.247991 sshd[7742]: pam_unix(sshd:session): session closed for user core
Apr 16 01:26:23.321480 systemd[1]: sshd@46-10.0.0.84:22-10.0.0.1:35594.service: Deactivated successfully.
Apr 16 01:26:23.332367 systemd[1]: session-47.scope: Deactivated successfully.
Apr 16 01:26:23.333592 systemd-logind[1454]: Session 47 logged out. Waiting for processes to exit.
Apr 16 01:26:23.337020 systemd-logind[1454]: Removed session 47.
Apr 16 01:26:28.341465 systemd[1]: Started sshd@47-10.0.0.84:22-10.0.0.1:35608.service - OpenSSH per-connection server daemon (10.0.0.1:35608).
Apr 16 01:26:28.596647 sshd[7759]: Accepted publickey for core from 10.0.0.1 port 35608 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4
Apr 16 01:26:28.703843 sshd[7759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 01:26:28.732958 kubelet[2548]: E0416 01:26:28.732659 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 01:26:28.803742 systemd-logind[1454]: New session 48 of user core.
Apr 16 01:26:28.831243 systemd[1]: Started session-48.scope - Session 48 of User core.
Apr 16 01:26:29.937325 sshd[7759]: pam_unix(sshd:session): session closed for user core
Apr 16 01:26:29.957437 systemd[1]: sshd@47-10.0.0.84:22-10.0.0.1:35608.service: Deactivated successfully.
Apr 16 01:26:29.964192 systemd[1]: session-48.scope: Deactivated successfully.
Apr 16 01:26:29.979809 systemd-logind[1454]: Session 48 logged out. Waiting for processes to exit.
Apr 16 01:26:29.995265 systemd-logind[1454]: Removed session 48.
Apr 16 01:26:33.740959 kubelet[2548]: E0416 01:26:33.738449 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 01:26:35.004531 systemd[1]: Started sshd@48-10.0.0.84:22-10.0.0.1:58134.service - OpenSSH per-connection server daemon (10.0.0.1:58134).
Apr 16 01:26:35.350564 sshd[7804]: Accepted publickey for core from 10.0.0.1 port 58134 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4
Apr 16 01:26:35.357108 sshd[7804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 01:26:35.386607 systemd-logind[1454]: New session 49 of user core.
Apr 16 01:26:35.428589 systemd[1]: Started session-49.scope - Session 49 of User core.
Apr 16 01:26:36.866423 sshd[7804]: pam_unix(sshd:session): session closed for user core
Apr 16 01:26:36.895048 systemd[1]: sshd@48-10.0.0.84:22-10.0.0.1:58134.service: Deactivated successfully.
Apr 16 01:26:36.922336 systemd[1]: session-49.scope: Deactivated successfully.
Apr 16 01:26:36.928881 systemd-logind[1454]: Session 49 logged out. Waiting for processes to exit.
Apr 16 01:26:36.943496 systemd-logind[1454]: Removed session 49.
Apr 16 01:26:41.994386 systemd[1]: Started sshd@49-10.0.0.84:22-10.0.0.1:44224.service - OpenSSH per-connection server daemon (10.0.0.1:44224).
Apr 16 01:26:42.419106 sshd[7872]: Accepted publickey for core from 10.0.0.1 port 44224 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4
Apr 16 01:26:42.423423 sshd[7872]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 01:26:42.451284 systemd-logind[1454]: New session 50 of user core.
Apr 16 01:26:42.517796 systemd[1]: Started session-50.scope - Session 50 of User core.
Apr 16 01:26:43.429774 sshd[7872]: pam_unix(sshd:session): session closed for user core
Apr 16 01:26:43.449364 systemd[1]: sshd@49-10.0.0.84:22-10.0.0.1:44224.service: Deactivated successfully.
Apr 16 01:26:43.548081 systemd[1]: session-50.scope: Deactivated successfully.
Apr 16 01:26:43.561167 systemd-logind[1454]: Session 50 logged out. Waiting for processes to exit.
Apr 16 01:26:43.563772 systemd-logind[1454]: Removed session 50.
Apr 16 01:26:45.730621 kubelet[2548]: E0416 01:26:45.730461 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 01:26:48.523629 systemd[1]: Started sshd@50-10.0.0.84:22-10.0.0.1:44238.service - OpenSSH per-connection server daemon (10.0.0.1:44238).
Apr 16 01:26:48.900830 sshd[7887]: Accepted publickey for core from 10.0.0.1 port 44238 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4
Apr 16 01:26:48.904294 sshd[7887]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 01:26:48.957823 systemd-logind[1454]: New session 51 of user core.
Apr 16 01:26:49.031059 systemd[1]: Started session-51.scope - Session 51 of User core.
Apr 16 01:26:49.953080 sshd[7887]: pam_unix(sshd:session): session closed for user core
Apr 16 01:26:49.971780 systemd-logind[1454]: Session 51 logged out. Waiting for processes to exit.
Apr 16 01:26:49.976540 systemd[1]: sshd@50-10.0.0.84:22-10.0.0.1:44238.service: Deactivated successfully.
Apr 16 01:26:50.034994 systemd[1]: session-51.scope: Deactivated successfully.
Apr 16 01:26:50.047555 systemd-logind[1454]: Removed session 51.
Apr 16 01:26:55.106046 systemd[1]: Started sshd@51-10.0.0.84:22-10.0.0.1:60436.service - OpenSSH per-connection server daemon (10.0.0.1:60436).
Apr 16 01:26:55.530305 sshd[7901]: Accepted publickey for core from 10.0.0.1 port 60436 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4
Apr 16 01:26:55.545957 sshd[7901]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 01:26:55.658882 systemd-logind[1454]: New session 52 of user core.
Apr 16 01:26:55.748625 systemd[1]: Started session-52.scope - Session 52 of User core.
Apr 16 01:26:56.889813 sshd[7901]: pam_unix(sshd:session): session closed for user core
Apr 16 01:26:56.922375 systemd[1]: sshd@51-10.0.0.84:22-10.0.0.1:60436.service: Deactivated successfully.
Apr 16 01:26:56.969495 systemd[1]: session-52.scope: Deactivated successfully.
Apr 16 01:26:57.026656 systemd-logind[1454]: Session 52 logged out. Waiting for processes to exit.
Apr 16 01:26:57.038196 systemd-logind[1454]: Removed session 52.
Apr 16 01:27:02.017603 systemd[1]: Started sshd@52-10.0.0.84:22-10.0.0.1:55582.service - OpenSSH per-connection server daemon (10.0.0.1:55582).
Apr 16 01:27:02.303080 systemd[1]: run-containerd-runc-k8s.io-407adcfeb85f2ea58cc3cb0afdb7fe6b6fc4ac3074e5ae01309abf26a5edf351-runc.G81u2u.mount: Deactivated successfully.
Apr 16 01:27:02.485365 sshd[7918]: Accepted publickey for core from 10.0.0.1 port 55582 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4
Apr 16 01:27:02.488924 sshd[7918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 01:27:02.552216 systemd-logind[1454]: New session 53 of user core.
Apr 16 01:27:02.638559 systemd[1]: Started session-53.scope - Session 53 of User core.
Apr 16 01:27:03.712911 sshd[7918]: pam_unix(sshd:session): session closed for user core
Apr 16 01:27:03.761960 systemd[1]: sshd@52-10.0.0.84:22-10.0.0.1:55582.service: Deactivated successfully.
Apr 16 01:27:03.823368 systemd[1]: session-53.scope: Deactivated successfully.
Apr 16 01:27:03.828781 systemd-logind[1454]: Session 53 logged out. Waiting for processes to exit.
Apr 16 01:27:03.837994 systemd-logind[1454]: Removed session 53.
Apr 16 01:27:08.836251 systemd[1]: Started sshd@53-10.0.0.84:22-10.0.0.1:55596.service - OpenSSH per-connection server daemon (10.0.0.1:55596).
Apr 16 01:27:09.303429 sshd[8021]: Accepted publickey for core from 10.0.0.1 port 55596 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4
Apr 16 01:27:09.352825 sshd[8021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 01:27:09.417929 systemd-logind[1454]: New session 54 of user core.
Apr 16 01:27:09.426776 systemd[1]: Started session-54.scope - Session 54 of User core.
Apr 16 01:27:10.751507 sshd[8021]: pam_unix(sshd:session): session closed for user core
Apr 16 01:27:10.770528 systemd[1]: sshd@53-10.0.0.84:22-10.0.0.1:55596.service: Deactivated successfully.
Apr 16 01:27:10.818963 systemd[1]: session-54.scope: Deactivated successfully.
Apr 16 01:27:10.832202 systemd-logind[1454]: Session 54 logged out. Waiting for processes to exit.
Apr 16 01:27:10.837968 systemd-logind[1454]: Removed session 54.
Apr 16 01:27:15.938320 systemd[1]: Started sshd@54-10.0.0.84:22-10.0.0.1:59580.service - OpenSSH per-connection server daemon (10.0.0.1:59580).
Apr 16 01:27:16.081378 sshd[8059]: Accepted publickey for core from 10.0.0.1 port 59580 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4
Apr 16 01:27:16.086447 sshd[8059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 01:27:16.107510 systemd-logind[1454]: New session 55 of user core.
Apr 16 01:27:16.122331 systemd[1]: Started session-55.scope - Session 55 of User core.
Apr 16 01:27:17.109446 sshd[8059]: pam_unix(sshd:session): session closed for user core
Apr 16 01:27:17.130547 systemd[1]: sshd@54-10.0.0.84:22-10.0.0.1:59580.service: Deactivated successfully.
Apr 16 01:27:17.139847 systemd[1]: session-55.scope: Deactivated successfully.
Apr 16 01:27:17.158024 systemd-logind[1454]: Session 55 logged out. Waiting for processes to exit.
Apr 16 01:27:17.169948 systemd-logind[1454]: Removed session 55.
Apr 16 01:27:17.736202 kubelet[2548]: E0416 01:27:17.734417 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 01:27:22.212341 systemd[1]: Started sshd@55-10.0.0.84:22-10.0.0.1:47476.service - OpenSSH per-connection server daemon (10.0.0.1:47476).
Apr 16 01:27:22.383441 sshd[8073]: Accepted publickey for core from 10.0.0.1 port 47476 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4
Apr 16 01:27:22.391109 sshd[8073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 01:27:22.456929 systemd-logind[1454]: New session 56 of user core.
Apr 16 01:27:22.465974 systemd[1]: Started session-56.scope - Session 56 of User core.
Apr 16 01:27:22.739363 kubelet[2548]: E0416 01:27:22.739077 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 01:27:23.252562 sshd[8073]: pam_unix(sshd:session): session closed for user core
Apr 16 01:27:23.269368 systemd[1]: sshd@55-10.0.0.84:22-10.0.0.1:47476.service: Deactivated successfully.
Apr 16 01:27:23.313280 systemd[1]: session-56.scope: Deactivated successfully.
Apr 16 01:27:23.316651 systemd-logind[1454]: Session 56 logged out. Waiting for processes to exit.
Apr 16 01:27:23.323476 systemd-logind[1454]: Removed session 56.
Apr 16 01:27:25.729838 kubelet[2548]: E0416 01:27:25.729606 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 01:27:28.326565 systemd[1]: Started sshd@56-10.0.0.84:22-10.0.0.1:47486.service - OpenSSH per-connection server daemon (10.0.0.1:47486).
Apr 16 01:27:28.573833 sshd[8104]: Accepted publickey for core from 10.0.0.1 port 47486 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4
Apr 16 01:27:28.577876 sshd[8104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 01:27:28.611103 systemd-logind[1454]: New session 57 of user core.
Apr 16 01:27:28.662554 systemd[1]: Started session-57.scope - Session 57 of User core.
Apr 16 01:27:29.427714 sshd[8104]: pam_unix(sshd:session): session closed for user core
Apr 16 01:27:29.464552 systemd[1]: sshd@56-10.0.0.84:22-10.0.0.1:47486.service: Deactivated successfully.
Apr 16 01:27:29.522201 systemd[1]: session-57.scope: Deactivated successfully.
Apr 16 01:27:29.542282 systemd-logind[1454]: Session 57 logged out. Waiting for processes to exit.
Apr 16 01:27:29.543833 systemd-logind[1454]: Removed session 57.
Apr 16 01:27:34.529642 systemd[1]: Started sshd@57-10.0.0.84:22-10.0.0.1:34438.service - OpenSSH per-connection server daemon (10.0.0.1:34438).
Apr 16 01:27:34.822585 sshd[8163]: Accepted publickey for core from 10.0.0.1 port 34438 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4
Apr 16 01:27:34.828489 sshd[8163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 01:27:34.938788 systemd-logind[1454]: New session 58 of user core.
Apr 16 01:27:34.959293 systemd[1]: Started session-58.scope - Session 58 of User core.
Apr 16 01:27:35.736352 kubelet[2548]: E0416 01:27:35.736150 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 01:27:36.337418 sshd[8163]: pam_unix(sshd:session): session closed for user core
Apr 16 01:27:36.361510 systemd[1]: sshd@57-10.0.0.84:22-10.0.0.1:34438.service: Deactivated successfully.
Apr 16 01:27:36.365970 systemd[1]: session-58.scope: Deactivated successfully.
Apr 16 01:27:36.373044 systemd-logind[1454]: Session 58 logged out. Waiting for processes to exit.
Apr 16 01:27:36.382624 systemd-logind[1454]: Removed session 58.
Apr 16 01:27:41.400230 systemd[1]: Started sshd@58-10.0.0.84:22-10.0.0.1:60262.service - OpenSSH per-connection server daemon (10.0.0.1:60262).
Apr 16 01:27:42.073298 sshd[8235]: Accepted publickey for core from 10.0.0.1 port 60262 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4
Apr 16 01:27:42.078022 sshd[8235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 01:27:42.110270 systemd-logind[1454]: New session 59 of user core.
Apr 16 01:27:42.128795 systemd[1]: Started session-59.scope - Session 59 of User core.
Apr 16 01:27:42.928360 sshd[8235]: pam_unix(sshd:session): session closed for user core
Apr 16 01:27:42.961416 systemd-logind[1454]: Session 59 logged out. Waiting for processes to exit.
Apr 16 01:27:42.962162 systemd[1]: sshd@58-10.0.0.84:22-10.0.0.1:60262.service: Deactivated successfully.
Apr 16 01:27:42.981605 systemd[1]: session-59.scope: Deactivated successfully.
Apr 16 01:27:42.995033 systemd-logind[1454]: Removed session 59.
Apr 16 01:27:45.763377 kubelet[2548]: E0416 01:27:45.762454 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 01:27:48.093912 systemd[1]: Started sshd@59-10.0.0.84:22-10.0.0.1:60270.service - OpenSSH per-connection server daemon (10.0.0.1:60270).
Apr 16 01:27:48.361032 sshd[8250]: Accepted publickey for core from 10.0.0.1 port 60270 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4
Apr 16 01:27:48.385867 sshd[8250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 01:27:48.515053 systemd-logind[1454]: New session 60 of user core.
Apr 16 01:27:48.541877 systemd[1]: Started session-60.scope - Session 60 of User core.
Apr 16 01:27:49.438896 sshd[8250]: pam_unix(sshd:session): session closed for user core
Apr 16 01:27:49.465894 systemd[1]: sshd@59-10.0.0.84:22-10.0.0.1:60270.service: Deactivated successfully.
Apr 16 01:27:49.494513 systemd[1]: session-60.scope: Deactivated successfully.
Apr 16 01:27:49.496864 systemd-logind[1454]: Session 60 logged out. Waiting for processes to exit.
Apr 16 01:27:49.501883 systemd-logind[1454]: Removed session 60.
Apr 16 01:27:54.507369 systemd[1]: Started sshd@60-10.0.0.84:22-10.0.0.1:41976.service - OpenSSH per-connection server daemon (10.0.0.1:41976).
Apr 16 01:27:54.727375 sshd[8268]: Accepted publickey for core from 10.0.0.1 port 41976 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4
Apr 16 01:27:54.744036 sshd[8268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 01:27:54.817817 systemd-logind[1454]: New session 61 of user core.
Apr 16 01:27:54.841597 systemd[1]: Started session-61.scope - Session 61 of User core.
Apr 16 01:27:55.900758 sshd[8268]: pam_unix(sshd:session): session closed for user core
Apr 16 01:27:55.976974 systemd[1]: sshd@60-10.0.0.84:22-10.0.0.1:41976.service: Deactivated successfully.
Apr 16 01:27:55.994062 systemd[1]: session-61.scope: Deactivated successfully.
Apr 16 01:27:56.001991 systemd-logind[1454]: Session 61 logged out. Waiting for processes to exit.
Apr 16 01:27:56.007926 systemd-logind[1454]: Removed session 61.
Apr 16 01:28:01.051759 systemd[1]: Started sshd@61-10.0.0.84:22-10.0.0.1:45106.service - OpenSSH per-connection server daemon (10.0.0.1:45106).
Apr 16 01:28:01.318978 sshd[8286]: Accepted publickey for core from 10.0.0.1 port 45106 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4
Apr 16 01:28:01.344248 sshd[8286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 01:28:01.411603 systemd-logind[1454]: New session 62 of user core.
Apr 16 01:28:01.444626 systemd[1]: Started session-62.scope - Session 62 of User core. Apr 16 01:28:02.312749 sshd[8286]: pam_unix(sshd:session): session closed for user core Apr 16 01:28:02.360206 systemd[1]: sshd@61-10.0.0.84:22-10.0.0.1:45106.service: Deactivated successfully. Apr 16 01:28:02.447289 systemd[1]: session-62.scope: Deactivated successfully. Apr 16 01:28:02.460493 systemd-logind[1454]: Session 62 logged out. Waiting for processes to exit. Apr 16 01:28:02.480554 systemd-logind[1454]: Removed session 62. Apr 16 01:28:03.733259 kubelet[2548]: E0416 01:28:03.733203 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:28:07.390184 systemd[1]: Started sshd@62-10.0.0.84:22-10.0.0.1:45108.service - OpenSSH per-connection server daemon (10.0.0.1:45108). Apr 16 01:28:07.736457 sshd[8388]: Accepted publickey for core from 10.0.0.1 port 45108 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:28:07.738659 sshd[8388]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:28:07.782158 systemd-logind[1454]: New session 63 of user core. Apr 16 01:28:07.799345 systemd[1]: Started session-63.scope - Session 63 of User core. Apr 16 01:28:09.121201 sshd[8388]: pam_unix(sshd:session): session closed for user core Apr 16 01:28:09.185034 systemd[1]: sshd@62-10.0.0.84:22-10.0.0.1:45108.service: Deactivated successfully. Apr 16 01:28:09.209366 systemd[1]: session-63.scope: Deactivated successfully. Apr 16 01:28:09.261908 systemd-logind[1454]: Session 63 logged out. Waiting for processes to exit. Apr 16 01:28:09.284283 systemd-logind[1454]: Removed session 63. Apr 16 01:28:14.339500 systemd[1]: Started sshd@63-10.0.0.84:22-10.0.0.1:52896.service - OpenSSH per-connection server daemon (10.0.0.1:52896). 
Apr 16 01:28:14.629160 sshd[8427]: Accepted publickey for core from 10.0.0.1 port 52896 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:28:14.631328 sshd[8427]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:28:14.648772 systemd-logind[1454]: New session 64 of user core. Apr 16 01:28:14.662028 systemd[1]: Started session-64.scope - Session 64 of User core. Apr 16 01:28:14.769327 kubelet[2548]: E0416 01:28:14.769173 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:28:15.431391 sshd[8427]: pam_unix(sshd:session): session closed for user core Apr 16 01:28:15.507395 systemd[1]: sshd@63-10.0.0.84:22-10.0.0.1:52896.service: Deactivated successfully. Apr 16 01:28:15.527583 systemd[1]: session-64.scope: Deactivated successfully. Apr 16 01:28:15.531544 systemd-logind[1454]: Session 64 logged out. Waiting for processes to exit. Apr 16 01:28:15.550826 systemd-logind[1454]: Removed session 64. Apr 16 01:28:20.470542 systemd[1]: Started sshd@64-10.0.0.84:22-10.0.0.1:43294.service - OpenSSH per-connection server daemon (10.0.0.1:43294). Apr 16 01:28:20.654999 sshd[8442]: Accepted publickey for core from 10.0.0.1 port 43294 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:28:20.671177 sshd[8442]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:28:20.732006 systemd-logind[1454]: New session 65 of user core. Apr 16 01:28:20.741884 systemd[1]: Started session-65.scope - Session 65 of User core. Apr 16 01:28:21.549012 sshd[8442]: pam_unix(sshd:session): session closed for user core Apr 16 01:28:21.633052 systemd[1]: sshd@64-10.0.0.84:22-10.0.0.1:43294.service: Deactivated successfully. Apr 16 01:28:21.635961 systemd[1]: session-65.scope: Deactivated successfully. 
Apr 16 01:28:21.646297 systemd-logind[1454]: Session 65 logged out. Waiting for processes to exit. Apr 16 01:28:21.654046 systemd-logind[1454]: Removed session 65. Apr 16 01:28:26.615957 systemd[1]: Started sshd@65-10.0.0.84:22-10.0.0.1:43310.service - OpenSSH per-connection server daemon (10.0.0.1:43310). Apr 16 01:28:26.757393 sshd[8458]: Accepted publickey for core from 10.0.0.1 port 43310 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:28:26.759579 sshd[8458]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:28:26.795756 systemd-logind[1454]: New session 66 of user core. Apr 16 01:28:26.811197 systemd[1]: Started session-66.scope - Session 66 of User core. Apr 16 01:28:27.792851 sshd[8458]: pam_unix(sshd:session): session closed for user core Apr 16 01:28:27.810165 systemd[1]: sshd@65-10.0.0.84:22-10.0.0.1:43310.service: Deactivated successfully. Apr 16 01:28:27.850996 systemd[1]: session-66.scope: Deactivated successfully. Apr 16 01:28:27.857707 systemd-logind[1454]: Session 66 logged out. Waiting for processes to exit. Apr 16 01:28:27.862735 systemd-logind[1454]: Removed session 66. Apr 16 01:28:32.828564 systemd[1]: Started sshd@66-10.0.0.84:22-10.0.0.1:42262.service - OpenSSH per-connection server daemon (10.0.0.1:42262). Apr 16 01:28:32.990310 sshd[8472]: Accepted publickey for core from 10.0.0.1 port 42262 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:28:32.998178 sshd[8472]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:28:33.018422 systemd-logind[1454]: New session 67 of user core. Apr 16 01:28:33.024990 systemd[1]: Started session-67.scope - Session 67 of User core. Apr 16 01:28:33.485140 sshd[8472]: pam_unix(sshd:session): session closed for user core Apr 16 01:28:33.504365 systemd[1]: sshd@66-10.0.0.84:22-10.0.0.1:42262.service: Deactivated successfully. 
Apr 16 01:28:33.579234 systemd[1]: session-67.scope: Deactivated successfully. Apr 16 01:28:33.586942 systemd-logind[1454]: Session 67 logged out. Waiting for processes to exit. Apr 16 01:28:33.602481 systemd-logind[1454]: Removed session 67. Apr 16 01:28:35.751429 kubelet[2548]: E0416 01:28:35.750493 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:28:36.739750 kubelet[2548]: E0416 01:28:36.736458 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:28:38.582324 systemd[1]: Started sshd@67-10.0.0.84:22-10.0.0.1:42272.service - OpenSSH per-connection server daemon (10.0.0.1:42272). Apr 16 01:28:38.960409 sshd[8532]: Accepted publickey for core from 10.0.0.1 port 42272 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:28:38.986552 sshd[8532]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:28:39.127545 systemd-logind[1454]: New session 68 of user core. Apr 16 01:28:39.174011 systemd[1]: Started session-68.scope - Session 68 of User core. Apr 16 01:28:40.321072 sshd[8532]: pam_unix(sshd:session): session closed for user core Apr 16 01:28:40.362032 systemd[1]: sshd@67-10.0.0.84:22-10.0.0.1:42272.service: Deactivated successfully. Apr 16 01:28:40.382383 systemd[1]: session-68.scope: Deactivated successfully. Apr 16 01:28:40.386230 systemd-logind[1454]: Session 68 logged out. Waiting for processes to exit. Apr 16 01:28:40.390457 systemd-logind[1454]: Removed session 68. Apr 16 01:28:45.441391 systemd[1]: Started sshd@68-10.0.0.84:22-10.0.0.1:45064.service - OpenSSH per-connection server daemon (10.0.0.1:45064). 
Apr 16 01:28:45.646327 sshd[8569]: Accepted publickey for core from 10.0.0.1 port 45064 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:28:45.649432 sshd[8569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:28:45.704441 systemd-logind[1454]: New session 69 of user core. Apr 16 01:28:45.725382 systemd[1]: Started session-69.scope - Session 69 of User core. Apr 16 01:28:46.286574 sshd[8569]: pam_unix(sshd:session): session closed for user core Apr 16 01:28:46.296282 systemd[1]: sshd@68-10.0.0.84:22-10.0.0.1:45064.service: Deactivated successfully. Apr 16 01:28:46.304175 systemd[1]: session-69.scope: Deactivated successfully. Apr 16 01:28:46.309364 systemd-logind[1454]: Session 69 logged out. Waiting for processes to exit. Apr 16 01:28:46.313148 systemd-logind[1454]: Removed session 69. Apr 16 01:28:46.736898 kubelet[2548]: E0416 01:28:46.735648 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:28:48.732198 kubelet[2548]: E0416 01:28:48.731403 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:28:51.428360 systemd[1]: Started sshd@69-10.0.0.84:22-10.0.0.1:47232.service - OpenSSH per-connection server daemon (10.0.0.1:47232). Apr 16 01:28:51.663934 sshd[8583]: Accepted publickey for core from 10.0.0.1 port 47232 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:28:51.741995 sshd[8583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:28:51.791840 systemd-logind[1454]: New session 70 of user core. Apr 16 01:28:51.800189 systemd[1]: Started session-70.scope - Session 70 of User core. 
Apr 16 01:28:52.694488 sshd[8583]: pam_unix(sshd:session): session closed for user core Apr 16 01:28:52.718114 systemd[1]: sshd@69-10.0.0.84:22-10.0.0.1:47232.service: Deactivated successfully. Apr 16 01:28:52.721433 systemd[1]: session-70.scope: Deactivated successfully. Apr 16 01:28:52.728359 systemd-logind[1454]: Session 70 logged out. Waiting for processes to exit. Apr 16 01:28:52.774327 systemd-logind[1454]: Removed session 70. Apr 16 01:28:57.891347 systemd[1]: Started sshd@70-10.0.0.84:22-10.0.0.1:47246.service - OpenSSH per-connection server daemon (10.0.0.1:47246). Apr 16 01:28:58.333204 sshd[8597]: Accepted publickey for core from 10.0.0.1 port 47246 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:28:58.349524 sshd[8597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:28:58.489165 systemd-logind[1454]: New session 71 of user core. Apr 16 01:28:58.508118 systemd[1]: Started session-71.scope - Session 71 of User core. Apr 16 01:28:58.762210 kubelet[2548]: E0416 01:28:58.759999 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:28:59.971226 sshd[8597]: pam_unix(sshd:session): session closed for user core Apr 16 01:29:00.045651 systemd[1]: sshd@70-10.0.0.84:22-10.0.0.1:47246.service: Deactivated successfully. Apr 16 01:29:00.058924 systemd[1]: session-71.scope: Deactivated successfully. Apr 16 01:29:00.119458 systemd-logind[1454]: Session 71 logged out. Waiting for processes to exit. Apr 16 01:29:00.160659 systemd-logind[1454]: Removed session 71. Apr 16 01:29:05.038312 systemd[1]: Started sshd@71-10.0.0.84:22-10.0.0.1:48752.service - OpenSSH per-connection server daemon (10.0.0.1:48752). 
Apr 16 01:29:05.316569 sshd[8707]: Accepted publickey for core from 10.0.0.1 port 48752 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:29:05.332814 sshd[8707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:29:05.401077 systemd-logind[1454]: New session 72 of user core. Apr 16 01:29:05.447585 systemd[1]: Started session-72.scope - Session 72 of User core. Apr 16 01:29:07.016984 sshd[8707]: pam_unix(sshd:session): session closed for user core Apr 16 01:29:07.041524 systemd[1]: sshd@71-10.0.0.84:22-10.0.0.1:48752.service: Deactivated successfully. Apr 16 01:29:07.087920 systemd[1]: session-72.scope: Deactivated successfully. Apr 16 01:29:07.094227 systemd-logind[1454]: Session 72 logged out. Waiting for processes to exit. Apr 16 01:29:07.096994 systemd-logind[1454]: Removed session 72. Apr 16 01:29:11.755551 kubelet[2548]: E0416 01:29:11.755102 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:29:12.328559 systemd[1]: Started sshd@72-10.0.0.84:22-10.0.0.1:48370.service - OpenSSH per-connection server daemon (10.0.0.1:48370). Apr 16 01:29:13.392932 sshd[8771]: Accepted publickey for core from 10.0.0.1 port 48370 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:29:13.395878 sshd[8771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:29:13.514914 systemd-logind[1454]: New session 73 of user core. Apr 16 01:29:13.531123 systemd[1]: Started session-73.scope - Session 73 of User core. Apr 16 01:29:14.922844 sshd[8771]: pam_unix(sshd:session): session closed for user core Apr 16 01:29:15.005913 systemd[1]: sshd@72-10.0.0.84:22-10.0.0.1:48370.service: Deactivated successfully. Apr 16 01:29:15.023642 systemd[1]: session-73.scope: Deactivated successfully. 
Apr 16 01:29:15.030780 systemd-logind[1454]: Session 73 logged out. Waiting for processes to exit. Apr 16 01:29:15.122435 systemd-logind[1454]: Removed session 73. Apr 16 01:29:20.082808 systemd[1]: Started sshd@73-10.0.0.84:22-10.0.0.1:51466.service - OpenSSH per-connection server daemon (10.0.0.1:51466). Apr 16 01:29:20.322795 sshd[8786]: Accepted publickey for core from 10.0.0.1 port 51466 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:29:20.337981 sshd[8786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:29:20.434461 systemd-logind[1454]: New session 74 of user core. Apr 16 01:29:20.451335 systemd[1]: Started session-74.scope - Session 74 of User core. Apr 16 01:29:21.732402 kubelet[2548]: E0416 01:29:21.731396 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:29:21.811906 sshd[8786]: pam_unix(sshd:session): session closed for user core Apr 16 01:29:21.918425 systemd[1]: sshd@73-10.0.0.84:22-10.0.0.1:51466.service: Deactivated successfully. Apr 16 01:29:21.957050 systemd[1]: session-74.scope: Deactivated successfully. Apr 16 01:29:22.012558 systemd-logind[1454]: Session 74 logged out. Waiting for processes to exit. Apr 16 01:29:22.029217 systemd-logind[1454]: Removed session 74. Apr 16 01:29:26.997423 systemd[1]: Started sshd@74-10.0.0.84:22-10.0.0.1:51470.service - OpenSSH per-connection server daemon (10.0.0.1:51470). Apr 16 01:29:27.247875 sshd[8800]: Accepted publickey for core from 10.0.0.1 port 51470 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:29:27.284200 sshd[8800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:29:27.426330 systemd-logind[1454]: New session 75 of user core. Apr 16 01:29:27.445901 systemd[1]: Started session-75.scope - Session 75 of User core. 
Apr 16 01:29:28.688242 sshd[8800]: pam_unix(sshd:session): session closed for user core Apr 16 01:29:28.759734 systemd[1]: sshd@74-10.0.0.84:22-10.0.0.1:51470.service: Deactivated successfully. Apr 16 01:29:28.824520 systemd[1]: session-75.scope: Deactivated successfully. Apr 16 01:29:28.830859 systemd-logind[1454]: Session 75 logged out. Waiting for processes to exit. Apr 16 01:29:28.833803 systemd-logind[1454]: Removed session 75. Apr 16 01:29:33.769150 systemd[1]: Started sshd@75-10.0.0.84:22-10.0.0.1:46344.service - OpenSSH per-connection server daemon (10.0.0.1:46344). Apr 16 01:29:34.032260 sshd[8826]: Accepted publickey for core from 10.0.0.1 port 46344 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:29:34.055638 sshd[8826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:29:34.098029 systemd-logind[1454]: New session 76 of user core. Apr 16 01:29:34.109158 systemd[1]: Started session-76.scope - Session 76 of User core. Apr 16 01:29:35.106281 sshd[8826]: pam_unix(sshd:session): session closed for user core Apr 16 01:29:35.129527 systemd-logind[1454]: Session 76 logged out. Waiting for processes to exit. Apr 16 01:29:35.140585 systemd[1]: sshd@75-10.0.0.84:22-10.0.0.1:46344.service: Deactivated successfully. Apr 16 01:29:35.223019 systemd[1]: session-76.scope: Deactivated successfully. Apr 16 01:29:35.236401 systemd-logind[1454]: Removed session 76. Apr 16 01:29:40.132413 systemd[1]: Started sshd@76-10.0.0.84:22-10.0.0.1:58896.service - OpenSSH per-connection server daemon (10.0.0.1:58896). Apr 16 01:29:40.531565 sshd[8886]: Accepted publickey for core from 10.0.0.1 port 58896 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:29:40.535792 sshd[8886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:29:40.563745 systemd-logind[1454]: New session 77 of user core. 
Apr 16 01:29:40.582648 systemd[1]: Started session-77.scope - Session 77 of User core. Apr 16 01:29:41.918293 sshd[8886]: pam_unix(sshd:session): session closed for user core Apr 16 01:29:41.950849 systemd-logind[1454]: Session 77 logged out. Waiting for processes to exit. Apr 16 01:29:41.957461 systemd[1]: sshd@76-10.0.0.84:22-10.0.0.1:58896.service: Deactivated successfully. Apr 16 01:29:41.973550 systemd[1]: session-77.scope: Deactivated successfully. Apr 16 01:29:41.982639 systemd-logind[1454]: Removed session 77. Apr 16 01:29:42.727916 kubelet[2548]: E0416 01:29:42.727766 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:29:46.920995 systemd[1]: Started sshd@77-10.0.0.84:22-10.0.0.1:58902.service - OpenSSH per-connection server daemon (10.0.0.1:58902). Apr 16 01:29:47.087247 sshd[8927]: Accepted publickey for core from 10.0.0.1 port 58902 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:29:47.088986 sshd[8927]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:29:47.100504 systemd-logind[1454]: New session 78 of user core. Apr 16 01:29:47.116586 systemd[1]: Started session-78.scope - Session 78 of User core. Apr 16 01:29:47.739895 sshd[8927]: pam_unix(sshd:session): session closed for user core Apr 16 01:29:47.842255 systemd[1]: sshd@77-10.0.0.84:22-10.0.0.1:58902.service: Deactivated successfully. Apr 16 01:29:47.850864 systemd[1]: session-78.scope: Deactivated successfully. Apr 16 01:29:47.871928 systemd-logind[1454]: Session 78 logged out. Waiting for processes to exit. Apr 16 01:29:47.878951 systemd-logind[1454]: Removed session 78. 
Apr 16 01:29:50.743265 kubelet[2548]: E0416 01:29:50.742120 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:29:52.999974 systemd[1]: Started sshd@78-10.0.0.84:22-10.0.0.1:53142.service - OpenSSH per-connection server daemon (10.0.0.1:53142). Apr 16 01:29:53.414398 sshd[8943]: Accepted publickey for core from 10.0.0.1 port 53142 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:29:53.428231 sshd[8943]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:29:53.638968 systemd-logind[1454]: New session 79 of user core. Apr 16 01:29:53.656093 systemd[1]: Started session-79.scope - Session 79 of User core. Apr 16 01:29:54.548113 sshd[8943]: pam_unix(sshd:session): session closed for user core Apr 16 01:29:54.651061 systemd[1]: sshd@78-10.0.0.84:22-10.0.0.1:53142.service: Deactivated successfully. Apr 16 01:29:54.656243 systemd[1]: session-79.scope: Deactivated successfully. Apr 16 01:29:54.658567 systemd-logind[1454]: Session 79 logged out. Waiting for processes to exit. Apr 16 01:29:54.661488 systemd-logind[1454]: Removed session 79. Apr 16 01:29:55.729043 kubelet[2548]: E0416 01:29:55.728598 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:29:59.647449 systemd[1]: Started sshd@79-10.0.0.84:22-10.0.0.1:36438.service - OpenSSH per-connection server daemon (10.0.0.1:36438). Apr 16 01:30:00.022572 sshd[8961]: Accepted publickey for core from 10.0.0.1 port 36438 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:30:00.064402 sshd[8961]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:30:00.181203 systemd-logind[1454]: New session 80 of user core. 
Apr 16 01:30:00.198073 systemd[1]: Started session-80.scope - Session 80 of User core. Apr 16 01:30:00.993302 sshd[8961]: pam_unix(sshd:session): session closed for user core Apr 16 01:30:01.013216 systemd[1]: sshd@79-10.0.0.84:22-10.0.0.1:36438.service: Deactivated successfully. Apr 16 01:30:01.022451 systemd[1]: session-80.scope: Deactivated successfully. Apr 16 01:30:01.068294 systemd-logind[1454]: Session 80 logged out. Waiting for processes to exit. Apr 16 01:30:01.156847 systemd-logind[1454]: Removed session 80. Apr 16 01:30:02.222224 systemd[1]: run-containerd-runc-k8s.io-407adcfeb85f2ea58cc3cb0afdb7fe6b6fc4ac3074e5ae01309abf26a5edf351-runc.OyRz78.mount: Deactivated successfully. Apr 16 01:30:06.055748 systemd[1]: Started sshd@80-10.0.0.84:22-10.0.0.1:36442.service - OpenSSH per-connection server daemon (10.0.0.1:36442). Apr 16 01:30:06.311814 sshd[9058]: Accepted publickey for core from 10.0.0.1 port 36442 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:30:06.321025 sshd[9058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:30:06.347908 systemd-logind[1454]: New session 81 of user core. Apr 16 01:30:06.362567 systemd[1]: Started session-81.scope - Session 81 of User core. Apr 16 01:30:07.099659 sshd[9058]: pam_unix(sshd:session): session closed for user core Apr 16 01:30:07.112079 systemd[1]: sshd@80-10.0.0.84:22-10.0.0.1:36442.service: Deactivated successfully. Apr 16 01:30:07.121934 systemd[1]: session-81.scope: Deactivated successfully. Apr 16 01:30:07.123018 systemd-logind[1454]: Session 81 logged out. Waiting for processes to exit. Apr 16 01:30:07.132635 systemd-logind[1454]: Removed session 81. 
Apr 16 01:30:07.758795 kubelet[2548]: E0416 01:30:07.758121 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:30:12.160142 systemd[1]: Started sshd@81-10.0.0.84:22-10.0.0.1:32778.service - OpenSSH per-connection server daemon (10.0.0.1:32778). Apr 16 01:30:12.433267 sshd[9095]: Accepted publickey for core from 10.0.0.1 port 32778 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:30:12.443289 sshd[9095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:30:12.499542 systemd-logind[1454]: New session 82 of user core. Apr 16 01:30:12.519236 systemd[1]: Started session-82.scope - Session 82 of User core. Apr 16 01:30:13.046082 sshd[9095]: pam_unix(sshd:session): session closed for user core Apr 16 01:30:13.108639 systemd[1]: sshd@81-10.0.0.84:22-10.0.0.1:32778.service: Deactivated successfully. Apr 16 01:30:13.122877 systemd[1]: session-82.scope: Deactivated successfully. Apr 16 01:30:13.137121 systemd-logind[1454]: Session 82 logged out. Waiting for processes to exit. Apr 16 01:30:13.158635 systemd[1]: Started sshd@82-10.0.0.84:22-10.0.0.1:32786.service - OpenSSH per-connection server daemon (10.0.0.1:32786). Apr 16 01:30:13.168128 systemd-logind[1454]: Removed session 82. Apr 16 01:30:13.295189 sshd[9109]: Accepted publickey for core from 10.0.0.1 port 32786 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:30:13.298211 sshd[9109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:30:13.306166 systemd-logind[1454]: New session 83 of user core. Apr 16 01:30:13.318790 systemd[1]: Started session-83.scope - Session 83 of User core. Apr 16 01:30:15.762474 sshd[9109]: pam_unix(sshd:session): session closed for user core Apr 16 01:30:15.835282 systemd[1]: sshd@82-10.0.0.84:22-10.0.0.1:32786.service: Deactivated successfully. 
Apr 16 01:30:15.866240 systemd[1]: session-83.scope: Deactivated successfully. Apr 16 01:30:15.883841 systemd[1]: session-83.scope: Consumed 1.145s CPU time. Apr 16 01:30:15.885306 systemd-logind[1454]: Session 83 logged out. Waiting for processes to exit. Apr 16 01:30:15.891343 systemd-logind[1454]: Removed session 83. Apr 16 01:30:15.913806 systemd[1]: Started sshd@83-10.0.0.84:22-10.0.0.1:32790.service - OpenSSH per-connection server daemon (10.0.0.1:32790). Apr 16 01:30:16.312133 sshd[9123]: Accepted publickey for core from 10.0.0.1 port 32790 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:30:16.320277 sshd[9123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:30:16.359053 systemd-logind[1454]: New session 84 of user core. Apr 16 01:30:16.455311 systemd[1]: Started session-84.scope - Session 84 of User core. Apr 16 01:30:17.755365 kubelet[2548]: E0416 01:30:17.754257 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:30:20.716461 sshd[9123]: pam_unix(sshd:session): session closed for user core Apr 16 01:30:20.777146 systemd[1]: sshd@83-10.0.0.84:22-10.0.0.1:32790.service: Deactivated successfully. Apr 16 01:30:20.798716 systemd[1]: session-84.scope: Deactivated successfully. Apr 16 01:30:20.799359 systemd[1]: session-84.scope: Consumed 2.199s CPU time. Apr 16 01:30:20.817924 systemd-logind[1454]: Session 84 logged out. Waiting for processes to exit. Apr 16 01:30:20.862339 systemd[1]: Started sshd@84-10.0.0.84:22-10.0.0.1:59428.service - OpenSSH per-connection server daemon (10.0.0.1:59428). Apr 16 01:30:20.909714 systemd-logind[1454]: Removed session 84. 
Apr 16 01:30:21.111707 sshd[9147]: Accepted publickey for core from 10.0.0.1 port 59428 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:30:21.121847 sshd[9147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:30:21.218194 systemd-logind[1454]: New session 85 of user core. Apr 16 01:30:21.240766 systemd[1]: Started session-85.scope - Session 85 of User core. Apr 16 01:30:22.863661 sshd[9147]: pam_unix(sshd:session): session closed for user core Apr 16 01:30:22.906306 systemd[1]: sshd@84-10.0.0.84:22-10.0.0.1:59428.service: Deactivated successfully. Apr 16 01:30:22.922195 systemd[1]: session-85.scope: Deactivated successfully. Apr 16 01:30:22.922603 systemd[1]: session-85.scope: Consumed 1.031s CPU time. Apr 16 01:30:22.931501 systemd-logind[1454]: Session 85 logged out. Waiting for processes to exit. Apr 16 01:30:22.985567 systemd[1]: Started sshd@85-10.0.0.84:22-10.0.0.1:59440.service - OpenSSH per-connection server daemon (10.0.0.1:59440). Apr 16 01:30:22.994472 systemd-logind[1454]: Removed session 85. Apr 16 01:30:23.260580 sshd[9166]: Accepted publickey for core from 10.0.0.1 port 59440 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:30:23.263722 sshd[9166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:30:23.352943 systemd-logind[1454]: New session 86 of user core. Apr 16 01:30:23.392736 systemd[1]: Started session-86.scope - Session 86 of User core. Apr 16 01:30:24.021124 sshd[9166]: pam_unix(sshd:session): session closed for user core Apr 16 01:30:24.035138 systemd[1]: sshd@85-10.0.0.84:22-10.0.0.1:59440.service: Deactivated successfully. Apr 16 01:30:24.063008 systemd[1]: session-86.scope: Deactivated successfully. Apr 16 01:30:24.068365 systemd-logind[1454]: Session 86 logged out. Waiting for processes to exit. Apr 16 01:30:24.142605 systemd-logind[1454]: Removed session 86. 
Apr 16 01:30:26.732190 kubelet[2548]: E0416 01:30:26.732034 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:30:29.110428 systemd[1]: Started sshd@86-10.0.0.84:22-10.0.0.1:59450.service - OpenSSH per-connection server daemon (10.0.0.1:59450). Apr 16 01:30:29.358630 sshd[9180]: Accepted publickey for core from 10.0.0.1 port 59450 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:30:29.361851 sshd[9180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:30:29.390915 systemd-logind[1454]: New session 87 of user core. Apr 16 01:30:29.413958 systemd[1]: Started session-87.scope - Session 87 of User core. Apr 16 01:30:29.902431 sshd[9180]: pam_unix(sshd:session): session closed for user core Apr 16 01:30:29.911607 systemd[1]: sshd@86-10.0.0.84:22-10.0.0.1:59450.service: Deactivated successfully. Apr 16 01:30:29.942602 systemd[1]: session-87.scope: Deactivated successfully. Apr 16 01:30:29.958145 systemd-logind[1454]: Session 87 logged out. Waiting for processes to exit. Apr 16 01:30:29.969023 systemd-logind[1454]: Removed session 87. Apr 16 01:30:35.030663 systemd[1]: Started sshd@87-10.0.0.84:22-10.0.0.1:37634.service - OpenSSH per-connection server daemon (10.0.0.1:37634). Apr 16 01:30:35.323763 sshd[9232]: Accepted publickey for core from 10.0.0.1 port 37634 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:30:35.329662 sshd[9232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:30:35.409612 systemd-logind[1454]: New session 88 of user core. Apr 16 01:30:35.427787 systemd[1]: Started session-88.scope - Session 88 of User core. Apr 16 01:30:36.608358 sshd[9232]: pam_unix(sshd:session): session closed for user core Apr 16 01:30:36.629508 systemd[1]: sshd@87-10.0.0.84:22-10.0.0.1:37634.service: Deactivated successfully. 
Apr 16 01:30:36.682168 systemd[1]: session-88.scope: Deactivated successfully. Apr 16 01:30:36.687349 systemd-logind[1454]: Session 88 logged out. Waiting for processes to exit. Apr 16 01:30:36.707240 systemd-logind[1454]: Removed session 88. Apr 16 01:30:38.728249 kubelet[2548]: E0416 01:30:38.728154 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:30:41.782324 systemd[1]: Started sshd@88-10.0.0.84:22-10.0.0.1:45344.service - OpenSSH per-connection server daemon (10.0.0.1:45344). Apr 16 01:30:42.202554 sshd[9296]: Accepted publickey for core from 10.0.0.1 port 45344 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:30:42.206353 sshd[9296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:30:42.218572 systemd-logind[1454]: New session 89 of user core. Apr 16 01:30:42.231779 systemd[1]: Started session-89.scope - Session 89 of User core. Apr 16 01:30:42.980509 sshd[9296]: pam_unix(sshd:session): session closed for user core Apr 16 01:30:43.000504 systemd[1]: sshd@88-10.0.0.84:22-10.0.0.1:45344.service: Deactivated successfully. Apr 16 01:30:43.019808 systemd[1]: session-89.scope: Deactivated successfully. Apr 16 01:30:43.022221 systemd-logind[1454]: Session 89 logged out. Waiting for processes to exit. Apr 16 01:30:43.023911 systemd-logind[1454]: Removed session 89. Apr 16 01:30:47.733775 kubelet[2548]: E0416 01:30:47.733161 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:30:48.048437 systemd[1]: Started sshd@89-10.0.0.84:22-10.0.0.1:45352.service - OpenSSH per-connection server daemon (10.0.0.1:45352). 
Apr 16 01:30:48.318517 sshd[9324]: Accepted publickey for core from 10.0.0.1 port 45352 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:30:48.322105 sshd[9324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:30:48.451086 systemd-logind[1454]: New session 90 of user core. Apr 16 01:30:48.480516 systemd[1]: Started session-90.scope - Session 90 of User core. Apr 16 01:30:49.487889 sshd[9324]: pam_unix(sshd:session): session closed for user core Apr 16 01:30:49.498627 systemd[1]: sshd@89-10.0.0.84:22-10.0.0.1:45352.service: Deactivated successfully. Apr 16 01:30:49.517771 systemd[1]: session-90.scope: Deactivated successfully. Apr 16 01:30:49.526753 systemd-logind[1454]: Session 90 logged out. Waiting for processes to exit. Apr 16 01:30:49.528954 systemd-logind[1454]: Removed session 90. Apr 16 01:30:54.611564 systemd[1]: Started sshd@90-10.0.0.84:22-10.0.0.1:38128.service - OpenSSH per-connection server daemon (10.0.0.1:38128). Apr 16 01:30:54.924352 sshd[9340]: Accepted publickey for core from 10.0.0.1 port 38128 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:30:54.953282 sshd[9340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:30:55.060763 systemd-logind[1454]: New session 91 of user core. Apr 16 01:30:55.089011 systemd[1]: Started session-91.scope - Session 91 of User core. Apr 16 01:30:55.900091 sshd[9340]: pam_unix(sshd:session): session closed for user core Apr 16 01:30:55.936109 systemd[1]: sshd@90-10.0.0.84:22-10.0.0.1:38128.service: Deactivated successfully. Apr 16 01:30:56.018055 systemd[1]: session-91.scope: Deactivated successfully. Apr 16 01:30:56.022564 systemd-logind[1454]: Session 91 logged out. Waiting for processes to exit. Apr 16 01:30:56.047785 systemd-logind[1454]: Removed session 91. 
Apr 16 01:31:01.019636 systemd[1]: Started sshd@91-10.0.0.84:22-10.0.0.1:51498.service - OpenSSH per-connection server daemon (10.0.0.1:51498). Apr 16 01:31:01.167350 sshd[9356]: Accepted publickey for core from 10.0.0.1 port 51498 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:31:01.199643 sshd[9356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:31:01.222810 systemd-logind[1454]: New session 92 of user core. Apr 16 01:31:01.233037 systemd[1]: Started session-92.scope - Session 92 of User core. Apr 16 01:31:01.853074 sshd[9356]: pam_unix(sshd:session): session closed for user core Apr 16 01:31:01.862920 systemd[1]: sshd@91-10.0.0.84:22-10.0.0.1:51498.service: Deactivated successfully. Apr 16 01:31:01.924555 systemd[1]: session-92.scope: Deactivated successfully. Apr 16 01:31:01.944462 systemd-logind[1454]: Session 92 logged out. Waiting for processes to exit. Apr 16 01:31:01.954339 systemd-logind[1454]: Removed session 92. Apr 16 01:31:06.728787 kubelet[2548]: E0416 01:31:06.728417 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:31:06.931193 systemd[1]: Started sshd@92-10.0.0.84:22-10.0.0.1:51514.service - OpenSSH per-connection server daemon (10.0.0.1:51514). Apr 16 01:31:07.108268 sshd[9454]: Accepted publickey for core from 10.0.0.1 port 51514 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:31:07.112201 sshd[9454]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:31:07.132003 systemd-logind[1454]: New session 93 of user core. Apr 16 01:31:07.144895 systemd[1]: Started session-93.scope - Session 93 of User core. Apr 16 01:31:07.956925 sshd[9454]: pam_unix(sshd:session): session closed for user core Apr 16 01:31:07.977652 systemd[1]: sshd@92-10.0.0.84:22-10.0.0.1:51514.service: Deactivated successfully. 
Apr 16 01:31:08.009032 systemd[1]: session-93.scope: Deactivated successfully. Apr 16 01:31:08.011047 systemd-logind[1454]: Session 93 logged out. Waiting for processes to exit. Apr 16 01:31:08.014264 systemd-logind[1454]: Removed session 93. Apr 16 01:31:13.047556 systemd[1]: Started sshd@93-10.0.0.84:22-10.0.0.1:42570.service - OpenSSH per-connection server daemon (10.0.0.1:42570). Apr 16 01:31:13.276390 sshd[9495]: Accepted publickey for core from 10.0.0.1 port 42570 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:31:13.280946 sshd[9495]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:31:13.320777 systemd-logind[1454]: New session 94 of user core. Apr 16 01:31:13.368063 systemd[1]: Started session-94.scope - Session 94 of User core. Apr 16 01:31:13.923150 sshd[9495]: pam_unix(sshd:session): session closed for user core Apr 16 01:31:13.933590 systemd[1]: sshd@93-10.0.0.84:22-10.0.0.1:42570.service: Deactivated successfully. Apr 16 01:31:13.939892 systemd[1]: session-94.scope: Deactivated successfully. Apr 16 01:31:13.941418 systemd-logind[1454]: Session 94 logged out. Waiting for processes to exit. Apr 16 01:31:13.943457 systemd-logind[1454]: Removed session 94. Apr 16 01:31:19.020313 systemd[1]: Started sshd@94-10.0.0.84:22-10.0.0.1:42584.service - OpenSSH per-connection server daemon (10.0.0.1:42584). Apr 16 01:31:19.359468 sshd[9510]: Accepted publickey for core from 10.0.0.1 port 42584 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:31:19.419289 sshd[9510]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:31:19.518663 systemd-logind[1454]: New session 95 of user core. Apr 16 01:31:19.547858 systemd[1]: Started session-95.scope - Session 95 of User core. 
Apr 16 01:31:20.634093 sshd[9510]: pam_unix(sshd:session): session closed for user core Apr 16 01:31:20.743129 systemd[1]: sshd@94-10.0.0.84:22-10.0.0.1:42584.service: Deactivated successfully. Apr 16 01:31:20.764944 systemd[1]: session-95.scope: Deactivated successfully. Apr 16 01:31:20.767259 systemd-logind[1454]: Session 95 logged out. Waiting for processes to exit. Apr 16 01:31:20.825507 systemd-logind[1454]: Removed session 95. Apr 16 01:31:22.732170 kubelet[2548]: E0416 01:31:22.731717 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:31:25.839619 systemd[1]: Started sshd@95-10.0.0.84:22-10.0.0.1:48900.service - OpenSSH per-connection server daemon (10.0.0.1:48900). Apr 16 01:31:26.262091 sshd[9524]: Accepted publickey for core from 10.0.0.1 port 48900 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:31:26.295208 sshd[9524]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:31:26.446191 systemd-logind[1454]: New session 96 of user core. Apr 16 01:31:26.482012 systemd[1]: Started session-96.scope - Session 96 of User core. Apr 16 01:31:27.731109 kubelet[2548]: E0416 01:31:27.731065 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:31:28.130018 sshd[9524]: pam_unix(sshd:session): session closed for user core Apr 16 01:31:28.240973 systemd[1]: sshd@95-10.0.0.84:22-10.0.0.1:48900.service: Deactivated successfully. Apr 16 01:31:28.276213 systemd[1]: session-96.scope: Deactivated successfully. Apr 16 01:31:28.293454 systemd-logind[1454]: Session 96 logged out. Waiting for processes to exit. Apr 16 01:31:28.330952 systemd-logind[1454]: Removed session 96. 
Apr 16 01:31:31.730648 kubelet[2548]: E0416 01:31:31.730428 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:31:33.236255 systemd[1]: Started sshd@96-10.0.0.84:22-10.0.0.1:35404.service - OpenSSH per-connection server daemon (10.0.0.1:35404). Apr 16 01:31:33.654232 sshd[9539]: Accepted publickey for core from 10.0.0.1 port 35404 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:31:33.667436 sshd[9539]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:31:33.755322 systemd-logind[1454]: New session 97 of user core. Apr 16 01:31:33.840951 systemd[1]: Started session-97.scope - Session 97 of User core. Apr 16 01:31:33.884034 systemd[1]: run-containerd-runc-k8s.io-407adcfeb85f2ea58cc3cb0afdb7fe6b6fc4ac3074e5ae01309abf26a5edf351-runc.gZp6mK.mount: Deactivated successfully. Apr 16 01:31:34.892295 sshd[9539]: pam_unix(sshd:session): session closed for user core Apr 16 01:31:34.950956 systemd[1]: sshd@96-10.0.0.84:22-10.0.0.1:35404.service: Deactivated successfully. Apr 16 01:31:35.009150 systemd[1]: session-97.scope: Deactivated successfully. Apr 16 01:31:35.017395 systemd-logind[1454]: Session 97 logged out. Waiting for processes to exit. Apr 16 01:31:35.022420 systemd-logind[1454]: Removed session 97. Apr 16 01:31:36.733616 kubelet[2548]: E0416 01:31:36.733403 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:31:40.038659 systemd[1]: Started sshd@97-10.0.0.84:22-10.0.0.1:58280.service - OpenSSH per-connection server daemon (10.0.0.1:58280). 
Apr 16 01:31:40.342043 sshd[9597]: Accepted publickey for core from 10.0.0.1 port 58280 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:31:40.375779 sshd[9597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:31:40.436254 systemd-logind[1454]: New session 98 of user core. Apr 16 01:31:40.530703 systemd[1]: Started session-98.scope - Session 98 of User core. Apr 16 01:31:41.720930 sshd[9597]: pam_unix(sshd:session): session closed for user core Apr 16 01:31:41.738072 systemd[1]: sshd@97-10.0.0.84:22-10.0.0.1:58280.service: Deactivated successfully. Apr 16 01:31:41.760360 systemd[1]: session-98.scope: Deactivated successfully. Apr 16 01:31:41.807773 systemd-logind[1454]: Session 98 logged out. Waiting for processes to exit. Apr 16 01:31:41.812069 systemd-logind[1454]: Removed session 98. Apr 16 01:31:46.742902 systemd[1]: Started sshd@98-10.0.0.84:22-10.0.0.1:58296.service - OpenSSH per-connection server daemon (10.0.0.1:58296). Apr 16 01:31:46.909323 sshd[9634]: Accepted publickey for core from 10.0.0.1 port 58296 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:31:46.911449 sshd[9634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:31:47.015258 systemd-logind[1454]: New session 99 of user core. Apr 16 01:31:47.066423 systemd[1]: Started session-99.scope - Session 99 of User core. Apr 16 01:31:47.749877 kubelet[2548]: E0416 01:31:47.749303 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:31:48.189999 sshd[9634]: pam_unix(sshd:session): session closed for user core Apr 16 01:31:48.198225 systemd[1]: sshd@98-10.0.0.84:22-10.0.0.1:58296.service: Deactivated successfully. Apr 16 01:31:48.208267 systemd[1]: session-99.scope: Deactivated successfully. 
Apr 16 01:31:48.216437 systemd-logind[1454]: Session 99 logged out. Waiting for processes to exit. Apr 16 01:31:48.221518 systemd-logind[1454]: Removed session 99. Apr 16 01:31:53.330255 systemd[1]: Started sshd@99-10.0.0.84:22-10.0.0.1:36686.service - OpenSSH per-connection server daemon (10.0.0.1:36686). Apr 16 01:31:53.857051 sshd[9659]: Accepted publickey for core from 10.0.0.1 port 36686 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:31:53.936383 sshd[9659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:31:53.957057 systemd-logind[1454]: New session 100 of user core. Apr 16 01:31:53.976403 systemd[1]: Started session-100.scope - Session 100 of User core. Apr 16 01:31:54.578474 sshd[9659]: pam_unix(sshd:session): session closed for user core Apr 16 01:31:54.592584 systemd[1]: sshd@99-10.0.0.84:22-10.0.0.1:36686.service: Deactivated successfully. Apr 16 01:31:54.596849 systemd[1]: session-100.scope: Deactivated successfully. Apr 16 01:31:54.598219 systemd-logind[1454]: Session 100 logged out. Waiting for processes to exit. Apr 16 01:31:54.600517 systemd-logind[1454]: Removed session 100. Apr 16 01:31:59.692729 systemd[1]: Started sshd@100-10.0.0.84:22-10.0.0.1:33920.service - OpenSSH per-connection server daemon (10.0.0.1:33920). Apr 16 01:31:59.888241 sshd[9676]: Accepted publickey for core from 10.0.0.1 port 33920 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:31:59.894398 sshd[9676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:31:59.925242 systemd-logind[1454]: New session 101 of user core. Apr 16 01:31:59.942114 systemd[1]: Started session-101.scope - Session 101 of User core. Apr 16 01:32:00.791634 sshd[9676]: pam_unix(sshd:session): session closed for user core Apr 16 01:32:00.802543 systemd[1]: sshd@100-10.0.0.84:22-10.0.0.1:33920.service: Deactivated successfully. 
Apr 16 01:32:00.820876 systemd[1]: session-101.scope: Deactivated successfully. Apr 16 01:32:00.822301 systemd-logind[1454]: Session 101 logged out. Waiting for processes to exit. Apr 16 01:32:00.823627 systemd-logind[1454]: Removed session 101. Apr 16 01:32:02.301114 systemd[1]: run-containerd-runc-k8s.io-407adcfeb85f2ea58cc3cb0afdb7fe6b6fc4ac3074e5ae01309abf26a5edf351-runc.FZmf5x.mount: Deactivated successfully. Apr 16 01:32:05.891431 systemd[1]: Started sshd@101-10.0.0.84:22-10.0.0.1:33934.service - OpenSSH per-connection server daemon (10.0.0.1:33934). Apr 16 01:32:06.175601 sshd[9784]: Accepted publickey for core from 10.0.0.1 port 33934 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:32:06.212524 sshd[9784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:32:06.324212 systemd-logind[1454]: New session 102 of user core. Apr 16 01:32:06.359968 systemd[1]: Started session-102.scope - Session 102 of User core. Apr 16 01:32:07.517612 sshd[9784]: pam_unix(sshd:session): session closed for user core Apr 16 01:32:07.556403 systemd[1]: sshd@101-10.0.0.84:22-10.0.0.1:33934.service: Deactivated successfully. Apr 16 01:32:07.623847 systemd[1]: session-102.scope: Deactivated successfully. Apr 16 01:32:07.632567 systemd-logind[1454]: Session 102 logged out. Waiting for processes to exit. Apr 16 01:32:07.644127 systemd-logind[1454]: Removed session 102. 
Apr 16 01:32:08.765336 kubelet[2548]: E0416 01:32:08.765278 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:32:11.740486 kubelet[2548]: E0416 01:32:11.738494 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:32:12.609916 systemd[1]: Started sshd@102-10.0.0.84:22-10.0.0.1:56090.service - OpenSSH per-connection server daemon (10.0.0.1:56090). Apr 16 01:32:12.868363 sshd[9837]: Accepted publickey for core from 10.0.0.1 port 56090 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:32:12.970308 sshd[9837]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:32:13.023343 systemd-logind[1454]: New session 103 of user core. Apr 16 01:32:13.041995 systemd[1]: Started session-103.scope - Session 103 of User core. Apr 16 01:32:14.466929 sshd[9837]: pam_unix(sshd:session): session closed for user core Apr 16 01:32:14.518449 systemd[1]: sshd@102-10.0.0.84:22-10.0.0.1:56090.service: Deactivated successfully. Apr 16 01:32:14.552474 systemd[1]: session-103.scope: Deactivated successfully. Apr 16 01:32:14.647205 systemd-logind[1454]: Session 103 logged out. Waiting for processes to exit. Apr 16 01:32:14.667901 systemd-logind[1454]: Removed session 103. Apr 16 01:32:19.552444 systemd[1]: Started sshd@103-10.0.0.84:22-10.0.0.1:50730.service - OpenSSH per-connection server daemon (10.0.0.1:50730). Apr 16 01:32:19.927208 sshd[9867]: Accepted publickey for core from 10.0.0.1 port 50730 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:32:19.935729 sshd[9867]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:32:20.007214 systemd-logind[1454]: New session 104 of user core. 
Apr 16 01:32:20.053267 systemd[1]: Started session-104.scope - Session 104 of User core. Apr 16 01:32:21.183905 sshd[9867]: pam_unix(sshd:session): session closed for user core Apr 16 01:32:21.200949 systemd[1]: sshd@103-10.0.0.84:22-10.0.0.1:50730.service: Deactivated successfully. Apr 16 01:32:21.218117 systemd[1]: session-104.scope: Deactivated successfully. Apr 16 01:32:21.235371 systemd-logind[1454]: Session 104 logged out. Waiting for processes to exit. Apr 16 01:32:21.240143 systemd-logind[1454]: Removed session 104. Apr 16 01:32:26.317415 systemd[1]: Started sshd@104-10.0.0.84:22-10.0.0.1:50742.service - OpenSSH per-connection server daemon (10.0.0.1:50742). Apr 16 01:32:26.738819 sshd[9881]: Accepted publickey for core from 10.0.0.1 port 50742 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:32:26.765362 sshd[9881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:32:26.843858 systemd-logind[1454]: New session 105 of user core. Apr 16 01:32:26.896411 systemd[1]: Started session-105.scope - Session 105 of User core. Apr 16 01:32:27.983663 sshd[9881]: pam_unix(sshd:session): session closed for user core Apr 16 01:32:28.005037 systemd[1]: sshd@104-10.0.0.84:22-10.0.0.1:50742.service: Deactivated successfully. Apr 16 01:32:28.059773 systemd[1]: session-105.scope: Deactivated successfully. Apr 16 01:32:28.061200 systemd-logind[1454]: Session 105 logged out. Waiting for processes to exit. Apr 16 01:32:28.117579 systemd-logind[1454]: Removed session 105. Apr 16 01:32:32.764268 kubelet[2548]: E0416 01:32:32.764097 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:32:33.127275 systemd[1]: Started sshd@105-10.0.0.84:22-10.0.0.1:40748.service - OpenSSH per-connection server daemon (10.0.0.1:40748). 
Apr 16 01:32:33.544481 sshd[9897]: Accepted publickey for core from 10.0.0.1 port 40748 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:32:33.588160 sshd[9897]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:32:33.672215 systemd-logind[1454]: New session 106 of user core. Apr 16 01:32:33.735775 systemd[1]: Started session-106.scope - Session 106 of User core. Apr 16 01:32:35.139795 sshd[9897]: pam_unix(sshd:session): session closed for user core Apr 16 01:32:35.231159 systemd[1]: sshd@105-10.0.0.84:22-10.0.0.1:40748.service: Deactivated successfully. Apr 16 01:32:35.267383 systemd[1]: session-106.scope: Deactivated successfully. Apr 16 01:32:35.302022 systemd-logind[1454]: Session 106 logged out. Waiting for processes to exit. Apr 16 01:32:35.352379 systemd-logind[1454]: Removed session 106. Apr 16 01:32:40.191208 systemd[1]: Started sshd@106-10.0.0.84:22-10.0.0.1:54616.service - OpenSSH per-connection server daemon (10.0.0.1:54616). Apr 16 01:32:40.438290 sshd[9957]: Accepted publickey for core from 10.0.0.1 port 54616 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:32:40.494663 sshd[9957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:32:40.725086 systemd-logind[1454]: New session 107 of user core. Apr 16 01:32:40.735184 systemd[1]: Started session-107.scope - Session 107 of User core. Apr 16 01:32:41.240387 systemd[1]: run-containerd-runc-k8s.io-2fb7a90a7d3892c694c77b2c85fd0450372c54fa7f4c56bffedc6bca89c85d93-runc.4gl9aW.mount: Deactivated successfully. Apr 16 01:32:41.907603 sshd[9957]: pam_unix(sshd:session): session closed for user core Apr 16 01:32:41.936913 systemd[1]: sshd@106-10.0.0.84:22-10.0.0.1:54616.service: Deactivated successfully. Apr 16 01:32:42.025415 systemd[1]: session-107.scope: Deactivated successfully. Apr 16 01:32:42.051308 systemd-logind[1454]: Session 107 logged out. Waiting for processes to exit. 
Apr 16 01:32:42.066321 systemd-logind[1454]: Removed session 107. Apr 16 01:32:44.738528 kubelet[2548]: E0416 01:32:44.737114 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:32:47.012811 systemd[1]: Started sshd@107-10.0.0.84:22-10.0.0.1:54618.service - OpenSSH per-connection server daemon (10.0.0.1:54618). Apr 16 01:32:47.268402 sshd[9993]: Accepted publickey for core from 10.0.0.1 port 54618 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:32:47.329379 sshd[9993]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:32:47.415510 systemd-logind[1454]: New session 108 of user core. Apr 16 01:32:47.471173 systemd[1]: Started session-108.scope - Session 108 of User core. Apr 16 01:32:48.539732 sshd[9993]: pam_unix(sshd:session): session closed for user core Apr 16 01:32:48.567267 systemd[1]: sshd@107-10.0.0.84:22-10.0.0.1:54618.service: Deactivated successfully. Apr 16 01:32:48.609770 systemd[1]: session-108.scope: Deactivated successfully. Apr 16 01:32:48.611610 systemd-logind[1454]: Session 108 logged out. Waiting for processes to exit. Apr 16 01:32:48.616102 systemd-logind[1454]: Removed session 108. Apr 16 01:32:52.744177 kubelet[2548]: E0416 01:32:52.741839 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:32:53.707902 systemd[1]: Started sshd@108-10.0.0.84:22-10.0.0.1:33578.service - OpenSSH per-connection server daemon (10.0.0.1:33578). 
Apr 16 01:32:54.110419 sshd[10007]: Accepted publickey for core from 10.0.0.1 port 33578 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:32:54.158075 sshd[10007]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:32:54.260469 systemd-logind[1454]: New session 109 of user core. Apr 16 01:32:54.317145 systemd[1]: Started session-109.scope - Session 109 of User core. Apr 16 01:32:55.600582 sshd[10007]: pam_unix(sshd:session): session closed for user core Apr 16 01:32:55.634549 systemd[1]: sshd@108-10.0.0.84:22-10.0.0.1:33578.service: Deactivated successfully. Apr 16 01:32:55.654380 systemd[1]: session-109.scope: Deactivated successfully. Apr 16 01:32:55.656866 systemd-logind[1454]: Session 109 logged out. Waiting for processes to exit. Apr 16 01:32:55.668337 systemd-logind[1454]: Removed session 109. Apr 16 01:32:56.760647 kubelet[2548]: E0416 01:32:56.758914 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:32:58.728606 kubelet[2548]: E0416 01:32:58.727926 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:33:00.750261 systemd[1]: Started sshd@109-10.0.0.84:22-10.0.0.1:57644.service - OpenSSH per-connection server daemon (10.0.0.1:57644). Apr 16 01:33:01.125778 sshd[10024]: Accepted publickey for core from 10.0.0.1 port 57644 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:33:01.141239 sshd[10024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:33:01.180777 systemd-logind[1454]: New session 110 of user core. Apr 16 01:33:01.198428 systemd[1]: Started session-110.scope - Session 110 of User core. 
Apr 16 01:33:02.339383 sshd[10024]: pam_unix(sshd:session): session closed for user core Apr 16 01:33:02.433083 systemd[1]: sshd@109-10.0.0.84:22-10.0.0.1:57644.service: Deactivated successfully. Apr 16 01:33:02.441554 systemd[1]: session-110.scope: Deactivated successfully. Apr 16 01:33:02.463299 systemd-logind[1454]: Session 110 logged out. Waiting for processes to exit. Apr 16 01:33:02.524215 systemd-logind[1454]: Removed session 110. Apr 16 01:33:07.469824 systemd[1]: Started sshd@110-10.0.0.84:22-10.0.0.1:57646.service - OpenSSH per-connection server daemon (10.0.0.1:57646). Apr 16 01:33:07.836713 sshd[10124]: Accepted publickey for core from 10.0.0.1 port 57646 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:33:07.852436 sshd[10124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:33:07.941620 systemd-logind[1454]: New session 111 of user core. Apr 16 01:33:08.029808 systemd[1]: Started session-111.scope - Session 111 of User core. Apr 16 01:33:09.368117 sshd[10124]: pam_unix(sshd:session): session closed for user core Apr 16 01:33:09.398106 systemd[1]: sshd@110-10.0.0.84:22-10.0.0.1:57646.service: Deactivated successfully. Apr 16 01:33:09.435187 systemd[1]: session-111.scope: Deactivated successfully. Apr 16 01:33:09.437498 systemd-logind[1454]: Session 111 logged out. Waiting for processes to exit. Apr 16 01:33:09.441502 systemd-logind[1454]: Removed session 111. Apr 16 01:33:11.746118 kubelet[2548]: E0416 01:33:11.743266 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:33:14.535712 systemd[1]: Started sshd@111-10.0.0.84:22-10.0.0.1:48318.service - OpenSSH per-connection server daemon (10.0.0.1:48318). 
Apr 16 01:33:14.822806 sshd[10163]: Accepted publickey for core from 10.0.0.1 port 48318 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:33:14.834353 sshd[10163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:33:14.944126 systemd-logind[1454]: New session 112 of user core. Apr 16 01:33:14.966957 systemd[1]: Started session-112.scope - Session 112 of User core. Apr 16 01:33:16.119013 sshd[10163]: pam_unix(sshd:session): session closed for user core Apr 16 01:33:16.128932 systemd[1]: sshd@111-10.0.0.84:22-10.0.0.1:48318.service: Deactivated successfully. Apr 16 01:33:16.160641 systemd[1]: session-112.scope: Deactivated successfully. Apr 16 01:33:16.170012 systemd-logind[1454]: Session 112 logged out. Waiting for processes to exit. Apr 16 01:33:16.227938 systemd-logind[1454]: Removed session 112. Apr 16 01:33:21.212308 systemd[1]: Started sshd@112-10.0.0.84:22-10.0.0.1:50846.service - OpenSSH per-connection server daemon (10.0.0.1:50846). Apr 16 01:33:21.509277 sshd[10177]: Accepted publickey for core from 10.0.0.1 port 50846 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:33:21.519997 sshd[10177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:33:21.605441 systemd-logind[1454]: New session 113 of user core. Apr 16 01:33:21.626898 systemd[1]: Started session-113.scope - Session 113 of User core. Apr 16 01:33:22.522013 sshd[10177]: pam_unix(sshd:session): session closed for user core Apr 16 01:33:22.569049 systemd[1]: sshd@112-10.0.0.84:22-10.0.0.1:50846.service: Deactivated successfully. Apr 16 01:33:22.572660 systemd[1]: session-113.scope: Deactivated successfully. Apr 16 01:33:22.576139 systemd-logind[1454]: Session 113 logged out. Waiting for processes to exit. Apr 16 01:33:22.577496 systemd-logind[1454]: Removed session 113. 
Apr 16 01:33:24.733820 kubelet[2548]: E0416 01:33:24.733395 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:33:27.707880 systemd[1]: Started sshd@113-10.0.0.84:22-10.0.0.1:50854.service - OpenSSH per-connection server daemon (10.0.0.1:50854). Apr 16 01:33:27.847185 sshd[10191]: Accepted publickey for core from 10.0.0.1 port 50854 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:33:27.922595 sshd[10191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:33:27.948550 systemd-logind[1454]: New session 114 of user core. Apr 16 01:33:27.957000 systemd[1]: Started session-114.scope - Session 114 of User core. Apr 16 01:33:28.513120 sshd[10191]: pam_unix(sshd:session): session closed for user core Apr 16 01:33:28.537630 systemd[1]: sshd@113-10.0.0.84:22-10.0.0.1:50854.service: Deactivated successfully. Apr 16 01:33:28.554814 systemd[1]: session-114.scope: Deactivated successfully. Apr 16 01:33:28.558039 systemd-logind[1454]: Session 114 logged out. Waiting for processes to exit. Apr 16 01:33:28.648281 systemd-logind[1454]: Removed session 114. Apr 16 01:33:33.638324 systemd[1]: Started sshd@114-10.0.0.84:22-10.0.0.1:46452.service - OpenSSH per-connection server daemon (10.0.0.1:46452). Apr 16 01:33:33.867461 sshd[10205]: Accepted publickey for core from 10.0.0.1 port 46452 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:33:33.936564 sshd[10205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:33:34.036448 systemd-logind[1454]: New session 115 of user core. Apr 16 01:33:34.049899 systemd[1]: Started session-115.scope - Session 115 of User core. 
Apr 16 01:33:34.752472 sshd[10205]: pam_unix(sshd:session): session closed for user core Apr 16 01:33:34.832358 systemd[1]: sshd@114-10.0.0.84:22-10.0.0.1:46452.service: Deactivated successfully. Apr 16 01:33:34.843519 systemd[1]: session-115.scope: Deactivated successfully. Apr 16 01:33:34.854449 systemd-logind[1454]: Session 115 logged out. Waiting for processes to exit. Apr 16 01:33:34.893045 systemd-logind[1454]: Removed session 115. Apr 16 01:33:37.733327 kubelet[2548]: E0416 01:33:37.732331 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:33:39.831827 systemd[1]: Started sshd@115-10.0.0.84:22-10.0.0.1:51120.service - OpenSSH per-connection server daemon (10.0.0.1:51120). Apr 16 01:33:40.062214 sshd[10281]: Accepted publickey for core from 10.0.0.1 port 51120 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:33:40.127507 sshd[10281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:33:40.166118 systemd-logind[1454]: New session 116 of user core. Apr 16 01:33:40.175771 systemd[1]: Started session-116.scope - Session 116 of User core. Apr 16 01:33:40.964179 sshd[10281]: pam_unix(sshd:session): session closed for user core Apr 16 01:33:41.010208 systemd[1]: sshd@115-10.0.0.84:22-10.0.0.1:51120.service: Deactivated successfully. Apr 16 01:33:41.015535 systemd[1]: session-116.scope: Deactivated successfully. Apr 16 01:33:41.052346 systemd[1]: run-containerd-runc-k8s.io-2fb7a90a7d3892c694c77b2c85fd0450372c54fa7f4c56bffedc6bca89c85d93-runc.QKweBr.mount: Deactivated successfully. Apr 16 01:33:41.144069 systemd-logind[1454]: Session 116 logged out. Waiting for processes to exit. Apr 16 01:33:41.158535 systemd-logind[1454]: Removed session 116. 
Apr 16 01:33:46.037153 systemd[1]: Started sshd@116-10.0.0.84:22-10.0.0.1:51134.service - OpenSSH per-connection server daemon (10.0.0.1:51134). Apr 16 01:33:46.359267 sshd[10318]: Accepted publickey for core from 10.0.0.1 port 51134 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:33:46.393983 sshd[10318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:33:46.479424 systemd-logind[1454]: New session 117 of user core. Apr 16 01:33:46.501221 systemd[1]: Started session-117.scope - Session 117 of User core. Apr 16 01:33:46.753320 kubelet[2548]: E0416 01:33:46.752209 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 01:33:47.091256 sshd[10318]: pam_unix(sshd:session): session closed for user core Apr 16 01:33:47.101518 systemd[1]: sshd@116-10.0.0.84:22-10.0.0.1:51134.service: Deactivated successfully. Apr 16 01:33:47.118089 systemd[1]: session-117.scope: Deactivated successfully. Apr 16 01:33:47.120138 systemd-logind[1454]: Session 117 logged out. Waiting for processes to exit. Apr 16 01:33:47.121587 systemd-logind[1454]: Removed session 117. Apr 16 01:33:52.200779 systemd[1]: Started sshd@117-10.0.0.84:22-10.0.0.1:60230.service - OpenSSH per-connection server daemon (10.0.0.1:60230). Apr 16 01:33:52.242783 systemd[1]: Starting systemd-tmpfiles-clean.service - Cleanup of Temporary Directories... Apr 16 01:33:52.532623 sshd[10348]: Accepted publickey for core from 10.0.0.1 port 60230 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4 Apr 16 01:33:52.546490 sshd[10348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 01:33:52.607449 systemd-logind[1454]: New session 118 of user core. Apr 16 01:33:52.616723 systemd[1]: Started session-118.scope - Session 118 of User core. 
Apr 16 01:33:52.616924 systemd-tmpfiles[10349]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 16 01:33:52.620992 systemd-tmpfiles[10349]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 16 01:33:52.632058 systemd-tmpfiles[10349]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 16 01:33:52.637285 systemd-tmpfiles[10349]: ACLs are not supported, ignoring.
Apr 16 01:33:52.637405 systemd-tmpfiles[10349]: ACLs are not supported, ignoring.
Apr 16 01:33:52.710892 systemd-tmpfiles[10349]: Detected autofs mount point /boot during canonicalization of boot.
Apr 16 01:33:52.713178 systemd-tmpfiles[10349]: Skipping /boot
Apr 16 01:33:52.835199 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Apr 16 01:33:52.838392 systemd[1]: Finished systemd-tmpfiles-clean.service - Cleanup of Temporary Directories.
Apr 16 01:33:54.258720 sshd[10348]: pam_unix(sshd:session): session closed for user core
Apr 16 01:33:54.337434 systemd[1]: sshd@117-10.0.0.84:22-10.0.0.1:60230.service: Deactivated successfully.
Apr 16 01:33:54.359325 systemd[1]: session-118.scope: Deactivated successfully.
Apr 16 01:33:54.418896 systemd-logind[1454]: Session 118 logged out. Waiting for processes to exit.
Apr 16 01:33:54.441183 systemd-logind[1454]: Removed session 118.
Apr 16 01:33:58.748945 kubelet[2548]: E0416 01:33:58.745890 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 01:33:59.300317 systemd[1]: Started sshd@118-10.0.0.84:22-10.0.0.1:60244.service - OpenSSH per-connection server daemon (10.0.0.1:60244).
Apr 16 01:33:59.548772 sshd[10367]: Accepted publickey for core from 10.0.0.1 port 60244 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4
Apr 16 01:33:59.558075 sshd[10367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 01:33:59.698544 systemd-logind[1454]: New session 119 of user core.
Apr 16 01:33:59.766275 systemd[1]: Started session-119.scope - Session 119 of User core.
Apr 16 01:34:00.942084 sshd[10367]: pam_unix(sshd:session): session closed for user core
Apr 16 01:34:01.030322 systemd[1]: sshd@118-10.0.0.84:22-10.0.0.1:60244.service: Deactivated successfully.
Apr 16 01:34:01.070616 systemd[1]: session-119.scope: Deactivated successfully.
Apr 16 01:34:01.148156 systemd-logind[1454]: Session 119 logged out. Waiting for processes to exit.
Apr 16 01:34:01.157202 systemd-logind[1454]: Removed session 119.
Apr 16 01:34:06.093264 systemd[1]: Started sshd@119-10.0.0.84:22-10.0.0.1:58244.service - OpenSSH per-connection server daemon (10.0.0.1:58244).
Apr 16 01:34:06.799408 sshd[10469]: Accepted publickey for core from 10.0.0.1 port 58244 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4
Apr 16 01:34:06.824372 sshd[10469]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 01:34:06.967102 systemd-logind[1454]: New session 120 of user core.
Apr 16 01:34:07.038869 systemd[1]: Started session-120.scope - Session 120 of User core.
Apr 16 01:34:08.753056 kubelet[2548]: E0416 01:34:08.752576 2548 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 01:34:09.726231 sshd[10469]: pam_unix(sshd:session): session closed for user core
Apr 16 01:34:09.839912 systemd[1]: sshd@119-10.0.0.84:22-10.0.0.1:58244.service: Deactivated successfully.
Apr 16 01:34:09.876257 systemd[1]: session-120.scope: Deactivated successfully.
Apr 16 01:34:09.877967 systemd[1]: session-120.scope: Consumed 1.353s CPU time.
Apr 16 01:34:09.887998 systemd-logind[1454]: Session 120 logged out. Waiting for processes to exit.
Apr 16 01:34:09.903510 systemd-logind[1454]: Removed session 120.
Apr 16 01:34:14.864319 systemd[1]: Started sshd@120-10.0.0.84:22-10.0.0.1:55562.service - OpenSSH per-connection server daemon (10.0.0.1:55562).
Apr 16 01:34:15.145638 sshd[10506]: Accepted publickey for core from 10.0.0.1 port 55562 ssh2: RSA SHA256:SAlBXtH/8MHoG+sB9/uUf/4aPcwZq+D2Et7nJ5P/gD4
Apr 16 01:34:15.160551 sshd[10506]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 01:34:15.226235 systemd-logind[1454]: New session 121 of user core.
Apr 16 01:34:15.244665 systemd[1]: Started session-121.scope - Session 121 of User core.
Apr 16 01:34:16.612996 sshd[10506]: pam_unix(sshd:session): session closed for user core
Apr 16 01:34:16.617439 systemd[1]: sshd@120-10.0.0.84:22-10.0.0.1:55562.service: Deactivated successfully.
Apr 16 01:34:16.647466 systemd[1]: session-121.scope: Deactivated successfully.
Apr 16 01:34:16.701888 systemd-logind[1454]: Session 121 logged out. Waiting for processes to exit.
Apr 16 01:34:16.710508 systemd-logind[1454]: Removed session 121.