May 8 00:39:19.907894 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed May 7 22:54:21 -00 2025
May 8 00:39:19.907916 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=86cfbfcc89a9c46f6cbba5bdb3509d1ce1367f0c93b0b0e4c6bdcad1a2064c90
May 8 00:39:19.907927 kernel: BIOS-provided physical RAM map:
May 8 00:39:19.907933 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
May 8 00:39:19.907939 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
May 8 00:39:19.907945 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
May 8 00:39:19.907991 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
May 8 00:39:19.907999 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
May 8 00:39:19.908007 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
May 8 00:39:19.908015 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
May 8 00:39:19.908028 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
May 8 00:39:19.908036 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
May 8 00:39:19.908048 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
May 8 00:39:19.908054 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
May 8 00:39:19.908062 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
May 8 00:39:19.908068 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
May 8 00:39:19.908078 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
May 8 00:39:19.908085 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
May 8 00:39:19.908092 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
May 8 00:39:19.908098 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
May 8 00:39:19.908105 kernel: NX (Execute Disable) protection: active
May 8 00:39:19.908111 kernel: APIC: Static calls initialized
May 8 00:39:19.908118 kernel: efi: EFI v2.7 by EDK II
May 8 00:39:19.908125 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198
May 8 00:39:19.908131 kernel: SMBIOS 2.8 present.
May 8 00:39:19.908138 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
May 8 00:39:19.908144 kernel: Hypervisor detected: KVM
May 8 00:39:19.908154 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 8 00:39:19.908160 kernel: kvm-clock: using sched offset of 5660536026 cycles
May 8 00:39:19.908167 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 8 00:39:19.908174 kernel: tsc: Detected 2794.748 MHz processor
May 8 00:39:19.908181 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 8 00:39:19.908189 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 8 00:39:19.908195 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
May 8 00:39:19.908202 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
May 8 00:39:19.908209 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 8 00:39:19.908219 kernel: Using GB pages for direct mapping
May 8 00:39:19.908225 kernel: Secure boot disabled
May 8 00:39:19.908232 kernel: ACPI: Early table checksum verification disabled
May 8 00:39:19.908239 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
May 8 00:39:19.908250 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
May 8 00:39:19.908257 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:39:19.908264 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:39:19.908274 kernel: ACPI: FACS 0x000000009CBDD000 000040
May 8 00:39:19.908281 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:39:19.908290 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:39:19.908298 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:39:19.908305 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:39:19.908312 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
May 8 00:39:19.908319 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
May 8 00:39:19.908329 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
May 8 00:39:19.908336 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
May 8 00:39:19.908343 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
May 8 00:39:19.908349 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
May 8 00:39:19.908356 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
May 8 00:39:19.908363 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
May 8 00:39:19.908370 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
May 8 00:39:19.908377 kernel: No NUMA configuration found
May 8 00:39:19.908386 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
May 8 00:39:19.908395 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
May 8 00:39:19.908402 kernel: Zone ranges:
May 8 00:39:19.908409 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 8 00:39:19.908416 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
May 8 00:39:19.908423 kernel: Normal empty
May 8 00:39:19.908430 kernel: Movable zone start for each node
May 8 00:39:19.908437 kernel: Early memory node ranges
May 8 00:39:19.908444 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
May 8 00:39:19.908451 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
May 8 00:39:19.908458 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
May 8 00:39:19.908467 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
May 8 00:39:19.908474 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
May 8 00:39:19.908481 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
May 8 00:39:19.908488 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
May 8 00:39:19.908495 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 8 00:39:19.908502 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
May 8 00:39:19.908509 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
May 8 00:39:19.908516 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 8 00:39:19.908522 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
May 8 00:39:19.908532 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
May 8 00:39:19.908539 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
May 8 00:39:19.908546 kernel: ACPI: PM-Timer IO Port: 0x608
May 8 00:39:19.908553 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 8 00:39:19.908560 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 8 00:39:19.908567 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 8 00:39:19.908574 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 8 00:39:19.908581 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 8 00:39:19.908588 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 8 00:39:19.908598 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 8 00:39:19.908605 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 8 00:39:19.908611 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 8 00:39:19.908618 kernel: TSC deadline timer available
May 8 00:39:19.908625 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
May 8 00:39:19.908632 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 8 00:39:19.908639 kernel: kvm-guest: KVM setup pv remote TLB flush
May 8 00:39:19.908646 kernel: kvm-guest: setup PV sched yield
May 8 00:39:19.908653 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
May 8 00:39:19.908663 kernel: Booting paravirtualized kernel on KVM
May 8 00:39:19.908670 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 8 00:39:19.908677 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
May 8 00:39:19.908684 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u524288
May 8 00:39:19.908691 kernel: pcpu-alloc: s197096 r8192 d32280 u524288 alloc=1*2097152
May 8 00:39:19.908698 kernel: pcpu-alloc: [0] 0 1 2 3
May 8 00:39:19.908705 kernel: kvm-guest: PV spinlocks enabled
May 8 00:39:19.908712 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 8 00:39:19.908720 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=86cfbfcc89a9c46f6cbba5bdb3509d1ce1367f0c93b0b0e4c6bdcad1a2064c90
May 8 00:39:19.908733 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 8 00:39:19.908740 kernel: random: crng init done May 8 00:39:19.908747 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 8 00:39:19.908754 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 8 00:39:19.908761 kernel: Fallback order for Node 0: 0 May 8 00:39:19.908768 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 May 8 00:39:19.908775 kernel: Policy zone: DMA32 May 8 00:39:19.908782 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 8 00:39:19.908792 kernel: Memory: 2400600K/2567000K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42856K init, 2336K bss, 166140K reserved, 0K cma-reserved) May 8 00:39:19.908799 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 8 00:39:19.908806 kernel: ftrace: allocating 37944 entries in 149 pages May 8 00:39:19.908813 kernel: ftrace: allocated 149 pages with 4 groups May 8 00:39:19.908821 kernel: Dynamic Preempt: voluntary May 8 00:39:19.908835 kernel: rcu: Preemptible hierarchical RCU implementation. May 8 00:39:19.908846 kernel: rcu: RCU event tracing is enabled. May 8 00:39:19.908853 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 8 00:39:19.908861 kernel: Trampoline variant of Tasks RCU enabled. May 8 00:39:19.908868 kernel: Rude variant of Tasks RCU enabled. May 8 00:39:19.908875 kernel: Tracing variant of Tasks RCU enabled. May 8 00:39:19.908882 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 8 00:39:19.908892 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 8 00:39:19.908899 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 May 8 00:39:19.908907 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 8 00:39:19.908914 kernel: Console: colour dummy device 80x25 May 8 00:39:19.908921 kernel: printk: console [ttyS0] enabled May 8 00:39:19.908931 kernel: ACPI: Core revision 20230628 May 8 00:39:19.908938 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns May 8 00:39:19.908946 kernel: APIC: Switch to symmetric I/O mode setup May 8 00:39:19.908971 kernel: x2apic enabled May 8 00:39:19.908979 kernel: APIC: Switched APIC routing to: physical x2apic May 8 00:39:19.908986 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() May 8 00:39:19.908994 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() May 8 00:39:19.909001 kernel: kvm-guest: setup PV IPIs May 8 00:39:19.909008 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 8 00:39:19.909019 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized May 8 00:39:19.909026 kernel: Calibrating delay loop (skipped) preset value.. 
May 8 00:39:19.909034 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 8 00:39:19.909041 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 8 00:39:19.909048 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 8 00:39:19.909056 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 8 00:39:19.909063 kernel: Spectre V2 : Mitigation: Retpolines
May 8 00:39:19.909071 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
May 8 00:39:19.909078 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
May 8 00:39:19.909088 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 8 00:39:19.909095 kernel: RETBleed: Mitigation: untrained return thunk
May 8 00:39:19.909103 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 8 00:39:19.909110 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 8 00:39:19.909120 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 8 00:39:19.909128 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 8 00:39:19.909135 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 8 00:39:19.909143 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 8 00:39:19.909153 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 8 00:39:19.909160 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 8 00:39:19.909167 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 8 00:39:19.909175 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 8 00:39:19.909182 kernel: Freeing SMP alternatives memory: 32K
May 8 00:39:19.909189 kernel: pid_max: default: 32768 minimum: 301
May 8 00:39:19.909196 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 8 00:39:19.909204 kernel: landlock: Up and running.
May 8 00:39:19.909211 kernel: SELinux: Initializing.
May 8 00:39:19.909221 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 8 00:39:19.909228 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 8 00:39:19.909236 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 8 00:39:19.909243 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 8 00:39:19.909250 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 8 00:39:19.909258 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 8 00:39:19.909265 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 8 00:39:19.909272 kernel: ... version: 0
May 8 00:39:19.909280 kernel: ... bit width: 48
May 8 00:39:19.909289 kernel: ... generic registers: 6
May 8 00:39:19.909297 kernel: ... value mask: 0000ffffffffffff
May 8 00:39:19.909304 kernel: ... max period: 00007fffffffffff
May 8 00:39:19.909311 kernel: ... fixed-purpose events: 0
May 8 00:39:19.909318 kernel: ... event mask: 000000000000003f
May 8 00:39:19.909325 kernel: signal: max sigframe size: 1776
May 8 00:39:19.909333 kernel: rcu: Hierarchical SRCU implementation.
May 8 00:39:19.909340 kernel: rcu: Max phase no-delay instances is 400.
May 8 00:39:19.909347 kernel: smp: Bringing up secondary CPUs ...
May 8 00:39:19.909357 kernel: smpboot: x86: Booting SMP configuration:
May 8 00:39:19.909364 kernel: .... node #0, CPUs: #1 #2 #3
May 8 00:39:19.909371 kernel: smp: Brought up 1 node, 4 CPUs
May 8 00:39:19.909379 kernel: smpboot: Max logical packages: 1
May 8 00:39:19.909390 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
May 8 00:39:19.909404 kernel: devtmpfs: initialized
May 8 00:39:19.909418 kernel: x86/mm: Memory block size: 128MB
May 8 00:39:19.909432 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
May 8 00:39:19.909446 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
May 8 00:39:19.909463 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
May 8 00:39:19.909484 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
May 8 00:39:19.909498 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
May 8 00:39:19.909512 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 8 00:39:19.909529 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 8 00:39:19.909543 kernel: pinctrl core: initialized pinctrl subsystem
May 8 00:39:19.909556 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 8 00:39:19.909567 kernel: audit: initializing netlink subsys (disabled)
May 8 00:39:19.909574 kernel: audit: type=2000 audit(1746664758.910:1): state=initialized audit_enabled=0 res=1
May 8 00:39:19.909584 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 8 00:39:19.909591 kernel: thermal_sys: Registered thermal governor 'user_space'
May 8 00:39:19.909598 kernel: cpuidle: using governor menu
May 8 00:39:19.909606 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 8 00:39:19.909613 kernel: dca service started, version 1.12.1
May 8 00:39:19.909621 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
May 8 00:39:19.909628 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
May 8 00:39:19.909636 kernel: PCI: Using configuration type 1 for base access
May 8 00:39:19.909643 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 8 00:39:19.909655 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 8 00:39:19.909663 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 8 00:39:19.909672 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 8 00:39:19.909680 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 8 00:39:19.909687 kernel: ACPI: Added _OSI(Module Device)
May 8 00:39:19.909695 kernel: ACPI: Added _OSI(Processor Device)
May 8 00:39:19.909702 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 8 00:39:19.909709 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 8 00:39:19.909716 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 8 00:39:19.909726 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 8 00:39:19.909733 kernel: ACPI: Interpreter enabled
May 8 00:39:19.909740 kernel: ACPI: PM: (supports S0 S3 S5)
May 8 00:39:19.909748 kernel: ACPI: Using IOAPIC for interrupt routing
May 8 00:39:19.909755 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 8 00:39:19.909763 kernel: PCI: Using E820 reservations for host bridge windows
May 8 00:39:19.909770 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 8 00:39:19.909777 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 8 00:39:19.910104 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 8 00:39:19.910248 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 8 00:39:19.910374 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 8 00:39:19.910384 kernel: PCI host bridge to bus 0000:00
May 8 00:39:19.910526 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 8 00:39:19.910643 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 8 00:39:19.910756 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 8 00:39:19.910874 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
May 8 00:39:19.911023 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 8 00:39:19.911138 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
May 8 00:39:19.911249 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 8 00:39:19.911403 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
May 8 00:39:19.911555 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
May 8 00:39:19.911923 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
May 8 00:39:19.913145 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
May 8 00:39:19.913316 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
May 8 00:39:19.913444 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
May 8 00:39:19.913569 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 8 00:39:19.913720 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
May 8 00:39:19.913848 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
May 8 00:39:19.914039 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
May 8 00:39:19.914165 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
May 8 00:39:19.914307 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
May 8 00:39:19.914496 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
May 8 00:39:19.914644 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
May 8 00:39:19.914769 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
May 8 00:39:19.914916 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
May 8 00:39:19.915075 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
May 8 00:39:19.915204 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
May 8 00:39:19.915329 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
May 8 00:39:19.915453 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
May 8 00:39:19.915600 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
May 8 00:39:19.915726 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 8 00:39:19.915864 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
May 8 00:39:19.916071 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
May 8 00:39:19.916197 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
May 8 00:39:19.916336 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
May 8 00:39:19.916461 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
May 8 00:39:19.916471 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 8 00:39:19.916480 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 8 00:39:19.916488 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 8 00:39:19.916500 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 8 00:39:19.916508 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 8 00:39:19.916516 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 8 00:39:19.916524 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 8 00:39:19.916531 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 8 00:39:19.916539 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 8 00:39:19.916546 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 8 00:39:19.916554 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 8 00:39:19.916561 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 8 00:39:19.916569 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 8 00:39:19.916579 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 8 00:39:19.916587 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 8 00:39:19.916594 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 8 00:39:19.916602 kernel: iommu: Default domain type: Translated
May 8 00:39:19.916609 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 8 00:39:19.916617 kernel: efivars: Registered efivars operations
May 8 00:39:19.916624 kernel: PCI: Using ACPI for IRQ routing
May 8 00:39:19.916632 kernel: PCI: pci_cache_line_size set to 64 bytes
May 8 00:39:19.916640 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
May 8 00:39:19.916650 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
May 8 00:39:19.916658 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
May 8 00:39:19.916665 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
May 8 00:39:19.916788 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 8 00:39:19.916911 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 8 00:39:19.917056 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 8 00:39:19.917068 kernel: vgaarb: loaded
May 8 00:39:19.917075 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 8 00:39:19.917087 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 8 00:39:19.917095 kernel: clocksource: Switched to clocksource kvm-clock
May 8 00:39:19.917102 kernel: VFS: Disk quotas dquot_6.6.0
May 8 00:39:19.917111 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 8 00:39:19.917118 kernel: pnp: PnP ACPI init
May 8 00:39:19.917264 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
May 8 00:39:19.917277 kernel: pnp: PnP ACPI: found 6 devices
May 8 00:39:19.917285 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 8 00:39:19.917296 kernel: NET: Registered PF_INET protocol family
May 8 00:39:19.917303 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 8 00:39:19.917311 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 8 00:39:19.917319 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 8 00:39:19.917327 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 8 00:39:19.917335 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 8 00:39:19.917342 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 8 00:39:19.917350 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 8 00:39:19.917357 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 8 00:39:19.917367 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 8 00:39:19.917375 kernel: NET: Registered PF_XDP protocol family
May 8 00:39:19.917501 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
May 8 00:39:19.917628 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
May 8 00:39:19.917750 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 8 00:39:19.917865 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 8 00:39:19.918022 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 8 00:39:19.918137 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
May 8 00:39:19.918255 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
May 8 00:39:19.918366 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
May 8 00:39:19.918377 kernel: PCI: CLS 0 bytes, default 64
May 8 00:39:19.918384 kernel: Initialise system trusted keyrings
May 8 00:39:19.918392 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 8 00:39:19.918400 kernel: Key type asymmetric registered
May 8 00:39:19.918408 kernel: Asymmetric key parser 'x509' registered
May 8 00:39:19.918415 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 8 00:39:19.918423 kernel: io scheduler mq-deadline registered
May 8 00:39:19.918434 kernel: io scheduler kyber registered
May 8 00:39:19.918442 kernel: io scheduler bfq registered
May 8 00:39:19.918449 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 8 00:39:19.918458 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 8 00:39:19.918466 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 8 00:39:19.918473 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 8 00:39:19.918481 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 8 00:39:19.918489 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 8 00:39:19.918497 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 8 00:39:19.918507 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 8 00:39:19.918515 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 8 00:39:19.918523 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 8 00:39:19.918654 kernel: rtc_cmos 00:04: RTC can wake from S4
May 8 00:39:19.918772 kernel: rtc_cmos 00:04: registered as rtc0
May 8 00:39:19.918886 kernel: rtc_cmos 00:04: setting system clock to 2025-05-08T00:39:19 UTC (1746664759)
May 8 00:39:19.919024 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
May 8 00:39:19.919035 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 8 00:39:19.919047 kernel: efifb: probing for efifb
May 8 00:39:19.919055 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
May 8 00:39:19.919063 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
May 8 00:39:19.919070 kernel: efifb: scrolling: redraw
May 8 00:39:19.919078 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
May 8 00:39:19.919086 kernel: Console: switching to colour frame buffer device 100x37
May 8 00:39:19.919112 kernel: fb0: EFI VGA frame buffer device
May 8 00:39:19.919122 kernel: pstore: Using crash dump compression: deflate
May 8 00:39:19.919130 kernel: pstore: Registered efi_pstore as persistent store backend
May 8 00:39:19.919140 kernel: NET: Registered PF_INET6 protocol family
May 8 00:39:19.919148 kernel: Segment Routing with IPv6
May 8 00:39:19.919156 kernel: In-situ OAM (IOAM) with IPv6
May 8 00:39:19.919163 kernel: NET: Registered PF_PACKET protocol family
May 8 00:39:19.919171 kernel: Key type dns_resolver registered
May 8 00:39:19.919179 kernel: IPI shorthand broadcast: enabled
May 8 00:39:19.919187 kernel: sched_clock: Marking stable (671009432, 115884380)->(809777852, -22884040)
May 8 00:39:19.919195 kernel: registered taskstats version 1
May 8 00:39:19.919203 kernel: Loading compiled-in X.509 certificates
May 8 00:39:19.919213 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: 75e4e434c57439d3f2eaf7797bbbcdd698dafd0e'
May 8 00:39:19.919221 kernel: Key type .fscrypt registered
May 8 00:39:19.919229 kernel: Key type fscrypt-provisioning registered
May 8 00:39:19.919236 kernel: ima: No TPM chip found, activating TPM-bypass!
May 8 00:39:19.919244 kernel: ima: Allocated hash algorithm: sha1
May 8 00:39:19.919252 kernel: ima: No architecture policies found
May 8 00:39:19.919260 kernel: clk: Disabling unused clocks
May 8 00:39:19.919267 kernel: Freeing unused kernel image (initmem) memory: 42856K
May 8 00:39:19.919275 kernel: Write protecting the kernel read-only data: 36864k
May 8 00:39:19.919286 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K
May 8 00:39:19.919294 kernel: Run /init as init process
May 8 00:39:19.919301 kernel: with arguments:
May 8 00:39:19.919309 kernel: /init
May 8 00:39:19.919317 kernel: with environment:
May 8 00:39:19.919324 kernel: HOME=/
May 8 00:39:19.919332 kernel: TERM=linux
May 8 00:39:19.919340 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 8 00:39:19.919354 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 8 00:39:19.919364 systemd[1]: Detected virtualization kvm.
May 8 00:39:19.919373 systemd[1]: Detected architecture x86-64.
May 8 00:39:19.919381 systemd[1]: Running in initrd.
May 8 00:39:19.919394 systemd[1]: No hostname configured, using default hostname.
May 8 00:39:19.919402 systemd[1]: Hostname set to <localhost>.
May 8 00:39:19.919411 systemd[1]: Initializing machine ID from VM UUID.
May 8 00:39:19.919419 systemd[1]: Queued start job for default target initrd.target.
May 8 00:39:19.919428 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 8 00:39:19.919436 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 8 00:39:19.919445 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 8 00:39:19.919453 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 8 00:39:19.919464 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 8 00:39:19.919473 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 8 00:39:19.919483 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 8 00:39:19.919491 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 8 00:39:19.919499 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 8 00:39:19.919508 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 8 00:39:19.919516 systemd[1]: Reached target paths.target - Path Units.
May 8 00:39:19.919526 systemd[1]: Reached target slices.target - Slice Units.
May 8 00:39:19.919534 systemd[1]: Reached target swap.target - Swaps.
May 8 00:39:19.919543 systemd[1]: Reached target timers.target - Timer Units.
May 8 00:39:19.919551 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 8 00:39:19.919559 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 8 00:39:19.919568 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 8 00:39:19.919576 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 8 00:39:19.919585 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 8 00:39:19.919593 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 8 00:39:19.919604 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 8 00:39:19.919612 systemd[1]: Reached target sockets.target - Socket Units.
May 8 00:39:19.919620 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 8 00:39:19.919630 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 8 00:39:19.919640 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 8 00:39:19.919650 systemd[1]: Starting systemd-fsck-usr.service...
May 8 00:39:19.919659 systemd[1]: Starting systemd-journald.service - Journal Service...
May 8 00:39:19.919667 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 8 00:39:19.919678 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:39:19.919686 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 8 00:39:19.919695 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 8 00:39:19.919703 systemd[1]: Finished systemd-fsck-usr.service.
May 8 00:39:19.919713 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 8 00:39:19.919745 systemd-journald[191]: Collecting audit messages is disabled.
May 8 00:39:19.919766 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:39:19.919775 systemd-journald[191]: Journal started
May 8 00:39:19.919796 systemd-journald[191]: Runtime Journal (/run/log/journal/ac169f1cc58e4072a98229eed723ac5d) is 6.0M, max 48.3M, 42.2M free.
May 8 00:39:19.910088 systemd-modules-load[193]: Inserted module 'overlay'
May 8 00:39:19.924024 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 00:39:19.926980 systemd[1]: Started systemd-journald.service - Journal Service.
May 8 00:39:19.927128 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 8 00:39:19.931300 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 8 00:39:19.934097 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 8 00:39:19.943496 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:39:19.948509 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 8 00:39:19.953136 kernel: Bridge firewalling registered
May 8 00:39:19.951134 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 8 00:39:19.952653 systemd-modules-load[193]: Inserted module 'br_netfilter'
May 8 00:39:19.953363 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 8 00:39:19.954804 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 8 00:39:19.957007 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 8 00:39:19.960941 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 8 00:39:19.974636 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 8 00:39:19.976875 dracut-cmdline[221]: dracut-dracut-053
May 8 00:39:19.978428 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=86cfbfcc89a9c46f6cbba5bdb3509d1ce1367f0c93b0b0e4c6bdcad1a2064c90
May 8 00:39:19.985195 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 8 00:39:20.015427 systemd-resolved[238]: Positive Trust Anchors:
May 8 00:39:20.015443 systemd-resolved[238]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 8 00:39:20.015474 systemd-resolved[238]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 8 00:39:20.018069 systemd-resolved[238]: Defaulting to hostname 'linux'.
May 8 00:39:20.019232 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 8 00:39:20.025771 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 8 00:39:20.067996 kernel: SCSI subsystem initialized
May 8 00:39:20.076978 kernel: Loading iSCSI transport class v2.0-870.
May 8 00:39:20.087994 kernel: iscsi: registered transport (tcp)
May 8 00:39:20.108996 kernel: iscsi: registered transport (qla4xxx)
May 8 00:39:20.109036 kernel: QLogic iSCSI HBA Driver
May 8 00:39:20.164438 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 8 00:39:20.178121 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 8 00:39:20.206740 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 8 00:39:20.206822 kernel: device-mapper: uevent: version 1.0.3
May 8 00:39:20.206834 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 8 00:39:20.248982 kernel: raid6: avx2x4 gen() 30260 MB/s
May 8 00:39:20.265986 kernel: raid6: avx2x2 gen() 31166 MB/s
May 8 00:39:20.283069 kernel: raid6: avx2x1 gen() 25990 MB/s
May 8 00:39:20.283087 kernel: raid6: using algorithm avx2x2 gen() 31166 MB/s
May 8 00:39:20.301105 kernel: raid6: .... xor() 19978 MB/s, rmw enabled
May 8 00:39:20.301132 kernel: raid6: using avx2x2 recovery algorithm
May 8 00:39:20.320981 kernel: xor: automatically using best checksumming function avx
May 8 00:39:20.476010 kernel: Btrfs loaded, zoned=no, fsverity=no
May 8 00:39:20.490404 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 8 00:39:20.503088 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 8 00:39:20.515584 systemd-udevd[413]: Using default interface naming scheme 'v255'.
May 8 00:39:20.520348 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 8 00:39:20.534141 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 8 00:39:20.549172 dracut-pre-trigger[417]: rd.md=0: removing MD RAID activation
May 8 00:39:20.583259 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 8 00:39:20.600203 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 8 00:39:20.676403 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 8 00:39:20.691151 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 8 00:39:20.705297 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 8 00:39:20.709231 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 8 00:39:20.712070 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 8 00:39:20.714617 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 8 00:39:20.718045 kernel: cryptd: max_cpu_qlen set to 1000
May 8 00:39:20.728467 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 8 00:39:20.742166 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
May 8 00:39:20.753034 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 8 00:39:20.753203 kernel: AVX2 version of gcm_enc/dec engaged.
May 8 00:39:20.753216 kernel: AES CTR mode by8 optimization enabled
May 8 00:39:20.753226 kernel: libata version 3.00 loaded.
May 8 00:39:20.753237 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 8 00:39:20.753247 kernel: GPT:9289727 != 19775487
May 8 00:39:20.753257 kernel: GPT:Alternate GPT header not at the end of the disk.
May 8 00:39:20.753267 kernel: GPT:9289727 != 19775487
May 8 00:39:20.753277 kernel: GPT: Use GNU Parted to correct GPT errors.
May 8 00:39:20.753286 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 00:39:20.742796 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 8 00:39:20.758171 kernel: ahci 0000:00:1f.2: version 3.0
May 8 00:39:20.782340 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 8 00:39:20.782357 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
May 8 00:39:20.782516 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 8 00:39:20.782659 kernel: scsi host0: ahci
May 8 00:39:20.782821 kernel: scsi host1: ahci
May 8 00:39:20.783036 kernel: scsi host2: ahci
May 8 00:39:20.783184 kernel: scsi host3: ahci
May 8 00:39:20.783335 kernel: scsi host4: ahci
May 8 00:39:20.783484 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (462)
May 8 00:39:20.783495 kernel: scsi host5: ahci
May 8 00:39:20.783641 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
May 8 00:39:20.783656 kernel: BTRFS: device fsid 28014d97-e6d7-4db4-b1d9-76a980e09972 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (471)
May 8 00:39:20.783667 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
May 8 00:39:20.783677 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
May 8 00:39:20.783687 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
May 8 00:39:20.783697 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
May 8 00:39:20.783707 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
May 8 00:39:20.760478 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 8 00:39:20.760641 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:39:20.762615 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 00:39:20.766231 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 8 00:39:20.766425 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:39:20.769097 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:39:20.777633 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:39:20.808207 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 8 00:39:20.813985 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 8 00:39:20.819653 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 8 00:39:20.824149 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 8 00:39:20.824616 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 8 00:39:20.840172 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 8 00:39:20.840459 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 8 00:39:20.840527 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:39:20.843311 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:39:20.844428 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:39:20.860513 disk-uuid[555]: Primary Header is updated.
May 8 00:39:20.860513 disk-uuid[555]: Secondary Entries is updated.
May 8 00:39:20.860513 disk-uuid[555]: Secondary Header is updated.
May 8 00:39:20.864987 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 00:39:20.866453 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:39:20.870982 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 00:39:20.873150 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 00:39:20.902123 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:39:21.096035 kernel: ata2: SATA link down (SStatus 0 SControl 300)
May 8 00:39:21.096151 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
May 8 00:39:21.096170 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 8 00:39:21.096184 kernel: ata1: SATA link down (SStatus 0 SControl 300)
May 8 00:39:21.097987 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 8 00:39:21.098076 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 8 00:39:21.098984 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
May 8 00:39:21.099992 kernel: ata3.00: applying bridge limits
May 8 00:39:21.100985 kernel: ata3.00: configured for UDMA/100
May 8 00:39:21.101025 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
May 8 00:39:21.151007 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
May 8 00:39:21.164701 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 8 00:39:21.164719 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
May 8 00:39:21.870989 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 00:39:21.871259 disk-uuid[557]: The operation has completed successfully.
May 8 00:39:21.903072 systemd[1]: disk-uuid.service: Deactivated successfully.
May 8 00:39:21.903218 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 8 00:39:21.925294 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 8 00:39:21.928454 sh[595]: Success
May 8 00:39:21.941976 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
May 8 00:39:21.975910 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 8 00:39:21.993985 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 8 00:39:21.998441 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 8 00:39:22.012991 kernel: BTRFS info (device dm-0): first mount of filesystem 28014d97-e6d7-4db4-b1d9-76a980e09972
May 8 00:39:22.013019 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 8 00:39:22.013030 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 8 00:39:22.014038 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 8 00:39:22.015404 kernel: BTRFS info (device dm-0): using free space tree
May 8 00:39:22.020172 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 8 00:39:22.022664 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 8 00:39:22.039175 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 8 00:39:22.041888 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 8 00:39:22.051476 kernel: BTRFS info (device vda6): first mount of filesystem a884989d-7a9b-4fbd-878f-8ac586ff8595
May 8 00:39:22.051522 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 8 00:39:22.051533 kernel: BTRFS info (device vda6): using free space tree
May 8 00:39:22.053988 kernel: BTRFS info (device vda6): auto enabling async discard
May 8 00:39:22.063924 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 8 00:39:22.066128 kernel: BTRFS info (device vda6): last unmount of filesystem a884989d-7a9b-4fbd-878f-8ac586ff8595
May 8 00:39:22.079352 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 8 00:39:22.089128 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 8 00:39:22.146816 ignition[693]: Ignition 2.19.0
May 8 00:39:22.146826 ignition[693]: Stage: fetch-offline
May 8 00:39:22.146862 ignition[693]: no configs at "/usr/lib/ignition/base.d"
May 8 00:39:22.146872 ignition[693]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:39:22.146988 ignition[693]: parsed url from cmdline: ""
May 8 00:39:22.146992 ignition[693]: no config URL provided
May 8 00:39:22.146997 ignition[693]: reading system config file "/usr/lib/ignition/user.ign"
May 8 00:39:22.147008 ignition[693]: no config at "/usr/lib/ignition/user.ign"
May 8 00:39:22.147035 ignition[693]: op(1): [started] loading QEMU firmware config module
May 8 00:39:22.147043 ignition[693]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 8 00:39:22.158341 ignition[693]: op(1): [finished] loading QEMU firmware config module
May 8 00:39:22.169018 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 8 00:39:22.180098 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 8 00:39:22.200850 ignition[693]: parsing config with SHA512: 1b821e416bc9f8315a53beea40dd81ac03c6ec48d0d2edf074b3d3d2a43cb4e5d3222c8d098c5e75a052520dfc22be0f220bc397d579b59f78d9cf2caf141ec9
May 8 00:39:22.204835 systemd-networkd[782]: lo: Link UP
May 8 00:39:22.204847 systemd-networkd[782]: lo: Gained carrier
May 8 00:39:22.205864 ignition[693]: fetch-offline: fetch-offline passed
May 8 00:39:22.205499 unknown[693]: fetched base config from "system"
May 8 00:39:22.205941 ignition[693]: Ignition finished successfully
May 8 00:39:22.205508 unknown[693]: fetched user config from "qemu"
May 8 00:39:22.206657 systemd-networkd[782]: Enumeration completed
May 8 00:39:22.207192 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 8 00:39:22.207194 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 8 00:39:22.207198 systemd-networkd[782]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 8 00:39:22.208255 systemd-networkd[782]: eth0: Link UP
May 8 00:39:22.208260 systemd-networkd[782]: eth0: Gained carrier
May 8 00:39:22.208267 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 8 00:39:22.210661 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 8 00:39:22.213187 systemd[1]: Reached target network.target - Network.
May 8 00:39:22.215260 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 8 00:39:22.221997 systemd-networkd[782]: eth0: DHCPv4 address 10.0.0.76/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 8 00:39:22.225143 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 8 00:39:22.245188 ignition[785]: Ignition 2.19.0
May 8 00:39:22.245200 ignition[785]: Stage: kargs
May 8 00:39:22.245376 ignition[785]: no configs at "/usr/lib/ignition/base.d"
May 8 00:39:22.245388 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:39:22.246290 ignition[785]: kargs: kargs passed
May 8 00:39:22.246340 ignition[785]: Ignition finished successfully
May 8 00:39:22.249717 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 8 00:39:22.267142 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 8 00:39:22.282213 ignition[795]: Ignition 2.19.0
May 8 00:39:22.282227 ignition[795]: Stage: disks
May 8 00:39:22.282437 ignition[795]: no configs at "/usr/lib/ignition/base.d"
May 8 00:39:22.282455 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:39:22.283313 ignition[795]: disks: disks passed
May 8 00:39:22.286022 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 8 00:39:22.283363 ignition[795]: Ignition finished successfully
May 8 00:39:22.287722 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 8 00:39:22.289592 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 8 00:39:22.291570 systemd[1]: Reached target local-fs.target - Local File Systems.
May 8 00:39:22.293649 systemd[1]: Reached target sysinit.target - System Initialization.
May 8 00:39:22.295912 systemd[1]: Reached target basic.target - Basic System.
May 8 00:39:22.307077 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 8 00:39:22.320461 systemd-resolved[238]: Detected conflict on linux IN A 10.0.0.76
May 8 00:39:22.320475 systemd-resolved[238]: Hostname conflict, changing published hostname from 'linux' to 'linux11'.
May 8 00:39:22.322485 systemd-fsck[804]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 8 00:39:22.333257 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 8 00:39:22.343042 systemd[1]: Mounting sysroot.mount - /sysroot...
May 8 00:39:22.431994 kernel: EXT4-fs (vda9): mounted filesystem 36960c89-ba45-4808-a41c-bf61ce9470a3 r/w with ordered data mode. Quota mode: none.
May 8 00:39:22.433140 systemd[1]: Mounted sysroot.mount - /sysroot.
May 8 00:39:22.434499 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 8 00:39:22.450133 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 8 00:39:22.452079 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 8 00:39:22.454725 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 8 00:39:22.454785 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 8 00:39:22.465006 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (812)
May 8 00:39:22.465025 kernel: BTRFS info (device vda6): first mount of filesystem a884989d-7a9b-4fbd-878f-8ac586ff8595
May 8 00:39:22.465036 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 8 00:39:22.465047 kernel: BTRFS info (device vda6): using free space tree
May 8 00:39:22.465057 kernel: BTRFS info (device vda6): auto enabling async discard
May 8 00:39:22.454812 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 8 00:39:22.467462 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 8 00:39:22.487205 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 8 00:39:22.490061 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 8 00:39:22.526706 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory May 8 00:39:22.531818 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory May 8 00:39:22.536596 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory May 8 00:39:22.540284 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory May 8 00:39:22.625923 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 8 00:39:22.633073 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 8 00:39:22.635155 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 8 00:39:22.641984 kernel: BTRFS info (device vda6): last unmount of filesystem a884989d-7a9b-4fbd-878f-8ac586ff8595 May 8 00:39:22.662430 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 8 00:39:22.663819 ignition[925]: INFO : Ignition 2.19.0 May 8 00:39:22.663819 ignition[925]: INFO : Stage: mount May 8 00:39:22.665901 ignition[925]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 00:39:22.665901 ignition[925]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 00:39:22.665901 ignition[925]: INFO : mount: mount passed May 8 00:39:22.665901 ignition[925]: INFO : Ignition finished successfully May 8 00:39:22.669935 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 8 00:39:22.681107 systemd[1]: Starting ignition-files.service - Ignition (files)... May 8 00:39:23.012171 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 8 00:39:23.021208 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 8 00:39:23.027976 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (938) May 8 00:39:23.030031 kernel: BTRFS info (device vda6): first mount of filesystem a884989d-7a9b-4fbd-878f-8ac586ff8595 May 8 00:39:23.030046 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 8 00:39:23.030056 kernel: BTRFS info (device vda6): using free space tree May 8 00:39:23.033983 kernel: BTRFS info (device vda6): auto enabling async discard May 8 00:39:23.035370 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 8 00:39:23.056675 ignition[955]: INFO : Ignition 2.19.0 May 8 00:39:23.056675 ignition[955]: INFO : Stage: files May 8 00:39:23.058479 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 00:39:23.058479 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 00:39:23.058479 ignition[955]: DEBUG : files: compiled without relabeling support, skipping May 8 00:39:23.062243 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 8 00:39:23.062243 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 8 00:39:23.065275 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 8 00:39:23.066790 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 8 00:39:23.066790 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 8 00:39:23.065914 unknown[955]: wrote ssh authorized keys file for user: core May 8 00:39:23.070689 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" May 8 00:39:23.070689 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" May 8 00:39:23.070689 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 8 00:39:23.070689 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 8 00:39:23.118770 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 8 00:39:23.232082 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 8 00:39:23.234314 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 8 00:39:23.234314 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 8 00:39:23.234314 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 8 00:39:23.234314 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 8 00:39:23.234314 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 8 00:39:23.234314 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 8 00:39:23.234314 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 8 00:39:23.234314 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 8 00:39:23.234314 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 8 00:39:23.234314 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 8 00:39:23.234314 ignition[955]: INFO : files: 
createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 8 00:39:23.234314 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 8 00:39:23.234314 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 8 00:39:23.234314 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 May 8 00:39:23.595533 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 8 00:39:23.968902 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 8 00:39:23.968902 ignition[955]: INFO : files: op(c): [started] processing unit "containerd.service" May 8 00:39:23.972595 ignition[955]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 8 00:39:23.975207 ignition[955]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 8 00:39:23.975207 ignition[955]: INFO : files: op(c): [finished] processing unit "containerd.service" May 8 00:39:23.975207 ignition[955]: INFO : files: op(e): [started] processing unit "prepare-helm.service" May 8 00:39:23.979735 ignition[955]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 8 00:39:23.981621 ignition[955]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 8 00:39:23.981621 ignition[955]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" May 8 00:39:23.984645 ignition[955]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" May 8 00:39:23.984645 ignition[955]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 8 00:39:23.987842 ignition[955]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 8 00:39:23.987842 ignition[955]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" May 8 00:39:23.990993 ignition[955]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service" May 8 00:39:24.011773 ignition[955]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service" May 8 00:39:24.017106 ignition[955]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 8 00:39:24.018841 ignition[955]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service" May 8 00:39:24.018841 ignition[955]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service" May 8 00:39:24.021614 ignition[955]: INFO : files: op(14): [finished] setting preset to enabled for 
"prepare-helm.service" May 8 00:39:24.023095 ignition[955]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" May 8 00:39:24.024863 ignition[955]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" May 8 00:39:24.026533 ignition[955]: INFO : files: files passed May 8 00:39:24.027273 ignition[955]: INFO : Ignition finished successfully May 8 00:39:24.030391 systemd[1]: Finished ignition-files.service - Ignition (files). May 8 00:39:24.045132 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 8 00:39:24.048033 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 8 00:39:24.050705 systemd[1]: ignition-quench.service: Deactivated successfully. May 8 00:39:24.051775 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 8 00:39:24.056048 systemd-networkd[782]: eth0: Gained IPv6LL May 8 00:39:24.057653 initrd-setup-root-after-ignition[983]: grep: /sysroot/oem/oem-release: No such file or directory May 8 00:39:24.061738 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 8 00:39:24.061738 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 8 00:39:24.065001 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 8 00:39:24.068152 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 8 00:39:24.070840 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 8 00:39:24.078159 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 8 00:39:24.102232 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 8 00:39:24.102387 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 8 00:39:24.103116 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 8 00:39:24.103389 systemd[1]: Reached target initrd.target - Initrd Default Target. May 8 00:39:24.103751 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 8 00:39:24.104513 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 8 00:39:24.126342 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 8 00:39:24.137098 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 8 00:39:24.146232 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 8 00:39:24.147581 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 8 00:39:24.149853 systemd[1]: Stopped target timers.target - Timer Units. May 8 00:39:24.151894 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 8 00:39:24.152025 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 8 00:39:24.154216 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 8 00:39:24.156033 systemd[1]: Stopped target basic.target - Basic System. May 8 00:39:24.158062 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 8 00:39:24.160094 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. 
May 8 00:39:24.162131 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 8 00:39:24.164328 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 8 00:39:24.166487 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 8 00:39:24.168801 systemd[1]: Stopped target sysinit.target - System Initialization. May 8 00:39:24.170826 systemd[1]: Stopped target local-fs.target - Local File Systems. May 8 00:39:24.173066 systemd[1]: Stopped target swap.target - Swaps. May 8 00:39:24.174853 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 8 00:39:24.174978 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 8 00:39:24.177166 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 8 00:39:24.178833 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 8 00:39:24.180965 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 8 00:39:24.181089 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 8 00:39:24.183170 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 8 00:39:24.183284 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 8 00:39:24.185504 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 8 00:39:24.185624 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 8 00:39:24.187730 systemd[1]: Stopped target paths.target - Path Units. May 8 00:39:24.189452 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 8 00:39:24.193005 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 8 00:39:24.195112 systemd[1]: Stopped target slices.target - Slice Units. May 8 00:39:24.197094 systemd[1]: Stopped target sockets.target - Socket Units. May 8 00:39:24.198874 systemd[1]: iscsid.socket: Deactivated successfully. May 8 00:39:24.198989 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 8 00:39:24.200906 systemd[1]: iscsiuio.socket: Deactivated successfully. May 8 00:39:24.201012 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 8 00:39:24.203470 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 8 00:39:24.203587 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 8 00:39:24.205610 systemd[1]: ignition-files.service: Deactivated successfully. May 8 00:39:24.205715 systemd[1]: Stopped ignition-files.service - Ignition (files). May 8 00:39:24.219118 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 8 00:39:24.220856 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 8 00:39:24.221999 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 8 00:39:24.222120 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 8 00:39:24.224320 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 8 00:39:24.224506 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 8 00:39:24.230753 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 8 00:39:24.231501 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
May 8 00:39:24.234309 ignition[1009]: INFO : Ignition 2.19.0 May 8 00:39:24.234309 ignition[1009]: INFO : Stage: umount May 8 00:39:24.234309 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 00:39:24.234309 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 00:39:24.234309 ignition[1009]: INFO : umount: umount passed May 8 00:39:24.234309 ignition[1009]: INFO : Ignition finished successfully May 8 00:39:24.236179 systemd[1]: ignition-mount.service: Deactivated successfully. May 8 00:39:24.236308 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 8 00:39:24.238285 systemd[1]: Stopped target network.target - Network. May 8 00:39:24.240216 systemd[1]: ignition-disks.service: Deactivated successfully. May 8 00:39:24.240271 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 8 00:39:24.242138 systemd[1]: ignition-kargs.service: Deactivated successfully. May 8 00:39:24.242187 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 8 00:39:24.244072 systemd[1]: ignition-setup.service: Deactivated successfully. May 8 00:39:24.244122 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 8 00:39:24.246112 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 8 00:39:24.246159 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 8 00:39:24.248359 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 8 00:39:24.250426 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 8 00:39:24.253418 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 8 00:39:24.255983 systemd-networkd[782]: eth0: DHCPv6 lease lost May 8 00:39:24.258777 systemd[1]: systemd-networkd.service: Deactivated successfully. May 8 00:39:24.259006 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 8 00:39:24.261119 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 8 00:39:24.261164 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 8 00:39:24.271062 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 8 00:39:24.272123 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 8 00:39:24.272198 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 8 00:39:24.274625 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 8 00:39:24.277419 systemd[1]: systemd-resolved.service: Deactivated successfully. May 8 00:39:24.277554 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 8 00:39:24.291237 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 8 00:39:24.291320 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 8 00:39:24.291629 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 8 00:39:24.291676 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 8 00:39:24.294203 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 8 00:39:24.294254 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 8 00:39:24.296519 systemd[1]: systemd-udevd.service: Deactivated successfully. May 8 00:39:24.296700 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 8 00:39:24.298830 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
May 8 00:39:24.298916 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 8 00:39:24.300532 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 8 00:39:24.300575 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 8 00:39:24.300834 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 8 00:39:24.300889 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 8 00:39:24.306281 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 8 00:39:24.306349 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 8 00:39:24.309331 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 8 00:39:24.309404 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 00:39:24.321098 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 8 00:39:24.321385 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 8 00:39:24.321449 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 8 00:39:24.323594 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 8 00:39:24.323644 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 8 00:39:24.325817 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 8 00:39:24.325875 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 8 00:39:24.328401 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 8 00:39:24.328450 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:39:24.329138 systemd[1]: network-cleanup.service: Deactivated successfully. May 8 00:39:24.329254 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 8 00:39:24.333494 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 8 00:39:24.333593 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 8 00:39:24.415447 systemd[1]: sysroot-boot.service: Deactivated successfully. May 8 00:39:24.415573 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 8 00:39:24.416459 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 8 00:39:24.418451 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 8 00:39:24.418502 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 8 00:39:24.434096 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 8 00:39:24.442357 systemd[1]: Switching root. May 8 00:39:24.471236 systemd-journald[191]: Journal stopped May 8 00:39:25.601941 systemd-journald[191]: Received SIGTERM from PID 1 (systemd). 
May 8 00:39:25.602127 kernel: SELinux: policy capability network_peer_controls=1 May 8 00:39:25.602143 kernel: SELinux: policy capability open_perms=1 May 8 00:39:25.602154 kernel: SELinux: policy capability extended_socket_class=1 May 8 00:39:25.602166 kernel: SELinux: policy capability always_check_network=0 May 8 00:39:25.602183 kernel: SELinux: policy capability cgroup_seclabel=1 May 8 00:39:25.602219 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 8 00:39:25.602231 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 8 00:39:25.602243 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 8 00:39:25.602254 kernel: audit: type=1403 audit(1746664764.868:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 8 00:39:25.602268 systemd[1]: Successfully loaded SELinux policy in 38.693ms. May 8 00:39:25.602294 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.840ms. May 8 00:39:25.602308 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 8 00:39:25.602321 systemd[1]: Detected virtualization kvm. May 8 00:39:25.602335 systemd[1]: Detected architecture x86-64. May 8 00:39:25.602359 systemd[1]: Detected first boot. May 8 00:39:25.602371 systemd[1]: Initializing machine ID from VM UUID. May 8 00:39:25.602383 zram_generator::config[1070]: No configuration found. May 8 00:39:25.602399 systemd[1]: Populated /etc with preset unit settings. May 8 00:39:25.602411 systemd[1]: Queued start job for default target multi-user.target. May 8 00:39:25.602424 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 8 00:39:25.602437 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 8 00:39:25.602449 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 8 00:39:25.602470 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 8 00:39:25.602483 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 8 00:39:25.602495 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 8 00:39:25.602508 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 8 00:39:25.602520 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 8 00:39:25.602533 systemd[1]: Created slice user.slice - User and Session Slice. May 8 00:39:25.602545 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 8 00:39:25.602557 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 8 00:39:25.602570 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 8 00:39:25.602590 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 8 00:39:25.602607 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 8 00:39:25.602620 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 8 00:39:25.602632 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... 
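zram_generator logs "No configuration found", so no compressed-RAM devices are created on this boot. The generator reads an INI file if one exists; a minimal sketch using the documented keys of the Rust zram-generator (whether this image ships that exact implementation is an assumption):

    # /etc/systemd/zram-generator.conf (hypothetical)
    [zram0]
    zram-size = min(ram / 2, 4096)
    compression-algorithm = zstd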
May 8 00:39:25.602645 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 8 00:39:25.602685 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 8 00:39:25.602699 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 8 00:39:25.602713 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 8 00:39:25.602725 systemd[1]: Reached target slices.target - Slice Units. May 8 00:39:25.602742 systemd[1]: Reached target swap.target - Swaps. May 8 00:39:25.602754 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 8 00:39:25.602766 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 8 00:39:25.602778 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 8 00:39:25.602791 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 8 00:39:25.602803 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 8 00:39:25.602823 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 8 00:39:25.602835 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 8 00:39:25.602850 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 8 00:39:25.602862 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 8 00:39:25.602879 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 8 00:39:25.602892 systemd[1]: Mounting media.mount - External Media Directory... May 8 00:39:25.602904 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:39:25.602917 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 8 00:39:25.602929 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 8 00:39:25.602941 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 8 00:39:25.602965 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 8 00:39:25.602981 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 8 00:39:25.602993 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 8 00:39:25.603007 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 8 00:39:25.603019 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 00:39:25.603031 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 8 00:39:25.603044 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 8 00:39:25.603056 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 8 00:39:25.603073 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 8 00:39:25.603088 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 8 00:39:25.603101 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. May 8 00:39:25.603113 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) May 8 00:39:25.603126 systemd[1]: Starting systemd-journald.service - Journal Service... 
May 8 00:39:25.603137 kernel: fuse: init (API version 7.39) May 8 00:39:25.603149 kernel: loop: module loaded May 8 00:39:25.603161 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 8 00:39:25.603174 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 8 00:39:25.603189 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 8 00:39:25.603207 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 8 00:39:25.603220 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:39:25.603232 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 8 00:39:25.603245 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 8 00:39:25.603257 systemd[1]: Mounted media.mount - External Media Directory. May 8 00:39:25.603270 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 8 00:39:25.603283 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 8 00:39:25.603295 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 8 00:39:25.603311 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 8 00:39:25.603343 systemd-journald[1155]: Collecting audit messages is disabled. May 8 00:39:25.603365 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 8 00:39:25.603377 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 8 00:39:25.603390 systemd-journald[1155]: Journal started May 8 00:39:25.603411 systemd-journald[1155]: Runtime Journal (/run/log/journal/ac169f1cc58e4072a98229eed723ac5d) is 6.0M, max 48.3M, 42.2M free. May 8 00:39:25.605016 systemd[1]: Started systemd-journald.service - Journal Service. May 8 00:39:25.607321 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:39:25.607555 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 00:39:25.609149 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:39:25.609499 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 00:39:25.611083 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 8 00:39:25.611311 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 8 00:39:25.612797 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 8 00:39:25.614326 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:39:25.614547 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 8 00:39:25.616284 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 8 00:39:25.617877 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 8 00:39:25.619516 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 8 00:39:25.632294 kernel: ACPI: bus type drm_connector registered May 8 00:39:25.633751 systemd[1]: modprobe@drm.service: Deactivated successfully. May 8 00:39:25.635612 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 8 00:39:25.637782 systemd[1]: Reached target network-pre.target - Preparation for Network. May 8 00:39:25.645017 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
May 8 00:39:25.647386 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 8 00:39:25.648563 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 8 00:39:25.651182 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 8 00:39:25.656272 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 8 00:39:25.658552 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 00:39:25.661181 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 8 00:39:25.663272 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 8 00:39:25.667168 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 8 00:39:25.671809 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 8 00:39:25.675361 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 8 00:39:25.676875 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 8 00:39:25.681805 systemd-journald[1155]: Time spent on flushing to /var/log/journal/ac169f1cc58e4072a98229eed723ac5d is 14.805ms for 989 entries. May 8 00:39:25.681805 systemd-journald[1155]: System Journal (/var/log/journal/ac169f1cc58e4072a98229eed723ac5d) is 8.0M, max 195.6M, 187.6M free. May 8 00:39:25.715270 systemd-journald[1155]: Received client request to flush runtime journal. May 8 00:39:25.681491 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 8 00:39:25.688299 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 8 00:39:25.690862 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 8 00:39:25.693178 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 8 00:39:25.706480 udevadm[1213]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 8 00:39:25.713024 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 8 00:39:25.717704 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 8 00:39:25.723290 systemd-tmpfiles[1208]: ACLs are not supported, ignoring. May 8 00:39:25.723309 systemd-tmpfiles[1208]: ACLs are not supported, ignoring. May 8 00:39:25.730073 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 8 00:39:25.742170 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 8 00:39:25.772606 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 8 00:39:25.782318 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 8 00:39:25.799203 systemd-tmpfiles[1231]: ACLs are not supported, ignoring. May 8 00:39:25.799227 systemd-tmpfiles[1231]: ACLs are not supported, ignoring. May 8 00:39:25.805040 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 8 00:39:26.227669 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
May 8 00:39:26.241396 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 8 00:39:26.267139 systemd-udevd[1237]: Using default interface naming scheme 'v255'. May 8 00:39:26.285584 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 8 00:39:26.300111 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 8 00:39:26.307788 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 8 00:39:26.330655 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. May 8 00:39:26.331244 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1253) May 8 00:39:26.379610 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 8 00:39:26.394765 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 8 00:39:26.408688 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 8 00:39:26.408866 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device May 8 00:39:26.411528 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 8 00:39:26.419618 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) May 8 00:39:26.420842 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 8 00:39:26.421023 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 May 8 00:39:26.431329 kernel: ACPI: button: Power Button [PWRF] May 8 00:39:26.449122 kernel: mousedev: PS/2 mouse device common for all mice May 8 00:39:26.461237 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:39:26.465462 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 8 00:39:26.465826 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:39:26.467625 systemd-networkd[1243]: lo: Link UP May 8 00:39:26.467629 systemd-networkd[1243]: lo: Gained carrier May 8 00:39:26.472321 systemd-networkd[1243]: Enumeration completed May 8 00:39:26.472731 systemd-networkd[1243]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 00:39:26.472736 systemd-networkd[1243]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 8 00:39:26.474475 systemd-networkd[1243]: eth0: Link UP May 8 00:39:26.474483 systemd-networkd[1243]: eth0: Gained carrier May 8 00:39:26.474495 systemd-networkd[1243]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 00:39:26.477142 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:39:26.478993 systemd[1]: Started systemd-networkd.service - Network Configuration. May 8 00:39:26.484196 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
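As in the initrd, systemd-networkd warns that eth0 matched the catch-all /usr/lib/systemd/network/zz-default.network "based on potentially unpredictable interface name". A site-specific unit that matches on a stable attribute would silence that; a minimal sketch, with the MAC address invented for illustration:

    # /etc/systemd/network/00-wired.network (hypothetical)
    [Match]
    MACAddress=52:54:00:12:34:56

    [Network]
    DHCP=yes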
May 8 00:39:26.521316 systemd-networkd[1243]: eth0: DHCPv4 address 10.0.0.76/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 8 00:39:26.548080 kernel: kvm_amd: TSC scaling supported May 8 00:39:26.548180 kernel: kvm_amd: Nested Virtualization enabled May 8 00:39:26.548198 kernel: kvm_amd: Nested Paging enabled May 8 00:39:26.548210 kernel: kvm_amd: LBR virtualization supported May 8 00:39:26.549364 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported May 8 00:39:26.549408 kernel: kvm_amd: Virtual GIF supported May 8 00:39:26.571023 kernel: EDAC MC: Ver: 3.0.0 May 8 00:39:26.581946 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:39:26.599442 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 8 00:39:26.613171 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 8 00:39:26.621662 lvm[1288]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 8 00:39:26.650090 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 8 00:39:26.651638 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 8 00:39:26.666080 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 8 00:39:26.671469 lvm[1291]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 8 00:39:26.708145 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 8 00:39:26.709670 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 8 00:39:26.710938 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 8 00:39:26.710975 systemd[1]: Reached target local-fs.target - Local File Systems. May 8 00:39:26.712014 systemd[1]: Reached target machines.target - Containers. May 8 00:39:26.714073 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). May 8 00:39:26.728093 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 8 00:39:26.730608 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 8 00:39:26.731764 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 00:39:26.732698 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 8 00:39:26.735991 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... May 8 00:39:26.740290 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 8 00:39:26.742440 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 8 00:39:26.753388 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 8 00:39:26.753973 kernel: loop0: detected capacity change from 0 to 210664 May 8 00:39:26.767387 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 8 00:39:26.768296 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
May 8 00:39:26.772988 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 8 00:39:26.800991 kernel: loop1: detected capacity change from 0 to 140768 May 8 00:39:26.837987 kernel: loop2: detected capacity change from 0 to 142488 May 8 00:39:26.876993 kernel: loop3: detected capacity change from 0 to 210664 May 8 00:39:26.885980 kernel: loop4: detected capacity change from 0 to 140768 May 8 00:39:26.895987 kernel: loop5: detected capacity change from 0 to 142488 May 8 00:39:26.905312 (sd-merge)[1311]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 8 00:39:26.906129 (sd-merge)[1311]: Merged extensions into '/usr'. May 8 00:39:26.910802 systemd[1]: Reloading requested from client PID 1299 ('systemd-sysext') (unit systemd-sysext.service)... May 8 00:39:26.910821 systemd[1]: Reloading... May 8 00:39:26.971114 zram_generator::config[1341]: No configuration found. May 8 00:39:27.008257 ldconfig[1295]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 8 00:39:27.106381 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:39:27.170502 systemd[1]: Reloading finished in 259 ms. May 8 00:39:27.190385 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 8 00:39:27.191937 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 8 00:39:27.205142 systemd[1]: Starting ensure-sysext.service... May 8 00:39:27.207232 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 8 00:39:27.212530 systemd[1]: Reloading requested from client PID 1383 ('systemctl') (unit ensure-sysext.service)... May 8 00:39:27.212545 systemd[1]: Reloading... May 8 00:39:27.232576 systemd-tmpfiles[1384]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 8 00:39:27.233119 systemd-tmpfiles[1384]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 8 00:39:27.234147 systemd-tmpfiles[1384]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 8 00:39:27.234489 systemd-tmpfiles[1384]: ACLs are not supported, ignoring. May 8 00:39:27.234601 systemd-tmpfiles[1384]: ACLs are not supported, ignoring. May 8 00:39:27.238454 systemd-tmpfiles[1384]: Detected autofs mount point /boot during canonicalization of boot. May 8 00:39:27.238471 systemd-tmpfiles[1384]: Skipping /boot May 8 00:39:27.255665 systemd-tmpfiles[1384]: Detected autofs mount point /boot during canonicalization of boot. May 8 00:39:27.255682 systemd-tmpfiles[1384]: Skipping /boot May 8 00:39:27.270010 zram_generator::config[1421]: No configuration found. May 8 00:39:27.382249 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:39:27.463609 systemd[1]: Reloading finished in 250 ms. May 8 00:39:27.483598 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 8 00:39:27.508657 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 8 00:39:27.512000 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
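systemd-sysext (the sd-merge lines above) overlays the containerd-flatcar, docker-flatcar, and kubernetes extension images onto /usr, and systemd reloads to pick up the units they ship; the kubernetes image is the .raw the files stage linked into /etc/extensions earlier. The merge can be inspected or redone by hand with the standard tool; these subcommands exist in systemd 255, though running them here is purely illustrative:

    systemd-sysext list      # show available extension images
    systemd-sysext refresh   # unmerge, then re-merge whatever is installed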
May 8 00:39:27.515027 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 8 00:39:27.520148 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 8 00:39:27.525214 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 8 00:39:27.530355 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:39:27.533780 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 8 00:39:27.536378 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 00:39:27.540736 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 8 00:39:27.547046 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 8 00:39:27.548314 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 00:39:27.548424 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:39:27.554489 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:39:27.554764 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 8 00:39:27.557610 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 8 00:39:27.559919 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:39:27.560216 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 00:39:27.563374 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:39:27.563774 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 00:39:27.575151 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 8 00:39:27.582877 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:39:27.583339 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 8 00:39:27.584443 augenrules[1492]: No rules May 8 00:39:27.591282 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 00:39:27.594199 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 8 00:39:27.600164 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 8 00:39:27.603712 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 8 00:39:27.606234 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 00:39:27.609836 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 8 00:39:27.610973 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:39:27.612772 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 8 00:39:27.614775 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 8 00:39:27.616817 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
May 8 00:39:27.617070 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 00:39:27.618881 systemd[1]: modprobe@drm.service: Deactivated successfully. May 8 00:39:27.619111 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 8 00:39:27.620672 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:39:27.620900 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 00:39:27.621260 systemd-resolved[1461]: Positive Trust Anchors: May 8 00:39:27.621275 systemd-resolved[1461]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 8 00:39:27.621307 systemd-resolved[1461]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 8 00:39:27.622608 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:39:27.622860 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 8 00:39:27.625212 systemd-resolved[1461]: Defaulting to hostname 'linux'. May 8 00:39:27.626694 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 8 00:39:27.628536 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 8 00:39:27.630192 systemd[1]: Finished ensure-sysext.service. May 8 00:39:27.638462 systemd[1]: Reached target network.target - Network. May 8 00:39:27.639533 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 8 00:39:27.640807 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 00:39:27.640891 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 8 00:39:27.655088 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 8 00:39:27.656263 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 8 00:39:27.718122 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 8 00:39:27.719106 systemd-timesyncd[1519]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 8 00:39:27.719155 systemd-timesyncd[1519]: Initial clock synchronization to Thu 2025-05-08 00:39:27.782691 UTC. May 8 00:39:27.719857 systemd[1]: Reached target sysinit.target - System Initialization. May 8 00:39:27.721059 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 8 00:39:27.722346 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 8 00:39:27.723654 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 8 00:39:27.724967 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). 
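systemd-timesyncd synchronizes against 10.0.0.1:123, presumably the NTP server the QEMU-side DHCP handed out, and steps the clock accordingly. Pinning servers instead of trusting DHCP is a one-stanza change; a sketch with placeholder pool hosts:

    # /etc/systemd/timesyncd.conf (hypothetical override)
    [Time]
    NTP=10.0.0.1
    FallbackNTP=0.pool.ntp.org 1.pool.ntp.org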
May 8 00:39:27.724992 systemd[1]: Reached target paths.target - Path Units. May 8 00:39:27.725933 systemd[1]: Reached target time-set.target - System Time Set. May 8 00:39:27.727167 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 8 00:39:27.728422 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 8 00:39:27.729717 systemd[1]: Reached target timers.target - Timer Units. May 8 00:39:27.731357 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 8 00:39:27.734364 systemd[1]: Starting docker.socket - Docker Socket for the API... May 8 00:39:27.736692 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 8 00:39:27.742278 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 8 00:39:27.743431 systemd[1]: Reached target sockets.target - Socket Units. May 8 00:39:27.744445 systemd[1]: Reached target basic.target - Basic System. May 8 00:39:27.745571 systemd[1]: System is tainted: cgroupsv1 May 8 00:39:27.745609 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 8 00:39:27.745632 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 8 00:39:27.746901 systemd[1]: Starting containerd.service - containerd container runtime... May 8 00:39:27.749166 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 8 00:39:27.751253 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 8 00:39:27.756086 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 8 00:39:27.757209 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 8 00:39:27.758405 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 8 00:39:27.760210 jq[1525]: false May 8 00:39:27.763187 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 8 00:39:27.768170 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 8 00:39:27.772259 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 8 00:39:27.776580 extend-filesystems[1526]: Found loop3 May 8 00:39:27.777611 extend-filesystems[1526]: Found loop4 May 8 00:39:27.777611 extend-filesystems[1526]: Found loop5 May 8 00:39:27.777611 extend-filesystems[1526]: Found sr0 May 8 00:39:27.777611 extend-filesystems[1526]: Found vda May 8 00:39:27.777611 extend-filesystems[1526]: Found vda1 May 8 00:39:27.779520 systemd[1]: Starting systemd-logind.service - User Login Management... May 8 00:39:27.789050 extend-filesystems[1526]: Found vda2 May 8 00:39:27.789050 extend-filesystems[1526]: Found vda3 May 8 00:39:27.789050 extend-filesystems[1526]: Found usr May 8 00:39:27.789050 extend-filesystems[1526]: Found vda4 May 8 00:39:27.789050 extend-filesystems[1526]: Found vda6 May 8 00:39:27.789050 extend-filesystems[1526]: Found vda7 May 8 00:39:27.789050 extend-filesystems[1526]: Found vda9 May 8 00:39:27.789050 extend-filesystems[1526]: Checking size of /dev/vda9 May 8 00:39:27.777708 dbus-daemon[1524]: [system] SELinux support is enabled May 8 00:39:27.781236 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
May 8 00:39:27.807623 extend-filesystems[1526]: Resized partition /dev/vda9 May 8 00:39:27.786093 systemd[1]: Starting update-engine.service - Update Engine... May 8 00:39:27.811728 extend-filesystems[1551]: resize2fs 1.47.1 (20-May-2024) May 8 00:39:27.792047 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 8 00:39:27.796098 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 8 00:39:27.816553 jq[1543]: true May 8 00:39:27.808148 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 8 00:39:27.808478 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 8 00:39:27.820731 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1246) May 8 00:39:27.820767 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 8 00:39:27.808907 systemd[1]: motdgen.service: Deactivated successfully. May 8 00:39:27.809235 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 8 00:39:27.814377 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 8 00:39:27.814692 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 8 00:39:27.844882 jq[1557]: true May 8 00:39:27.875193 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 8 00:39:27.875241 update_engine[1538]: I20250508 00:39:27.841915 1538 main.cc:92] Flatcar Update Engine starting May 8 00:39:27.875241 update_engine[1538]: I20250508 00:39:27.845733 1538 update_check_scheduler.cc:74] Next update check in 2m33s May 8 00:39:27.850743 (ntainerd)[1558]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 8 00:39:27.875825 tar[1555]: linux-amd64/helm May 8 00:39:27.871232 systemd[1]: Started update-engine.service - Update Engine. May 8 00:39:27.873458 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 8 00:39:27.873485 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 8 00:39:27.874931 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 8 00:39:27.874947 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 8 00:39:27.876470 extend-filesystems[1551]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 8 00:39:27.876470 extend-filesystems[1551]: old_desc_blocks = 1, new_desc_blocks = 1 May 8 00:39:27.876470 extend-filesystems[1551]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 8 00:39:27.886004 extend-filesystems[1526]: Resized filesystem in /dev/vda9 May 8 00:39:27.876790 systemd-logind[1534]: Watching system buttons on /dev/input/event1 (Power Button) May 8 00:39:27.876811 systemd-logind[1534]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 8 00:39:27.878911 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 8 00:39:27.881306 systemd-logind[1534]: New seat seat0. 
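extend-filesystems grew /dev/vda9 online from 553472 to 1864699 4k blocks (1864699 * 4096 = 7637807104 bytes, about 7.1 GiB) while it stayed mounted at /. The equivalent manual steps, sketched with this log's device names (growpart is an assumed helper from cloud-utils; any partition-grow tool works):

    growpart /dev/vda 9    # grow partition 9 into the remaining disk space
    resize2fs /dev/vda9    # ext4 supports enlarging online, while mounted
    df -h /                # should now report the ~7.1 GiB root filesystem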
May 8 00:39:27.887278 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 8 00:39:27.889183 systemd[1]: Started systemd-logind.service - User Login Management. May 8 00:39:27.891238 systemd[1]: extend-filesystems.service: Deactivated successfully. May 8 00:39:27.891549 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 8 00:39:27.911537 bash[1585]: Updated "/home/core/.ssh/authorized_keys" May 8 00:39:27.917089 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 8 00:39:27.920911 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 8 00:39:27.923858 locksmithd[1587]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 8 00:39:28.034908 sshd_keygen[1552]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 8 00:39:28.046156 containerd[1558]: time="2025-05-08T00:39:28.046075640Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 May 8 00:39:28.059691 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 8 00:39:28.068345 containerd[1558]: time="2025-05-08T00:39:28.068180785Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 8 00:39:28.069248 systemd[1]: Starting issuegen.service - Generate /run/issue... May 8 00:39:28.071052 containerd[1558]: time="2025-05-08T00:39:28.071019549Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 8 00:39:28.071052 containerd[1558]: time="2025-05-08T00:39:28.071049431Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 8 00:39:28.071144 containerd[1558]: time="2025-05-08T00:39:28.071066397Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 8 00:39:28.071287 containerd[1558]: time="2025-05-08T00:39:28.071258341Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 8 00:39:28.071287 containerd[1558]: time="2025-05-08T00:39:28.071280589Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 8 00:39:28.071378 containerd[1558]: time="2025-05-08T00:39:28.071359752Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:39:28.071406 containerd[1558]: time="2025-05-08T00:39:28.071377697Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 8 00:39:28.071661 containerd[1558]: time="2025-05-08T00:39:28.071632990Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:39:28.071661 containerd[1558]: time="2025-05-08T00:39:28.071653288Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 May 8 00:39:28.071707 containerd[1558]: time="2025-05-08T00:39:28.071666629Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:39:28.071707 containerd[1558]: time="2025-05-08T00:39:28.071677646Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 8 00:39:28.071792 containerd[1558]: time="2025-05-08T00:39:28.071773573Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 8 00:39:28.072048 containerd[1558]: time="2025-05-08T00:39:28.072029502Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 8 00:39:28.072221 containerd[1558]: time="2025-05-08T00:39:28.072201088Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:39:28.072221 containerd[1558]: time="2025-05-08T00:39:28.072217923Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 8 00:39:28.072339 containerd[1558]: time="2025-05-08T00:39:28.072314435Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 8 00:39:28.072408 containerd[1558]: time="2025-05-08T00:39:28.072392498Z" level=info msg="metadata content store policy set" policy=shared May 8 00:39:28.077114 systemd[1]: issuegen.service: Deactivated successfully. May 8 00:39:28.077527 systemd[1]: Finished issuegen.service - Generate /run/issue. May 8 00:39:28.080746 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 8 00:39:28.082431 containerd[1558]: time="2025-05-08T00:39:28.082360983Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 8 00:39:28.082431 containerd[1558]: time="2025-05-08T00:39:28.082417394Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 8 00:39:28.082431 containerd[1558]: time="2025-05-08T00:39:28.082433340Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 8 00:39:28.082527 containerd[1558]: time="2025-05-08T00:39:28.082448840Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 8 00:39:28.082527 containerd[1558]: time="2025-05-08T00:39:28.082462837Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 8 00:39:28.082671 containerd[1558]: time="2025-05-08T00:39:28.082596462Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 8 00:39:28.083055 containerd[1558]: time="2025-05-08T00:39:28.083023200Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 8 00:39:28.083719 containerd[1558]: time="2025-05-08T00:39:28.083370481Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 8 00:39:28.083719 containerd[1558]: time="2025-05-08T00:39:28.083391406Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 May 8 00:39:28.083719 containerd[1558]: time="2025-05-08T00:39:28.083415279Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 8 00:39:28.083719 containerd[1558]: time="2025-05-08T00:39:28.083431902Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 8 00:39:28.083719 containerd[1558]: time="2025-05-08T00:39:28.083445333Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 8 00:39:28.083719 containerd[1558]: time="2025-05-08T00:39:28.083459824Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 8 00:39:28.083719 containerd[1558]: time="2025-05-08T00:39:28.083476628Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 8 00:39:28.083719 containerd[1558]: time="2025-05-08T00:39:28.083493281Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 8 00:39:28.083719 containerd[1558]: time="2025-05-08T00:39:28.083506762Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 8 00:39:28.083719 containerd[1558]: time="2025-05-08T00:39:28.083519487Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 8 00:39:28.083719 containerd[1558]: time="2025-05-08T00:39:28.083532130Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 8 00:39:28.083719 containerd[1558]: time="2025-05-08T00:39:28.083553964Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 8 00:39:28.083719 containerd[1558]: time="2025-05-08T00:39:28.083567274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 8 00:39:28.083719 containerd[1558]: time="2025-05-08T00:39:28.083580361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 8 00:39:28.084052 containerd[1558]: time="2025-05-08T00:39:28.083593591Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 8 00:39:28.084052 containerd[1558]: time="2025-05-08T00:39:28.083605891Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 8 00:39:28.084052 containerd[1558]: time="2025-05-08T00:39:28.083619484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 8 00:39:28.084052 containerd[1558]: time="2025-05-08T00:39:28.083639710Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 8 00:39:28.084052 containerd[1558]: time="2025-05-08T00:39:28.083655272Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 8 00:39:28.084052 containerd[1558]: time="2025-05-08T00:39:28.083669492Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 8 00:39:28.084052 containerd[1558]: time="2025-05-08T00:39:28.083684205Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 May 8 00:39:28.084052 containerd[1558]: time="2025-05-08T00:39:28.083696708Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 8 00:39:28.084052 containerd[1558]: time="2025-05-08T00:39:28.083708150Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 8 00:39:28.084052 containerd[1558]: time="2025-05-08T00:39:28.083720985Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 8 00:39:28.084052 containerd[1558]: time="2025-05-08T00:39:28.083736688Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 8 00:39:28.084052 containerd[1558]: time="2025-05-08T00:39:28.083754997Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 8 00:39:28.084052 containerd[1558]: time="2025-05-08T00:39:28.083766893Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 8 00:39:28.084052 containerd[1558]: time="2025-05-08T00:39:28.083779112Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 8 00:39:28.084298 containerd[1558]: time="2025-05-08T00:39:28.083845097Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 8 00:39:28.084298 containerd[1558]: time="2025-05-08T00:39:28.083862688Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 8 00:39:28.084298 containerd[1558]: time="2025-05-08T00:39:28.083873050Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 8 00:39:28.084298 containerd[1558]: time="2025-05-08T00:39:28.084023236Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 8 00:39:28.084298 containerd[1558]: time="2025-05-08T00:39:28.084037547Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 8 00:39:28.084298 containerd[1558]: time="2025-05-08T00:39:28.084050634Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 8 00:39:28.084298 containerd[1558]: time="2025-05-08T00:39:28.084060944Z" level=info msg="NRI interface is disabled by configuration." May 8 00:39:28.084298 containerd[1558]: time="2025-05-08T00:39:28.084072548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 8 00:39:28.084459 containerd[1558]: time="2025-05-08T00:39:28.084324519Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 8 00:39:28.084459 containerd[1558]: time="2025-05-08T00:39:28.084374133Z" level=info msg="Connect containerd service" May 8 00:39:28.084459 containerd[1558]: time="2025-05-08T00:39:28.084404601Z" level=info msg="using legacy CRI server" May 8 00:39:28.084459 containerd[1558]: time="2025-05-08T00:39:28.084411296Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 8 00:39:28.084639 containerd[1558]: time="2025-05-08T00:39:28.084517898Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 8 00:39:28.085135 containerd[1558]: time="2025-05-08T00:39:28.085045773Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 8 00:39:28.085370 
containerd[1558]: time="2025-05-08T00:39:28.085230204Z" level=info msg="Start subscribing containerd event" May 8 00:39:28.085370 containerd[1558]: time="2025-05-08T00:39:28.085316376Z" level=info msg="Start recovering state" May 8 00:39:28.085416 containerd[1558]: time="2025-05-08T00:39:28.085371202Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 8 00:39:28.085471 containerd[1558]: time="2025-05-08T00:39:28.085432247Z" level=info msg=serving... address=/run/containerd/containerd.sock May 8 00:39:28.088283 containerd[1558]: time="2025-05-08T00:39:28.087044099Z" level=info msg="Start event monitor" May 8 00:39:28.088283 containerd[1558]: time="2025-05-08T00:39:28.087078727Z" level=info msg="Start snapshots syncer" May 8 00:39:28.088283 containerd[1558]: time="2025-05-08T00:39:28.087095804Z" level=info msg="Start cni network conf syncer for default" May 8 00:39:28.088283 containerd[1558]: time="2025-05-08T00:39:28.087115517Z" level=info msg="Start streaming server" May 8 00:39:28.088283 containerd[1558]: time="2025-05-08T00:39:28.087190883Z" level=info msg="containerd successfully booted in 0.042295s" May 8 00:39:28.087435 systemd[1]: Started containerd.service - containerd container runtime. May 8 00:39:28.096927 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 8 00:39:28.106233 systemd[1]: Started getty@tty1.service - Getty on tty1. May 8 00:39:28.108552 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 8 00:39:28.109869 systemd[1]: Reached target getty.target - Login Prompts. May 8 00:39:28.216091 systemd-networkd[1243]: eth0: Gained IPv6LL May 8 00:39:28.219446 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 8 00:39:28.221257 systemd[1]: Reached target network-online.target - Network is Online. May 8 00:39:28.236256 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 8 00:39:28.239236 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:39:28.244163 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 8 00:39:28.252813 tar[1555]: linux-amd64/LICENSE May 8 00:39:28.252883 tar[1555]: linux-amd64/README.md May 8 00:39:28.267369 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 8 00:39:28.270478 systemd[1]: coreos-metadata.service: Deactivated successfully. May 8 00:39:28.270922 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 8 00:39:28.273862 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 8 00:39:28.277119 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 8 00:39:28.881952 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:39:28.883788 systemd[1]: Reached target multi-user.target - Multi-User System. May 8 00:39:28.885015 systemd[1]: Startup finished in 5.968s (kernel) + 4.053s (userspace) = 10.022s. 
May 8 00:39:28.888306 (kubelet)[1661]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 00:39:29.356979 kubelet[1661]: E0508 00:39:29.356894 1661 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:39:29.361846 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:39:29.362162 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:39:32.300371 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 8 00:39:32.316258 systemd[1]: Started sshd@0-10.0.0.76:22-10.0.0.1:55060.service - OpenSSH per-connection server daemon (10.0.0.1:55060). May 8 00:39:32.360970 sshd[1675]: Accepted publickey for core from 10.0.0.1 port 55060 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:39:32.363195 sshd[1675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:39:32.373122 systemd-logind[1534]: New session 1 of user core. May 8 00:39:32.374334 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 8 00:39:32.383174 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 8 00:39:32.395626 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 8 00:39:32.409370 systemd[1]: Starting user@500.service - User Manager for UID 500... May 8 00:39:32.413508 (systemd)[1680]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 8 00:39:32.538917 systemd[1680]: Queued start job for default target default.target. May 8 00:39:32.539395 systemd[1680]: Created slice app.slice - User Application Slice. May 8 00:39:32.539428 systemd[1680]: Reached target paths.target - Paths. May 8 00:39:32.539447 systemd[1680]: Reached target timers.target - Timers. May 8 00:39:32.549088 systemd[1680]: Starting dbus.socket - D-Bus User Message Bus Socket... May 8 00:39:32.557107 systemd[1680]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 8 00:39:32.557195 systemd[1680]: Reached target sockets.target - Sockets. May 8 00:39:32.557212 systemd[1680]: Reached target basic.target - Basic System. May 8 00:39:32.557262 systemd[1680]: Reached target default.target - Main User Target. May 8 00:39:32.557309 systemd[1680]: Startup finished in 134ms. May 8 00:39:32.557876 systemd[1]: Started user@500.service - User Manager for UID 500. May 8 00:39:32.559874 systemd[1]: Started session-1.scope - Session 1 of User core. May 8 00:39:32.624593 systemd[1]: Started sshd@1-10.0.0.76:22-10.0.0.1:55076.service - OpenSSH per-connection server daemon (10.0.0.1:55076). May 8 00:39:32.659404 sshd[1693]: Accepted publickey for core from 10.0.0.1 port 55076 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:39:32.661253 sshd[1693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:39:32.665774 systemd-logind[1534]: New session 2 of user core. May 8 00:39:32.676226 systemd[1]: Started session-2.scope - Session 2 of User core. 
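The kubelet exit above (status=1, /var/lib/kubelet/config.yaml missing) is the expected pre-bootstrap state: kubeadm init or kubeadm join writes that file later. As a sketch, the smallest hand-written stand-in that clears the load error (a real deployment should let kubeadm generate it):

    cat > /var/lib/kubelet/config.yaml <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    EOF
    systemctl restart kubelet    # the unit auto-restarts anyway, as seen later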
May 8 00:39:32.731134 sshd[1693]: pam_unix(sshd:session): session closed for user core May 8 00:39:32.749267 systemd[1]: Started sshd@2-10.0.0.76:22-10.0.0.1:55092.service - OpenSSH per-connection server daemon (10.0.0.1:55092). May 8 00:39:32.750055 systemd[1]: sshd@1-10.0.0.76:22-10.0.0.1:55076.service: Deactivated successfully. May 8 00:39:32.752095 systemd[1]: session-2.scope: Deactivated successfully. May 8 00:39:32.752789 systemd-logind[1534]: Session 2 logged out. Waiting for processes to exit. May 8 00:39:32.754170 systemd-logind[1534]: Removed session 2. May 8 00:39:32.784431 sshd[1698]: Accepted publickey for core from 10.0.0.1 port 55092 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:39:32.786296 sshd[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:39:32.790602 systemd-logind[1534]: New session 3 of user core. May 8 00:39:32.805271 systemd[1]: Started session-3.scope - Session 3 of User core. May 8 00:39:32.856051 sshd[1698]: pam_unix(sshd:session): session closed for user core May 8 00:39:32.865229 systemd[1]: Started sshd@3-10.0.0.76:22-10.0.0.1:55094.service - OpenSSH per-connection server daemon (10.0.0.1:55094). May 8 00:39:32.865712 systemd[1]: sshd@2-10.0.0.76:22-10.0.0.1:55092.service: Deactivated successfully. May 8 00:39:32.868145 systemd-logind[1534]: Session 3 logged out. Waiting for processes to exit. May 8 00:39:32.868801 systemd[1]: session-3.scope: Deactivated successfully. May 8 00:39:32.870627 systemd-logind[1534]: Removed session 3. May 8 00:39:32.898587 sshd[1706]: Accepted publickey for core from 10.0.0.1 port 55094 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:39:32.900299 sshd[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:39:32.904945 systemd-logind[1534]: New session 4 of user core. May 8 00:39:32.916304 systemd[1]: Started session-4.scope - Session 4 of User core. May 8 00:39:32.970994 sshd[1706]: pam_unix(sshd:session): session closed for user core May 8 00:39:32.979223 systemd[1]: Started sshd@4-10.0.0.76:22-10.0.0.1:55096.service - OpenSSH per-connection server daemon (10.0.0.1:55096). May 8 00:39:32.979797 systemd[1]: sshd@3-10.0.0.76:22-10.0.0.1:55094.service: Deactivated successfully. May 8 00:39:32.982101 systemd[1]: session-4.scope: Deactivated successfully. May 8 00:39:32.982792 systemd-logind[1534]: Session 4 logged out. Waiting for processes to exit. May 8 00:39:32.984025 systemd-logind[1534]: Removed session 4. May 8 00:39:33.011197 sshd[1715]: Accepted publickey for core from 10.0.0.1 port 55096 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:39:33.012740 sshd[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:39:33.016983 systemd-logind[1534]: New session 5 of user core. May 8 00:39:33.027248 systemd[1]: Started session-5.scope - Session 5 of User core. May 8 00:39:33.086726 sudo[1721]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 8 00:39:33.087098 sudo[1721]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:39:33.102804 sudo[1721]: pam_unix(sudo:session): session closed for user root May 8 00:39:33.104753 sshd[1715]: pam_unix(sshd:session): session closed for user core May 8 00:39:33.113327 systemd[1]: Started sshd@5-10.0.0.76:22-10.0.0.1:55102.service - OpenSSH per-connection server daemon (10.0.0.1:55102). 
May 8 00:39:33.114156 systemd[1]: sshd@4-10.0.0.76:22-10.0.0.1:55096.service: Deactivated successfully. May 8 00:39:33.116389 systemd[1]: session-5.scope: Deactivated successfully. May 8 00:39:33.117182 systemd-logind[1534]: Session 5 logged out. Waiting for processes to exit. May 8 00:39:33.118695 systemd-logind[1534]: Removed session 5. May 8 00:39:33.147124 sshd[1724]: Accepted publickey for core from 10.0.0.1 port 55102 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:39:33.148730 sshd[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:39:33.152832 systemd-logind[1534]: New session 6 of user core. May 8 00:39:33.166346 systemd[1]: Started session-6.scope - Session 6 of User core. May 8 00:39:33.221256 sudo[1731]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 8 00:39:33.221612 sudo[1731]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:39:33.225565 sudo[1731]: pam_unix(sudo:session): session closed for user root May 8 00:39:33.232119 sudo[1730]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 8 00:39:33.232567 sudo[1730]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:39:33.252223 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 8 00:39:33.254017 auditctl[1734]: No rules May 8 00:39:33.255406 systemd[1]: audit-rules.service: Deactivated successfully. May 8 00:39:33.255772 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 8 00:39:33.257858 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 8 00:39:33.289283 augenrules[1753]: No rules May 8 00:39:33.291348 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 8 00:39:33.292563 sudo[1730]: pam_unix(sudo:session): session closed for user root May 8 00:39:33.294821 sshd[1724]: pam_unix(sshd:session): session closed for user core May 8 00:39:33.301269 systemd[1]: Started sshd@6-10.0.0.76:22-10.0.0.1:55116.service - OpenSSH per-connection server daemon (10.0.0.1:55116). May 8 00:39:33.301975 systemd[1]: sshd@5-10.0.0.76:22-10.0.0.1:55102.service: Deactivated successfully. May 8 00:39:33.304189 systemd[1]: session-6.scope: Deactivated successfully. May 8 00:39:33.304775 systemd-logind[1534]: Session 6 logged out. Waiting for processes to exit. May 8 00:39:33.306111 systemd-logind[1534]: Removed session 6. May 8 00:39:33.336230 sshd[1759]: Accepted publickey for core from 10.0.0.1 port 55116 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:39:33.337866 sshd[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:39:33.342164 systemd-logind[1534]: New session 7 of user core. May 8 00:39:33.364366 systemd[1]: Started session-7.scope - Session 7 of User core. May 8 00:39:33.418688 sudo[1766]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 8 00:39:33.419039 sudo[1766]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:39:33.702183 systemd[1]: Starting docker.service - Docker Application Container Engine... 
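The audit-rules restart above loaded an empty rule set (auditctl and augenrules both answer "No rules") after the sudo'd removal of the SELinux and default rule files. A sketch of dropping in a replacement rule (the watch rule itself is illustrative):

    # Fragments in /etc/audit/rules.d/*.rules are merged by augenrules into
    # /etc/audit/audit.rules and pushed into the kernel.
    echo '-w /etc/kubernetes/ -p wa -k kube-config' > /etc/audit/rules.d/90-kube.rules
    augenrules --load
    auditctl -l    # lists the rules now active in the kernel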
May 8 00:39:33.702576 (dockerd)[1784]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 8 00:39:33.982331 dockerd[1784]: time="2025-05-08T00:39:33.982182430Z" level=info msg="Starting up" May 8 00:39:34.725156 dockerd[1784]: time="2025-05-08T00:39:34.725082355Z" level=info msg="Loading containers: start." May 8 00:39:34.848996 kernel: Initializing XFRM netlink socket May 8 00:39:34.928242 systemd-networkd[1243]: docker0: Link UP May 8 00:39:34.954713 dockerd[1784]: time="2025-05-08T00:39:34.954662970Z" level=info msg="Loading containers: done." May 8 00:39:34.970769 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1856915967-merged.mount: Deactivated successfully. May 8 00:39:34.971812 dockerd[1784]: time="2025-05-08T00:39:34.971765116Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 8 00:39:34.971899 dockerd[1784]: time="2025-05-08T00:39:34.971874762Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 May 8 00:39:34.972062 dockerd[1784]: time="2025-05-08T00:39:34.972034430Z" level=info msg="Daemon has completed initialization" May 8 00:39:35.017602 dockerd[1784]: time="2025-05-08T00:39:35.017342472Z" level=info msg="API listen on /run/docker.sock" May 8 00:39:35.017652 systemd[1]: Started docker.service - Docker Application Container Engine. May 8 00:39:35.768512 containerd[1558]: time="2025-05-08T00:39:35.768462828Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 8 00:39:36.380617 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3073898336.mount: Deactivated successfully. 
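dockerd 26.1.0 settled on overlay2 without native diff because this kernel enables CONFIG_OVERLAY_FS_REDIRECT_DIR; as the warning says, that mainly slows image builds, while running containers use ordinary overlayfs mounts either way. Reading the negotiated settings back, as a sketch:

    docker info --format 'driver={{.Driver}} server={{.ServerVersion}}'
    docker info | grep -A 5 'Storage Driver'    # includes the native-diff status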
May 8 00:39:37.374654 containerd[1558]: time="2025-05-08T00:39:37.374583708Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:37.375423 containerd[1558]: time="2025-05-08T00:39:37.375384119Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=32674873" May 8 00:39:37.377341 containerd[1558]: time="2025-05-08T00:39:37.377292451Z" level=info msg="ImageCreate event name:\"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:37.380396 containerd[1558]: time="2025-05-08T00:39:37.380348172Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:37.381382 containerd[1558]: time="2025-05-08T00:39:37.381331472Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"32671673\" in 1.612823787s" May 8 00:39:37.381382 containerd[1558]: time="2025-05-08T00:39:37.381364663Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\"" May 8 00:39:37.403422 containerd[1558]: time="2025-05-08T00:39:37.403375416Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" May 8 00:39:39.039332 containerd[1558]: time="2025-05-08T00:39:39.039246321Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:39.072208 containerd[1558]: time="2025-05-08T00:39:39.072147592Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=29617534" May 8 00:39:39.086868 containerd[1558]: time="2025-05-08T00:39:39.086807841Z" level=info msg="ImageCreate event name:\"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:39.094975 containerd[1558]: time="2025-05-08T00:39:39.094901645Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:39.096075 containerd[1558]: time="2025-05-08T00:39:39.096029762Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"31105907\" in 1.692611598s" May 8 00:39:39.096075 containerd[1558]: time="2025-05-08T00:39:39.096068666Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\"" May 8 00:39:39.119737 
containerd[1558]: time="2025-05-08T00:39:39.119693071Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" May 8 00:39:39.555561 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 8 00:39:39.565113 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:39:39.723301 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:39:39.729237 (kubelet)[2022]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 00:39:39.808774 kubelet[2022]: E0508 00:39:39.808601 2022 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:39:39.816458 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:39:39.816747 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:39:40.851548 containerd[1558]: time="2025-05-08T00:39:40.851472310Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:40.852228 containerd[1558]: time="2025-05-08T00:39:40.852169764Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=17903682" May 8 00:39:40.853363 containerd[1558]: time="2025-05-08T00:39:40.853332816Z" level=info msg="ImageCreate event name:\"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:40.856084 containerd[1558]: time="2025-05-08T00:39:40.856052266Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:40.857047 containerd[1558]: time="2025-05-08T00:39:40.857013688Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"19392073\" in 1.737283168s" May 8 00:39:40.857047 containerd[1558]: time="2025-05-08T00:39:40.857040351Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\"" May 8 00:39:40.878345 containerd[1558]: time="2025-05-08T00:39:40.878307968Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 8 00:39:42.030245 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3123432170.mount: Deactivated successfully. 
May 8 00:39:42.830372 containerd[1558]: time="2025-05-08T00:39:42.830299793Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:42.831345 containerd[1558]: time="2025-05-08T00:39:42.831307357Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=29185817" May 8 00:39:42.832699 containerd[1558]: time="2025-05-08T00:39:42.832649860Z" level=info msg="ImageCreate event name:\"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:42.834507 containerd[1558]: time="2025-05-08T00:39:42.834469652Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:42.835117 containerd[1558]: time="2025-05-08T00:39:42.835072421Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"29184836\" in 1.956725012s" May 8 00:39:42.835158 containerd[1558]: time="2025-05-08T00:39:42.835117802Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\"" May 8 00:39:42.856745 containerd[1558]: time="2025-05-08T00:39:42.856707686Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 8 00:39:43.434577 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3372895442.mount: Deactivated successfully. 
May 8 00:39:44.108093 containerd[1558]: time="2025-05-08T00:39:44.108037749Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:44.109209 containerd[1558]: time="2025-05-08T00:39:44.109174248Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" May 8 00:39:44.110452 containerd[1558]: time="2025-05-08T00:39:44.110407166Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:44.113162 containerd[1558]: time="2025-05-08T00:39:44.113134241Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:44.115431 containerd[1558]: time="2025-05-08T00:39:44.114493635Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.257755268s" May 8 00:39:44.116443 containerd[1558]: time="2025-05-08T00:39:44.115491493Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 8 00:39:44.137096 containerd[1558]: time="2025-05-08T00:39:44.137073413Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 8 00:39:44.635204 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount223815066.mount: Deactivated successfully. 
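coredns v1.11.1 is in, and pause:3.9 is fetched next. Note the skew: the CRI config dump earlier advertised SandboxImage registry.k8s.io/pause:3.8, so the 3.9 pull comes from whatever is driving these pulls rather than from containerd's own default; aligning the two avoids keeping a second pause image around. A sketch of the containerd side of that alignment:

    # /etc/containerd/config.toml (CRI plugin section):
    #   [plugins."io.containerd.grpc.v1.cri"]
    #     sandbox_image = "registry.k8s.io/pause:3.9"
    grep -n sandbox_image /etc/containerd/config.toml
    systemctl restart containerd    # the file is only read at startup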
May 8 00:39:44.646003 containerd[1558]: time="2025-05-08T00:39:44.645928976Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:44.646668 containerd[1558]: time="2025-05-08T00:39:44.646619980Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" May 8 00:39:44.649452 containerd[1558]: time="2025-05-08T00:39:44.649422235Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:44.653422 containerd[1558]: time="2025-05-08T00:39:44.653386901Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:44.654108 containerd[1558]: time="2025-05-08T00:39:44.654070425Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 516.970196ms" May 8 00:39:44.654108 containerd[1558]: time="2025-05-08T00:39:44.654102435Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" May 8 00:39:44.674226 containerd[1558]: time="2025-05-08T00:39:44.674206004Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 8 00:39:45.259358 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1192776785.mount: Deactivated successfully. May 8 00:39:48.419894 containerd[1558]: time="2025-05-08T00:39:48.419822404Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:48.420625 containerd[1558]: time="2025-05-08T00:39:48.420592473Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" May 8 00:39:48.421739 containerd[1558]: time="2025-05-08T00:39:48.421649697Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:48.428055 containerd[1558]: time="2025-05-08T00:39:48.427989998Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:39:48.429075 containerd[1558]: time="2025-05-08T00:39:48.429027605Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.754792172s" May 8 00:39:48.429128 containerd[1558]: time="2025-05-08T00:39:48.429073325Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" May 8 00:39:50.055585 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
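etcd 3.5.12-0 is the largest pull by far (57236178 bytes in about 3.75 s, roughly 15 MB/s), and meanwhile kubelet.service has reached its second scheduled restart. The unit's restart policy and counter can be read back, as a sketch (the example values are illustrative, not from this log):

    systemctl show kubelet -p Restart -p RestartUSec -p NRestarts
    # e.g. Restart=always, RestartUSec=10s, NRestarts=2 would match the
    # roughly ten-second failure cadence seen above.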
May 8 00:39:50.064110 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:39:50.211360 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:39:50.217347 (kubelet)[2252]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 00:39:50.263924 kubelet[2252]: E0508 00:39:50.263854 2252 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:39:50.268217 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:39:50.268595 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:39:51.174569 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:39:51.189167 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:39:51.209711 systemd[1]: Reloading requested from client PID 2269 ('systemctl') (unit session-7.scope)... May 8 00:39:51.209736 systemd[1]: Reloading... May 8 00:39:51.305988 zram_generator::config[2308]: No configuration found. May 8 00:39:52.892526 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:39:52.975964 systemd[1]: Reloading finished in 1765 ms. May 8 00:39:53.024513 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 8 00:39:53.024622 systemd[1]: kubelet.service: Failed with result 'signal'. May 8 00:39:53.025004 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:39:53.032370 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:39:53.182196 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:39:53.187117 (kubelet)[2368]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 8 00:39:53.228494 kubelet[2368]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:39:53.228494 kubelet[2368]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 8 00:39:53.228494 kubelet[2368]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
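This kubelet start finally loads its config and keeps running, but warns that --container-runtime-endpoint, --pod-infra-container-image, and --volume-plugin-dir belong in the config file. A sketch of the KubeletConfiguration v1beta1 equivalents (the endpoint is inferred from the containerd socket in the CRI config dump; the plugin dir matches the Flexvolume path logged just below):

    cat >> /var/lib/kubelet/config.yaml <<'EOF'
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
    EOF
    # --pod-infra-container-image has no config-file field; as the warning
    # says, newer kubelets take the sandbox image from the CRI runtime.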
May 8 00:39:53.228934 kubelet[2368]: I0508 00:39:53.228569 2368 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 00:39:53.481574 kubelet[2368]: I0508 00:39:53.481451 2368 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 8 00:39:53.481574 kubelet[2368]: I0508 00:39:53.481481 2368 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 00:39:53.481701 kubelet[2368]: I0508 00:39:53.481686 2368 server.go:927] "Client rotation is on, will bootstrap in background" May 8 00:39:54.255985 kubelet[2368]: I0508 00:39:54.255887 2368 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:39:54.256639 kubelet[2368]: E0508 00:39:54.256614 2368 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.76:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.76:6443: connect: connection refused May 8 00:39:54.267584 kubelet[2368]: I0508 00:39:54.267544 2368 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 8 00:39:54.268091 kubelet[2368]: I0508 00:39:54.268045 2368 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 00:39:54.268315 kubelet[2368]: I0508 00:39:54.268081 2368 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 8 00:39:54.268859 kubelet[2368]: I0508 00:39:54.268827 2368 topology_manager.go:138] "Creating topology manager with none policy" May 8 00:39:54.268859 kubelet[2368]: I0508 00:39:54.268852 2368 container_manager_linux.go:301] "Creating device plugin manager" May 8 00:39:54.269069 kubelet[2368]: I0508 00:39:54.269039 2368 state_mem.go:36] "Initialized new in-memory state store" May 8 
00:39:54.287341 kubelet[2368]: I0508 00:39:54.287301 2368 kubelet.go:400] "Attempting to sync node with API server" May 8 00:39:54.287341 kubelet[2368]: I0508 00:39:54.287327 2368 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 00:39:54.287432 kubelet[2368]: I0508 00:39:54.287353 2368 kubelet.go:312] "Adding apiserver pod source" May 8 00:39:54.287432 kubelet[2368]: I0508 00:39:54.287369 2368 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 00:39:54.287828 kubelet[2368]: W0508 00:39:54.287768 2368 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.76:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused May 8 00:39:54.287877 kubelet[2368]: E0508 00:39:54.287837 2368 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.76:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused May 8 00:39:54.288194 kubelet[2368]: W0508 00:39:54.288162 2368 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.76:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused May 8 00:39:54.288194 kubelet[2368]: E0508 00:39:54.288196 2368 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.76:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused May 8 00:39:54.290769 kubelet[2368]: I0508 00:39:54.290720 2368 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 8 00:39:54.292505 kubelet[2368]: I0508 00:39:54.292458 2368 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 00:39:54.292646 kubelet[2368]: W0508 00:39:54.292560 2368 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
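Every reflector above fails with "connect: connection refused" against https://10.0.0.76:6443 simply because nothing is serving the API yet; the kubelet is watching /etc/kubernetes/manifests for the static pods that will bring the control plane up. The matching checks, as a sketch:

    ls /etc/kubernetes/manifests/    # static pod path from the log above
    curl -sk https://10.0.0.76:6443/healthz || echo 'apiserver not listening yet'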
May 8 00:39:54.293472 kubelet[2368]: I0508 00:39:54.293372 2368 server.go:1264] "Started kubelet" May 8 00:39:54.293544 kubelet[2368]: I0508 00:39:54.293457 2368 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 8 00:39:54.294134 kubelet[2368]: I0508 00:39:54.293684 2368 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 00:39:54.296439 kubelet[2368]: I0508 00:39:54.296408 2368 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 00:39:54.299862 kubelet[2368]: I0508 00:39:54.299614 2368 server.go:455] "Adding debug handlers to kubelet server" May 8 00:39:54.300946 kubelet[2368]: E0508 00:39:54.300734 2368 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.76:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.76:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183d66604d88ea2f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-08 00:39:54.293332527 +0000 UTC m=+1.102117176,LastTimestamp:2025-05-08 00:39:54.293332527 +0000 UTC m=+1.102117176,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 8 00:39:54.302175 kubelet[2368]: I0508 00:39:54.301151 2368 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 00:39:54.302175 kubelet[2368]: E0508 00:39:54.301617 2368 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 00:39:54.302175 kubelet[2368]: I0508 00:39:54.301689 2368 volume_manager.go:291] "Starting Kubelet Volume Manager" May 8 00:39:54.302175 kubelet[2368]: I0508 00:39:54.301797 2368 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 8 00:39:54.302175 kubelet[2368]: I0508 00:39:54.301855 2368 reconciler.go:26] "Reconciler: start to sync state" May 8 00:39:54.302175 kubelet[2368]: W0508 00:39:54.302142 2368 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.76:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused May 8 00:39:54.302364 kubelet[2368]: E0508 00:39:54.302185 2368 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.76:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused May 8 00:39:54.303128 kubelet[2368]: E0508 00:39:54.302877 2368 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.76:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.76:6443: connect: connection refused" interval="200ms" May 8 00:39:54.303541 kubelet[2368]: I0508 00:39:54.303503 2368 factory.go:221] Registration of the systemd container factory successfully May 8 00:39:54.303651 kubelet[2368]: I0508 00:39:54.303589 2368 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 00:39:54.304852 kubelet[2368]: I0508 00:39:54.304827 2368 factory.go:221] Registration of the containerd container factory successfully May 8 00:39:54.319231 kubelet[2368]: I0508 00:39:54.319188 2368 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 00:39:54.321625 kubelet[2368]: I0508 00:39:54.320656 2368 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 8 00:39:54.321625 kubelet[2368]: I0508 00:39:54.320682 2368 status_manager.go:217] "Starting to sync pod status with apiserver" May 8 00:39:54.321625 kubelet[2368]: I0508 00:39:54.320698 2368 kubelet.go:2337] "Starting kubelet main sync loop" May 8 00:39:54.321625 kubelet[2368]: E0508 00:39:54.320741 2368 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 00:39:54.322007 kubelet[2368]: W0508 00:39:54.321982 2368 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.76:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused May 8 00:39:54.322237 kubelet[2368]: E0508 00:39:54.322076 2368 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.76:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused May 8 00:39:54.327079 kubelet[2368]: I0508 00:39:54.326905 2368 cpu_manager.go:214] "Starting CPU manager" policy="none" May 8 00:39:54.327079 kubelet[2368]: I0508 00:39:54.326921 2368 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 8 00:39:54.327079 kubelet[2368]: I0508 00:39:54.326938 2368 state_mem.go:36] "Initialized new in-memory state store" May 8 00:39:54.403337 kubelet[2368]: I0508 00:39:54.403307 2368 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 8 00:39:54.404053 kubelet[2368]: E0508 00:39:54.404006 2368 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.76:6443/api/v1/nodes\": dial tcp 10.0.0.76:6443: connect: connection refused" node="localhost" May 8 00:39:54.421142 kubelet[2368]: E0508 00:39:54.421106 2368 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 8 00:39:54.504126 kubelet[2368]: E0508 00:39:54.504063 2368 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.76:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.76:6443: connect: connection refused" interval="400ms" May 8 00:39:54.605524 kubelet[2368]: I0508 00:39:54.605479 2368 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 8 00:39:54.605869 kubelet[2368]: E0508 00:39:54.605836 2368 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.76:6443/api/v1/nodes\": dial tcp 10.0.0.76:6443: connect: connection refused" node="localhost" May 8 00:39:54.621944 kubelet[2368]: E0508 00:39:54.621902 2368 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 8 00:39:54.726226 kubelet[2368]: I0508 00:39:54.726155 2368 policy_none.go:49] "None policy: Start" May 8 00:39:54.726971 kubelet[2368]: I0508 00:39:54.726939 2368 memory_manager.go:170] "Starting memorymanager" policy="None" May 8 00:39:54.727012 kubelet[2368]: I0508 00:39:54.726988 2368 state_mem.go:35] "Initializing new in-memory state store" May 8 00:39:54.798788 kubelet[2368]: I0508 00:39:54.798752 2368 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 00:39:54.799058 kubelet[2368]: I0508 00:39:54.799022 
2368 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 00:39:54.799155 kubelet[2368]: I0508 00:39:54.799142 2368 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 00:39:54.800605 kubelet[2368]: E0508 00:39:54.800585 2368 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 8 00:39:54.905162 kubelet[2368]: E0508 00:39:54.905024 2368 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.76:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.76:6443: connect: connection refused" interval="800ms" May 8 00:39:55.008055 kubelet[2368]: I0508 00:39:55.008023 2368 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 8 00:39:55.008467 kubelet[2368]: E0508 00:39:55.008417 2368 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.76:6443/api/v1/nodes\": dial tcp 10.0.0.76:6443: connect: connection refused" node="localhost" May 8 00:39:55.022564 kubelet[2368]: I0508 00:39:55.022487 2368 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 8 00:39:55.023921 kubelet[2368]: I0508 00:39:55.023883 2368 topology_manager.go:215] "Topology Admit Handler" podUID="a529b3d703ccb6aa04ec261a0f6d57c1" podNamespace="kube-system" podName="kube-apiserver-localhost" May 8 00:39:55.025068 kubelet[2368]: I0508 00:39:55.025035 2368 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 8 00:39:55.106740 kubelet[2368]: I0508 00:39:55.106684 2368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:39:55.106740 kubelet[2368]: I0508 00:39:55.106749 2368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:39:55.106925 kubelet[2368]: I0508 00:39:55.106777 2368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a529b3d703ccb6aa04ec261a0f6d57c1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a529b3d703ccb6aa04ec261a0f6d57c1\") " pod="kube-system/kube-apiserver-localhost" May 8 00:39:55.106925 kubelet[2368]: I0508 00:39:55.106795 2368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a529b3d703ccb6aa04ec261a0f6d57c1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a529b3d703ccb6aa04ec261a0f6d57c1\") " pod="kube-system/kube-apiserver-localhost" May 8 00:39:55.106925 kubelet[2368]: I0508 00:39:55.106811 2368 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:39:55.106925 kubelet[2368]: I0508 00:39:55.106827 2368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:39:55.106925 kubelet[2368]: I0508 00:39:55.106843 2368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 8 00:39:55.107087 kubelet[2368]: I0508 00:39:55.106888 2368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a529b3d703ccb6aa04ec261a0f6d57c1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a529b3d703ccb6aa04ec261a0f6d57c1\") " pod="kube-system/kube-apiserver-localhost" May 8 00:39:55.107087 kubelet[2368]: I0508 00:39:55.106933 2368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:39:55.215065 kubelet[2368]: W0508 00:39:55.214865 2368 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.76:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused May 8 00:39:55.215065 kubelet[2368]: E0508 00:39:55.214931 2368 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.76:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused May 8 00:39:55.218256 kubelet[2368]: W0508 00:39:55.218212 2368 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.76:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused May 8 00:39:55.218256 kubelet[2368]: E0508 00:39:55.218257 2368 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.76:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused May 8 00:39:55.228892 kubelet[2368]: W0508 00:39:55.228842 2368 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.76:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused May 8 00:39:55.228892 kubelet[2368]: E0508 00:39:55.228877 2368 reflector.go:150] 
k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.76:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused May 8 00:39:55.329717 kubelet[2368]: E0508 00:39:55.329664 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:55.330286 containerd[1558]: time="2025-05-08T00:39:55.330239141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}" May 8 00:39:55.330623 containerd[1558]: time="2025-05-08T00:39:55.330589965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a529b3d703ccb6aa04ec261a0f6d57c1,Namespace:kube-system,Attempt:0,}" May 8 00:39:55.330670 kubelet[2368]: E0508 00:39:55.330276 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:55.333487 kubelet[2368]: E0508 00:39:55.333467 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:55.333754 containerd[1558]: time="2025-05-08T00:39:55.333727679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}" May 8 00:39:55.544656 kubelet[2368]: W0508 00:39:55.544486 2368 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.76:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused May 8 00:39:55.544656 kubelet[2368]: E0508 00:39:55.544554 2368 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.76:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused May 8 00:39:55.705883 kubelet[2368]: E0508 00:39:55.705801 2368 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.76:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.76:6443: connect: connection refused" interval="1.6s" May 8 00:39:55.810206 kubelet[2368]: I0508 00:39:55.810162 2368 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 8 00:39:55.810644 kubelet[2368]: E0508 00:39:55.810599 2368 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.76:6443/api/v1/nodes\": dial tcp 10.0.0.76:6443: connect: connection refused" node="localhost" May 8 00:39:55.923532 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2746959451.mount: Deactivated successfully. 
May 8 00:39:55.930899 containerd[1558]: time="2025-05-08T00:39:55.930849594Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:39:55.935222 containerd[1558]: time="2025-05-08T00:39:55.935172046Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" May 8 00:39:55.937618 containerd[1558]: time="2025-05-08T00:39:55.937584682Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:39:55.939367 containerd[1558]: time="2025-05-08T00:39:55.939327017Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:39:55.940463 containerd[1558]: time="2025-05-08T00:39:55.940370369Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:39:55.941627 containerd[1558]: time="2025-05-08T00:39:55.941582626Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 8 00:39:55.942814 containerd[1558]: time="2025-05-08T00:39:55.942777254Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 8 00:39:55.944426 containerd[1558]: time="2025-05-08T00:39:55.944393545Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:39:55.947260 containerd[1558]: time="2025-05-08T00:39:55.947229147Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 616.907092ms" May 8 00:39:55.949063 containerd[1558]: time="2025-05-08T00:39:55.949025865Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 615.244884ms" May 8 00:39:55.952796 containerd[1558]: time="2025-05-08T00:39:55.952748482Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 622.111309ms" May 8 00:39:56.086521 containerd[1558]: time="2025-05-08T00:39:56.086154243Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:39:56.086521 containerd[1558]: time="2025-05-08T00:39:56.086222675Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:39:56.086521 containerd[1558]: time="2025-05-08T00:39:56.086233427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:39:56.086521 containerd[1558]: time="2025-05-08T00:39:56.086333614Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:39:56.087227 containerd[1558]: time="2025-05-08T00:39:56.086772339Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:39:56.087227 containerd[1558]: time="2025-05-08T00:39:56.086809927Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:39:56.087227 containerd[1558]: time="2025-05-08T00:39:56.086840400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:39:56.088175 containerd[1558]: time="2025-05-08T00:39:56.088050629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:39:56.091364 containerd[1558]: time="2025-05-08T00:39:56.091289998Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:39:56.091430 containerd[1558]: time="2025-05-08T00:39:56.091368278Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:39:56.091430 containerd[1558]: time="2025-05-08T00:39:56.091395866Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:39:56.091582 containerd[1558]: time="2025-05-08T00:39:56.091494359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:39:56.147172 containerd[1558]: time="2025-05-08T00:39:56.147122712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"e4c4e3a52c0a96b3f242d9ddcb02a5f864b36399582a08fceeaea831f1c45416\"" May 8 00:39:56.148280 kubelet[2368]: E0508 00:39:56.148202 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:56.150027 containerd[1558]: time="2025-05-08T00:39:56.149924697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"4488297ce268699189679648ca9f8a7d015e9e98885e538ea9cde4c50ef5bad0\"" May 8 00:39:56.150690 containerd[1558]: time="2025-05-08T00:39:56.150595743Z" level=info msg="CreateContainer within sandbox \"e4c4e3a52c0a96b3f242d9ddcb02a5f864b36399582a08fceeaea831f1c45416\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 8 00:39:56.150810 kubelet[2368]: E0508 00:39:56.150750 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:56.152802 containerd[1558]: time="2025-05-08T00:39:56.152536500Z" level=info msg="CreateContainer within sandbox \"4488297ce268699189679648ca9f8a7d015e9e98885e538ea9cde4c50ef5bad0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 8 00:39:56.157304 containerd[1558]: time="2025-05-08T00:39:56.157270906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a529b3d703ccb6aa04ec261a0f6d57c1,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a9beac2da61a629ad4814025a8a88f143deb4dd7a3cc5a3b5d0bb6e227565a4\"" May 8 00:39:56.158834 kubelet[2368]: E0508 00:39:56.158814 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:56.162307 containerd[1558]: time="2025-05-08T00:39:56.162271281Z" level=info msg="CreateContainer within sandbox \"8a9beac2da61a629ad4814025a8a88f143deb4dd7a3cc5a3b5d0bb6e227565a4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 8 00:39:56.175579 containerd[1558]: time="2025-05-08T00:39:56.175515979Z" level=info msg="CreateContainer within sandbox \"e4c4e3a52c0a96b3f242d9ddcb02a5f864b36399582a08fceeaea831f1c45416\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"aad1480b35fca78f627446ed2b8a359440bec8eec3fd236dbfb4e60b8bd533fb\"" May 8 00:39:56.176273 containerd[1558]: time="2025-05-08T00:39:56.176237769Z" level=info msg="StartContainer for \"aad1480b35fca78f627446ed2b8a359440bec8eec3fd236dbfb4e60b8bd533fb\"" May 8 00:39:56.183756 containerd[1558]: time="2025-05-08T00:39:56.183654032Z" level=info msg="CreateContainer within sandbox \"4488297ce268699189679648ca9f8a7d015e9e98885e538ea9cde4c50ef5bad0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d32449196bbaf3b70312315e27a9e38a2e12781cd4ef89eff00efd135308fc3f\"" May 8 00:39:56.184331 containerd[1558]: time="2025-05-08T00:39:56.184300877Z" level=info msg="StartContainer for 
\"d32449196bbaf3b70312315e27a9e38a2e12781cd4ef89eff00efd135308fc3f\"" May 8 00:39:56.187873 containerd[1558]: time="2025-05-08T00:39:56.187799532Z" level=info msg="CreateContainer within sandbox \"8a9beac2da61a629ad4814025a8a88f143deb4dd7a3cc5a3b5d0bb6e227565a4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"77b4d0d81a3b4f278a531064d6bdf31f97b13a2f4fcd2861ec2a99e50b282753\"" May 8 00:39:56.188296 containerd[1558]: time="2025-05-08T00:39:56.188276135Z" level=info msg="StartContainer for \"77b4d0d81a3b4f278a531064d6bdf31f97b13a2f4fcd2861ec2a99e50b282753\"" May 8 00:39:56.250830 containerd[1558]: time="2025-05-08T00:39:56.250677974Z" level=info msg="StartContainer for \"aad1480b35fca78f627446ed2b8a359440bec8eec3fd236dbfb4e60b8bd533fb\" returns successfully" May 8 00:39:56.262483 containerd[1558]: time="2025-05-08T00:39:56.262437165Z" level=info msg="StartContainer for \"77b4d0d81a3b4f278a531064d6bdf31f97b13a2f4fcd2861ec2a99e50b282753\" returns successfully" May 8 00:39:56.268129 containerd[1558]: time="2025-05-08T00:39:56.268087091Z" level=info msg="StartContainer for \"d32449196bbaf3b70312315e27a9e38a2e12781cd4ef89eff00efd135308fc3f\" returns successfully" May 8 00:39:56.329921 kubelet[2368]: E0508 00:39:56.329873 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:56.332797 kubelet[2368]: E0508 00:39:56.332134 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:56.334813 kubelet[2368]: E0508 00:39:56.334780 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:57.345550 kubelet[2368]: E0508 00:39:57.344115 2368 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 8 00:39:57.347001 kubelet[2368]: E0508 00:39:57.346980 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:57.347822 kubelet[2368]: E0508 00:39:57.347808 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:57.411784 kubelet[2368]: I0508 00:39:57.411750 2368 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 8 00:39:57.432363 kubelet[2368]: I0508 00:39:57.432315 2368 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 8 00:39:57.443207 kubelet[2368]: E0508 00:39:57.443135 2368 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:39:57.543593 kubelet[2368]: E0508 00:39:57.543528 2368 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:39:58.291240 kubelet[2368]: I0508 00:39:58.291204 2368 apiserver.go:52] "Watching apiserver" May 8 00:39:58.302553 kubelet[2368]: I0508 00:39:58.302500 2368 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 8 00:39:59.621705 systemd[1]: Reloading requested from client PID 2641 
('systemctl') (unit session-7.scope)... May 8 00:39:59.621721 systemd[1]: Reloading... May 8 00:39:59.692980 zram_generator::config[2680]: No configuration found. May 8 00:39:59.822213 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:39:59.906477 systemd[1]: Reloading finished in 284 ms. May 8 00:39:59.943594 kubelet[2368]: I0508 00:39:59.943514 2368 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:39:59.943610 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:39:59.966368 systemd[1]: kubelet.service: Deactivated successfully. May 8 00:39:59.966812 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:39:59.973384 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:40:00.124553 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:40:00.129818 (kubelet)[2735]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 8 00:40:00.175051 kubelet[2735]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:40:00.175051 kubelet[2735]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 8 00:40:00.175051 kubelet[2735]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:40:00.175051 kubelet[2735]: I0508 00:40:00.175017 2735 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 00:40:00.180215 kubelet[2735]: I0508 00:40:00.180173 2735 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 8 00:40:00.180215 kubelet[2735]: I0508 00:40:00.180193 2735 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 00:40:00.180389 kubelet[2735]: I0508 00:40:00.180339 2735 server.go:927] "Client rotation is on, will bootstrap in background" May 8 00:40:00.181504 kubelet[2735]: I0508 00:40:00.181481 2735 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 8 00:40:00.182602 kubelet[2735]: I0508 00:40:00.182574 2735 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:40:00.191029 kubelet[2735]: I0508 00:40:00.190232 2735 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 8 00:40:00.191029 kubelet[2735]: I0508 00:40:00.190756 2735 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 00:40:00.191158 kubelet[2735]: I0508 00:40:00.190790 2735 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 8 00:40:00.191158 kubelet[2735]: I0508 00:40:00.191093 2735 topology_manager.go:138] "Creating topology manager with none policy" May 8 00:40:00.191158 kubelet[2735]: I0508 00:40:00.191106 2735 container_manager_linux.go:301] "Creating device plugin manager" May 8 00:40:00.191158 kubelet[2735]: I0508 00:40:00.191159 2735 state_mem.go:36] "Initialized new in-memory state store" May 8 00:40:00.191391 kubelet[2735]: I0508 00:40:00.191359 2735 kubelet.go:400] "Attempting to sync node with API server" May 8 00:40:00.191391 kubelet[2735]: I0508 00:40:00.191385 2735 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 00:40:00.191440 kubelet[2735]: I0508 00:40:00.191413 2735 kubelet.go:312] "Adding apiserver pod source" May 8 00:40:00.191440 kubelet[2735]: I0508 00:40:00.191431 2735 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 00:40:00.192819 kubelet[2735]: I0508 00:40:00.192793 2735 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 8 00:40:00.192982 kubelet[2735]: I0508 00:40:00.192971 2735 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 00:40:00.193359 kubelet[2735]: I0508 00:40:00.193341 2735 server.go:1264] "Started kubelet" May 8 00:40:00.194853 kubelet[2735]: I0508 00:40:00.194776 2735 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 8 00:40:00.195095 kubelet[2735]: I0508 00:40:00.194892 2735 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 00:40:00.195268 kubelet[2735]: I0508 00:40:00.195250 2735 server.go:227] 
"Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 00:40:00.195298 kubelet[2735]: I0508 00:40:00.195278 2735 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 00:40:00.196499 kubelet[2735]: I0508 00:40:00.196158 2735 server.go:455] "Adding debug handlers to kubelet server" May 8 00:40:00.198130 kubelet[2735]: I0508 00:40:00.197799 2735 volume_manager.go:291] "Starting Kubelet Volume Manager" May 8 00:40:00.198130 kubelet[2735]: I0508 00:40:00.197886 2735 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 8 00:40:00.198130 kubelet[2735]: I0508 00:40:00.198031 2735 reconciler.go:26] "Reconciler: start to sync state" May 8 00:40:00.200603 kubelet[2735]: E0508 00:40:00.199122 2735 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:40:00.200603 kubelet[2735]: I0508 00:40:00.200565 2735 factory.go:221] Registration of the systemd container factory successfully May 8 00:40:00.200818 kubelet[2735]: I0508 00:40:00.200669 2735 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 00:40:00.203092 kubelet[2735]: E0508 00:40:00.202444 2735 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 00:40:00.203164 kubelet[2735]: I0508 00:40:00.203135 2735 factory.go:221] Registration of the containerd container factory successfully May 8 00:40:00.207069 kubelet[2735]: I0508 00:40:00.206669 2735 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 00:40:00.208704 kubelet[2735]: I0508 00:40:00.208255 2735 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 8 00:40:00.208704 kubelet[2735]: I0508 00:40:00.208284 2735 status_manager.go:217] "Starting to sync pod status with apiserver" May 8 00:40:00.208704 kubelet[2735]: I0508 00:40:00.208300 2735 kubelet.go:2337] "Starting kubelet main sync loop" May 8 00:40:00.208704 kubelet[2735]: E0508 00:40:00.208338 2735 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 00:40:00.252122 kubelet[2735]: I0508 00:40:00.252091 2735 cpu_manager.go:214] "Starting CPU manager" policy="none" May 8 00:40:00.252122 kubelet[2735]: I0508 00:40:00.252109 2735 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 8 00:40:00.252122 kubelet[2735]: I0508 00:40:00.252129 2735 state_mem.go:36] "Initialized new in-memory state store" May 8 00:40:00.252312 kubelet[2735]: I0508 00:40:00.252254 2735 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 8 00:40:00.252312 kubelet[2735]: I0508 00:40:00.252264 2735 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 8 00:40:00.252312 kubelet[2735]: I0508 00:40:00.252283 2735 policy_none.go:49] "None policy: Start" May 8 00:40:00.253057 kubelet[2735]: I0508 00:40:00.253036 2735 memory_manager.go:170] "Starting memorymanager" policy="None" May 8 00:40:00.253057 kubelet[2735]: I0508 00:40:00.253058 2735 state_mem.go:35] "Initializing new in-memory state store" May 8 00:40:00.253179 kubelet[2735]: I0508 00:40:00.253164 2735 state_mem.go:75] "Updated machine memory state" May 8 00:40:00.254928 kubelet[2735]: I0508 00:40:00.254904 2735 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 00:40:00.255285 kubelet[2735]: I0508 00:40:00.255097 2735 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 00:40:00.255285 kubelet[2735]: I0508 00:40:00.255185 2735 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 00:40:00.303909 kubelet[2735]: I0508 00:40:00.303880 2735 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 8 00:40:00.308843 kubelet[2735]: I0508 00:40:00.308798 2735 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 8 00:40:00.308931 kubelet[2735]: I0508 00:40:00.308881 2735 topology_manager.go:215] "Topology Admit Handler" podUID="a529b3d703ccb6aa04ec261a0f6d57c1" podNamespace="kube-system" podName="kube-apiserver-localhost" May 8 00:40:00.308977 kubelet[2735]: I0508 00:40:00.308935 2735 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 8 00:40:00.312056 kubelet[2735]: I0508 00:40:00.311418 2735 kubelet_node_status.go:112] "Node was previously registered" node="localhost" May 8 00:40:00.312056 kubelet[2735]: I0508 00:40:00.311492 2735 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 8 00:40:00.398666 kubelet[2735]: I0508 00:40:00.398622 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a529b3d703ccb6aa04ec261a0f6d57c1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a529b3d703ccb6aa04ec261a0f6d57c1\") " pod="kube-system/kube-apiserver-localhost" May 8 00:40:00.398666 
kubelet[2735]: I0508 00:40:00.398668 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:40:00.398942 kubelet[2735]: I0508 00:40:00.398693 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:40:00.398942 kubelet[2735]: I0508 00:40:00.398711 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 8 00:40:00.398942 kubelet[2735]: I0508 00:40:00.398729 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a529b3d703ccb6aa04ec261a0f6d57c1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a529b3d703ccb6aa04ec261a0f6d57c1\") " pod="kube-system/kube-apiserver-localhost" May 8 00:40:00.398942 kubelet[2735]: I0508 00:40:00.398764 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a529b3d703ccb6aa04ec261a0f6d57c1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a529b3d703ccb6aa04ec261a0f6d57c1\") " pod="kube-system/kube-apiserver-localhost" May 8 00:40:00.398942 kubelet[2735]: I0508 00:40:00.398780 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:40:00.399080 kubelet[2735]: I0508 00:40:00.398796 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:40:00.399080 kubelet[2735]: I0508 00:40:00.398812 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:40:00.618102 kubelet[2735]: E0508 00:40:00.617743 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:00.618219 kubelet[2735]: E0508 00:40:00.618174 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:00.618219 kubelet[2735]: E0508 00:40:00.618195 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:01.195076 kubelet[2735]: I0508 00:40:01.195035 2735 apiserver.go:52] "Watching apiserver" May 8 00:40:01.199026 kubelet[2735]: I0508 00:40:01.198927 2735 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 8 00:40:01.224116 kubelet[2735]: E0508 00:40:01.224083 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:01.224244 kubelet[2735]: E0508 00:40:01.224211 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:01.231358 kubelet[2735]: E0508 00:40:01.231234 2735 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 8 00:40:01.231793 kubelet[2735]: E0508 00:40:01.231775 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:01.308183 kubelet[2735]: I0508 00:40:01.308102 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.308082948 podStartE2EDuration="1.308082948s" podCreationTimestamp="2025-05-08 00:40:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:40:01.27557277 +0000 UTC m=+1.141254758" watchObservedRunningTime="2025-05-08 00:40:01.308082948 +0000 UTC m=+1.173764946" May 8 00:40:01.547329 kubelet[2735]: I0508 00:40:01.546500 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.546479886 podStartE2EDuration="1.546479886s" podCreationTimestamp="2025-05-08 00:40:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:40:01.308573315 +0000 UTC m=+1.174255313" watchObservedRunningTime="2025-05-08 00:40:01.546479886 +0000 UTC m=+1.412161884" May 8 00:40:01.566138 kubelet[2735]: I0508 00:40:01.566065 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.566041095 podStartE2EDuration="1.566041095s" podCreationTimestamp="2025-05-08 00:40:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:40:01.547467095 +0000 UTC m=+1.413149093" watchObservedRunningTime="2025-05-08 00:40:01.566041095 +0000 UTC m=+1.431723174" May 8 00:40:02.227755 kubelet[2735]: E0508 00:40:02.227701 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:02.230041 kubelet[2735]: E0508 00:40:02.229992 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:03.105192 kubelet[2735]: E0508 00:40:03.105134 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:05.025794 sudo[1766]: pam_unix(sudo:session): session closed for user root May 8 00:40:05.027797 sshd[1759]: pam_unix(sshd:session): session closed for user core May 8 00:40:05.032596 systemd[1]: sshd@6-10.0.0.76:22-10.0.0.1:55116.service: Deactivated successfully. May 8 00:40:05.035585 systemd[1]: session-7.scope: Deactivated successfully. May 8 00:40:05.036729 systemd-logind[1534]: Session 7 logged out. Waiting for processes to exit. May 8 00:40:05.037851 systemd-logind[1534]: Removed session 7. May 8 00:40:05.925519 kubelet[2735]: E0508 00:40:05.925466 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:11.599914 kubelet[2735]: E0508 00:40:11.599876 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:12.867054 update_engine[1538]: I20250508 00:40:12.866909 1538 update_attempter.cc:509] Updating boot flags... May 8 00:40:12.895007 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2830) May 8 00:40:12.926575 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2829) May 8 00:40:12.926673 containerd[1558]: time="2025-05-08T00:40:12.925701298Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 8 00:40:12.927052 kubelet[2735]: I0508 00:40:12.925217 2735 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 8 00:40:12.927052 kubelet[2735]: I0508 00:40:12.925897 2735 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 8 00:40:12.967985 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2829)
May 8 00:40:13.108702 kubelet[2735]: E0508 00:40:13.108663 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:40:13.834566 kubelet[2735]: I0508 00:40:13.834323 2735 topology_manager.go:215] "Topology Admit Handler" podUID="2d51d9a1-0634-4ad0-886c-548e122b6409" podNamespace="kube-system" podName="kube-proxy-cp8vp"
May 8 00:40:13.880030 kubelet[2735]: I0508 00:40:13.879982 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2d51d9a1-0634-4ad0-886c-548e122b6409-xtables-lock\") pod \"kube-proxy-cp8vp\" (UID: \"2d51d9a1-0634-4ad0-886c-548e122b6409\") " pod="kube-system/kube-proxy-cp8vp"
May 8 00:40:13.880030 kubelet[2735]: I0508 00:40:13.880021 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2d51d9a1-0634-4ad0-886c-548e122b6409-lib-modules\") pod \"kube-proxy-cp8vp\" (UID: \"2d51d9a1-0634-4ad0-886c-548e122b6409\") " pod="kube-system/kube-proxy-cp8vp"
May 8 00:40:13.880030 kubelet[2735]: I0508 00:40:13.880039 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2d51d9a1-0634-4ad0-886c-548e122b6409-kube-proxy\") pod \"kube-proxy-cp8vp\" (UID: \"2d51d9a1-0634-4ad0-886c-548e122b6409\") " pod="kube-system/kube-proxy-cp8vp"
May 8 00:40:13.880219 kubelet[2735]: I0508 00:40:13.880058 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzqqx\" (UniqueName: \"kubernetes.io/projected/2d51d9a1-0634-4ad0-886c-548e122b6409-kube-api-access-hzqqx\") pod \"kube-proxy-cp8vp\" (UID: \"2d51d9a1-0634-4ad0-886c-548e122b6409\") " pod="kube-system/kube-proxy-cp8vp"
May 8 00:40:14.139060 kubelet[2735]: E0508 00:40:14.138870 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:40:14.147155 containerd[1558]: time="2025-05-08T00:40:14.147102654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cp8vp,Uid:2d51d9a1-0634-4ad0-886c-548e122b6409,Namespace:kube-system,Attempt:0,}"
May 8 00:40:14.152240 kubelet[2735]: I0508 00:40:14.152195 2735 topology_manager.go:215] "Topology Admit Handler" podUID="30ff84ac-16d0-4c9d-a2f2-c5d0622d0f97" podNamespace="tigera-operator" podName="tigera-operator-797db67f8-9wqlz"
May 8 00:40:14.181865 kubelet[2735]: I0508 00:40:14.181831 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lzz2\" (UniqueName: \"kubernetes.io/projected/30ff84ac-16d0-4c9d-a2f2-c5d0622d0f97-kube-api-access-9lzz2\") pod \"tigera-operator-797db67f8-9wqlz\" (UID: \"30ff84ac-16d0-4c9d-a2f2-c5d0622d0f97\") " pod="tigera-operator/tigera-operator-797db67f8-9wqlz"
May 8 00:40:14.181865 kubelet[2735]: I0508 00:40:14.181865 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/30ff84ac-16d0-4c9d-a2f2-c5d0622d0f97-var-lib-calico\") pod \"tigera-operator-797db67f8-9wqlz\" (UID: \"30ff84ac-16d0-4c9d-a2f2-c5d0622d0f97\") " pod="tigera-operator/tigera-operator-797db67f8-9wqlz"
May 8 00:40:14.366258 containerd[1558]: time="2025-05-08T00:40:14.366129413Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:40:14.366258 containerd[1558]: time="2025-05-08T00:40:14.366223164Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:40:14.366258 containerd[1558]: time="2025-05-08T00:40:14.366239265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:40:14.366472 containerd[1558]: time="2025-05-08T00:40:14.366383525Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:40:14.407828 containerd[1558]: time="2025-05-08T00:40:14.407693265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cp8vp,Uid:2d51d9a1-0634-4ad0-886c-548e122b6409,Namespace:kube-system,Attempt:0,} returns sandbox id \"554b538dc68a6ae182d95d4a1a86e4408e2c63fc1a10250f0915786042af9bff\""
May 8 00:40:14.408640 kubelet[2735]: E0508 00:40:14.408610 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:40:14.410709 containerd[1558]: time="2025-05-08T00:40:14.410668563Z" level=info msg="CreateContainer within sandbox \"554b538dc68a6ae182d95d4a1a86e4408e2c63fc1a10250f0915786042af9bff\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 8 00:40:14.456840 containerd[1558]: time="2025-05-08T00:40:14.456779495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-9wqlz,Uid:30ff84ac-16d0-4c9d-a2f2-c5d0622d0f97,Namespace:tigera-operator,Attempt:0,}"
May 8 00:40:14.734705 containerd[1558]: time="2025-05-08T00:40:14.734516129Z" level=info msg="CreateContainer within sandbox \"554b538dc68a6ae182d95d4a1a86e4408e2c63fc1a10250f0915786042af9bff\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a4dda9c1053d4c917c5e345c5f88ca9e968084b603d6748519a929aef115065c\""
May 8 00:40:14.735369 containerd[1558]: time="2025-05-08T00:40:14.735265780Z" level=info msg="StartContainer for \"a4dda9c1053d4c917c5e345c5f88ca9e968084b603d6748519a929aef115065c\""
May 8 00:40:14.755207 containerd[1558]: time="2025-05-08T00:40:14.753978055Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:40:14.755207 containerd[1558]: time="2025-05-08T00:40:14.754076045Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:40:14.755207 containerd[1558]: time="2025-05-08T00:40:14.754107516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:40:14.755207 containerd[1558]: time="2025-05-08T00:40:14.755057094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:40:14.809359 containerd[1558]: time="2025-05-08T00:40:14.809290946Z" level=info msg="StartContainer for \"a4dda9c1053d4c917c5e345c5f88ca9e968084b603d6748519a929aef115065c\" returns successfully"
May 8 00:40:14.811838 containerd[1558]: time="2025-05-08T00:40:14.811729746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-9wqlz,Uid:30ff84ac-16d0-4c9d-a2f2-c5d0622d0f97,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d0ad9e69fd9c8c110f89bdd4668913158c8abc0da9ff5489f15d55ebbe3c38c2\""
May 8 00:40:14.813558 containerd[1558]: time="2025-05-08T00:40:14.813509390Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\""
May 8 00:40:15.248423 kubelet[2735]: E0508 00:40:15.248379 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:40:15.929771 kubelet[2735]: E0508 00:40:15.929731 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:40:15.939289 kubelet[2735]: I0508 00:40:15.939222 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cp8vp" podStartSLOduration=2.93920007 podStartE2EDuration="2.93920007s" podCreationTimestamp="2025-05-08 00:40:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:40:15.409925293 +0000 UTC m=+15.275607321" watchObservedRunningTime="2025-05-08 00:40:15.93920007 +0000 UTC m=+15.804882068"
May 8 00:40:17.050712 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2591976986.mount: Deactivated successfully.
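
The repeated kubelet dns.go:153 errors above are informational rather than fatal: the glibc resolver honors at most three nameserver entries from /etc/resolv.conf, so when the node's file lists more, kubelet warns and applies only the first three (here 1.1.1.1, 1.0.0.1 and 8.8.8.8). A minimal sketch of that truncation follows; the fourth server 9.9.9.9 is hypothetical, since the omitted entries never appear in the log, and the names are illustrative rather than kubelet's actual identifiers.

```python
# Sketch of the check behind the "Nameserver limits exceeded" events above.
# The limit of 3 mirrors the glibc resolver's MAXNS constant, which is why
# kubelet enforces it when building a container's resolv.conf.

MAX_NAMESERVERS = 3

def applied_nameservers(resolv_conf_text: str) -> list[str]:
    """Return the nameserver entries a glibc-style resolver would actually use."""
    servers = []
    for line in resolv_conf_text.splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[0] == "nameserver":
            servers.append(parts[1])
    if len(servers) > MAX_NAMESERVERS:
        # kubelet's equivalent of this branch emits the warning seen in the
        # log and keeps only the first three entries.
        print("Nameserver limits exceeded, applied nameserver line is:",
              " ".join(servers[:MAX_NAMESERVERS]))
    return servers[:MAX_NAMESERVERS]

applied_nameservers(
    "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
)
```

The warning repeats throughout this boot because kubelet re-derives DNS settings for every pod it sets up; fixing the node's resolv.conf silences it.
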
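
The pod_startup_latency_tracker entry above reports podStartSLOduration=2.93920007s for kube-proxy-cp8vp with zero-valued pull timestamps, while the tigera-operator entry further below reports podStartSLOduration=3.326552573 against a podStartE2EDuration of 6.348001129s. Both are consistent with the SLO duration being the end-to-end duration (watchObservedRunningTime minus podCreationTimestamp) less the time spent pulling images; that relationship is inferred from these two log entries, not quoted from kubelet's source. A sketch of the arithmetic using the tigera-operator timestamps:

```python
# Reproducing the startup-duration arithmetic from the two "Observed pod
# startup duration" entries in this log. Assumed relationship:
#   podStartE2EDuration = watchObservedRunningTime - podCreationTimestamp
#   podStartSLOduration = podStartE2EDuration
#                         - (lastFinishedPulling - firstStartedPulling)
# Timestamps are truncated to microseconds, so results agree to within 1us.

from datetime import datetime, timezone

def ts(s: str) -> datetime:
    # Keep date + time + 6 fractional digits; %f parses at most microseconds.
    return datetime.strptime(s[:26], "%Y-%m-%d %H:%M:%S.%f").replace(tzinfo=timezone.utc)

created    = ts("2025-05-08 00:40:13.000000000")  # podCreationTimestamp
observed   = ts("2025-05-08 00:40:19.348001129")  # watchObservedRunningTime
pull_start = ts("2025-05-08 00:40:14.812819195")  # firstStartedPulling
pull_end   = ts("2025-05-08 00:40:17.834267751")  # lastFinishedPulling

e2e = (observed - created).total_seconds()
slo = e2e - (pull_end - pull_start).total_seconds()
print(f"E2E={e2e:.6f}s SLO={slo:.6f}s")  # expect ~6.348001s and ~3.326553s
```

For kube-proxy the two numbers coincide because its image was already present, so firstStartedPulling and lastFinishedPulling stay at the zero time and no pull interval is subtracted.
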
May 8 00:40:17.688314 containerd[1558]: time="2025-05-08T00:40:17.688260784Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:40:17.715249 containerd[1558]: time="2025-05-08T00:40:17.715194836Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662"
May 8 00:40:17.771065 containerd[1558]: time="2025-05-08T00:40:17.771031472Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:40:17.820121 containerd[1558]: time="2025-05-08T00:40:17.820046742Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:40:17.820704 containerd[1558]: time="2025-05-08T00:40:17.820658481Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 3.007109925s"
May 8 00:40:17.820766 containerd[1558]: time="2025-05-08T00:40:17.820703478Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\""
May 8 00:40:17.836170 containerd[1558]: time="2025-05-08T00:40:17.836141462Z" level=info msg="CreateContainer within sandbox \"d0ad9e69fd9c8c110f89bdd4668913158c8abc0da9ff5489f15d55ebbe3c38c2\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
May 8 00:40:18.308537 containerd[1558]: time="2025-05-08T00:40:18.308469032Z" level=info msg="CreateContainer within sandbox \"d0ad9e69fd9c8c110f89bdd4668913158c8abc0da9ff5489f15d55ebbe3c38c2\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e7c170d9c55a2842776a4219dc1e37bd9d544fdfcfce882d3fd79f14ee335c40\""
May 8 00:40:18.308893 containerd[1558]: time="2025-05-08T00:40:18.308859364Z" level=info msg="StartContainer for \"e7c170d9c55a2842776a4219dc1e37bd9d544fdfcfce882d3fd79f14ee335c40\""
May 8 00:40:18.423230 containerd[1558]: time="2025-05-08T00:40:18.423163310Z" level=info msg="StartContainer for \"e7c170d9c55a2842776a4219dc1e37bd9d544fdfcfce882d3fd79f14ee335c40\" returns successfully"
May 8 00:40:19.348101 kubelet[2735]: I0508 00:40:19.348021 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-797db67f8-9wqlz" podStartSLOduration=3.326552573 podStartE2EDuration="6.348001129s" podCreationTimestamp="2025-05-08 00:40:13 +0000 UTC" firstStartedPulling="2025-05-08 00:40:14.812819195 +0000 UTC m=+14.678501193" lastFinishedPulling="2025-05-08 00:40:17.834267751 +0000 UTC m=+17.699949749" observedRunningTime="2025-05-08 00:40:19.347730799 +0000 UTC m=+19.213412807" watchObservedRunningTime="2025-05-08 00:40:19.348001129 +0000 UTC m=+19.213683117"
May 8 00:40:21.335399 kubelet[2735]: I0508 00:40:21.335319 2735 topology_manager.go:215] "Topology Admit Handler" podUID="08aec900-452a-4e8f-bb3d-572b19b9d1e3" podNamespace="calico-system" podName="calico-typha-564fdc9c4f-45wwg"
May 8 00:40:21.426412 kubelet[2735]: I0508 00:40:21.426339 2735 topology_manager.go:215] "Topology Admit Handler"
podUID="7efa523b-d66e-48fe-ab26-c2ef9ef2f4cf" podNamespace="calico-system" podName="calico-node-gwprl" May 8 00:40:21.521072 kubelet[2735]: I0508 00:40:21.520694 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7efa523b-d66e-48fe-ab26-c2ef9ef2f4cf-lib-modules\") pod \"calico-node-gwprl\" (UID: \"7efa523b-d66e-48fe-ab26-c2ef9ef2f4cf\") " pod="calico-system/calico-node-gwprl" May 8 00:40:21.521072 kubelet[2735]: I0508 00:40:21.520740 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7efa523b-d66e-48fe-ab26-c2ef9ef2f4cf-var-run-calico\") pod \"calico-node-gwprl\" (UID: \"7efa523b-d66e-48fe-ab26-c2ef9ef2f4cf\") " pod="calico-system/calico-node-gwprl" May 8 00:40:21.521072 kubelet[2735]: I0508 00:40:21.520767 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmshb\" (UniqueName: \"kubernetes.io/projected/08aec900-452a-4e8f-bb3d-572b19b9d1e3-kube-api-access-jmshb\") pod \"calico-typha-564fdc9c4f-45wwg\" (UID: \"08aec900-452a-4e8f-bb3d-572b19b9d1e3\") " pod="calico-system/calico-typha-564fdc9c4f-45wwg" May 8 00:40:21.521072 kubelet[2735]: I0508 00:40:21.520783 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7efa523b-d66e-48fe-ab26-c2ef9ef2f4cf-policysync\") pod \"calico-node-gwprl\" (UID: \"7efa523b-d66e-48fe-ab26-c2ef9ef2f4cf\") " pod="calico-system/calico-node-gwprl" May 8 00:40:21.521072 kubelet[2735]: I0508 00:40:21.520798 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7efa523b-d66e-48fe-ab26-c2ef9ef2f4cf-cni-log-dir\") pod \"calico-node-gwprl\" (UID: \"7efa523b-d66e-48fe-ab26-c2ef9ef2f4cf\") " pod="calico-system/calico-node-gwprl" May 8 00:40:21.521392 kubelet[2735]: I0508 00:40:21.520818 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7efa523b-d66e-48fe-ab26-c2ef9ef2f4cf-tigera-ca-bundle\") pod \"calico-node-gwprl\" (UID: \"7efa523b-d66e-48fe-ab26-c2ef9ef2f4cf\") " pod="calico-system/calico-node-gwprl" May 8 00:40:21.521392 kubelet[2735]: I0508 00:40:21.520833 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7efa523b-d66e-48fe-ab26-c2ef9ef2f4cf-node-certs\") pod \"calico-node-gwprl\" (UID: \"7efa523b-d66e-48fe-ab26-c2ef9ef2f4cf\") " pod="calico-system/calico-node-gwprl" May 8 00:40:21.521392 kubelet[2735]: I0508 00:40:21.520858 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7efa523b-d66e-48fe-ab26-c2ef9ef2f4cf-cni-bin-dir\") pod \"calico-node-gwprl\" (UID: \"7efa523b-d66e-48fe-ab26-c2ef9ef2f4cf\") " pod="calico-system/calico-node-gwprl" May 8 00:40:21.521392 kubelet[2735]: I0508 00:40:21.520872 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7efa523b-d66e-48fe-ab26-c2ef9ef2f4cf-flexvol-driver-host\") pod \"calico-node-gwprl\" (UID: \"7efa523b-d66e-48fe-ab26-c2ef9ef2f4cf\") " 
pod="calico-system/calico-node-gwprl" May 8 00:40:21.521392 kubelet[2735]: I0508 00:40:21.520889 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7efa523b-d66e-48fe-ab26-c2ef9ef2f4cf-cni-net-dir\") pod \"calico-node-gwprl\" (UID: \"7efa523b-d66e-48fe-ab26-c2ef9ef2f4cf\") " pod="calico-system/calico-node-gwprl" May 8 00:40:21.521545 kubelet[2735]: I0508 00:40:21.520915 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlhzp\" (UniqueName: \"kubernetes.io/projected/7efa523b-d66e-48fe-ab26-c2ef9ef2f4cf-kube-api-access-tlhzp\") pod \"calico-node-gwprl\" (UID: \"7efa523b-d66e-48fe-ab26-c2ef9ef2f4cf\") " pod="calico-system/calico-node-gwprl" May 8 00:40:21.521545 kubelet[2735]: I0508 00:40:21.520929 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/08aec900-452a-4e8f-bb3d-572b19b9d1e3-tigera-ca-bundle\") pod \"calico-typha-564fdc9c4f-45wwg\" (UID: \"08aec900-452a-4e8f-bb3d-572b19b9d1e3\") " pod="calico-system/calico-typha-564fdc9c4f-45wwg" May 8 00:40:21.521545 kubelet[2735]: I0508 00:40:21.520944 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7efa523b-d66e-48fe-ab26-c2ef9ef2f4cf-var-lib-calico\") pod \"calico-node-gwprl\" (UID: \"7efa523b-d66e-48fe-ab26-c2ef9ef2f4cf\") " pod="calico-system/calico-node-gwprl" May 8 00:40:21.521545 kubelet[2735]: I0508 00:40:21.520976 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/08aec900-452a-4e8f-bb3d-572b19b9d1e3-typha-certs\") pod \"calico-typha-564fdc9c4f-45wwg\" (UID: \"08aec900-452a-4e8f-bb3d-572b19b9d1e3\") " pod="calico-system/calico-typha-564fdc9c4f-45wwg" May 8 00:40:21.521545 kubelet[2735]: I0508 00:40:21.520990 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7efa523b-d66e-48fe-ab26-c2ef9ef2f4cf-xtables-lock\") pod \"calico-node-gwprl\" (UID: \"7efa523b-d66e-48fe-ab26-c2ef9ef2f4cf\") " pod="calico-system/calico-node-gwprl" May 8 00:40:21.537290 kubelet[2735]: I0508 00:40:21.537232 2735 topology_manager.go:215] "Topology Admit Handler" podUID="401667bd-ccb8-4edb-be5e-e0e65fa30964" podNamespace="calico-system" podName="csi-node-driver-n25d4" May 8 00:40:21.537578 kubelet[2735]: E0508 00:40:21.537547 2735 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n25d4" podUID="401667bd-ccb8-4edb-be5e-e0e65fa30964" May 8 00:40:21.632099 kubelet[2735]: E0508 00:40:21.629349 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:21.632323 kubelet[2735]: W0508 00:40:21.632303 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:21.632377 kubelet[2735]: E0508 00:40:21.632347 2735 plugins.go:730] "Error dynamically probing 
plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:21.639258 kubelet[2735]: E0508 00:40:21.639231 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:21.639393 kubelet[2735]: W0508 00:40:21.639374 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:21.639495 kubelet[2735]: E0508 00:40:21.639476 2735 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:21.639827 kubelet[2735]: E0508 00:40:21.639803 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:21.639827 kubelet[2735]: W0508 00:40:21.639823 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:21.640000 kubelet[2735]: E0508 00:40:21.639837 2735 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:21.641480 kubelet[2735]: E0508 00:40:21.641458 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:21.641480 kubelet[2735]: W0508 00:40:21.641479 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:21.641582 kubelet[2735]: E0508 00:40:21.641500 2735 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:21.642314 kubelet[2735]: E0508 00:40:21.642289 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:21.642428 kubelet[2735]: W0508 00:40:21.642413 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:21.642539 kubelet[2735]: E0508 00:40:21.642521 2735 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:21.643269 kubelet[2735]: E0508 00:40:21.643253 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:21.643357 kubelet[2735]: W0508 00:40:21.643342 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:21.643436 kubelet[2735]: E0508 00:40:21.643422 2735 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:40:21.644036 kubelet[2735]: E0508 00:40:21.643725 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:21.644126 kubelet[2735]: W0508 00:40:21.644110 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:21.644272 kubelet[2735]: E0508 00:40:21.644257 2735 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:21.644591 kubelet[2735]: E0508 00:40:21.644577 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:21.644663 kubelet[2735]: W0508 00:40:21.644649 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:21.644742 kubelet[2735]: E0508 00:40:21.644728 2735 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:21.645915 kubelet[2735]: E0508 00:40:21.645697 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:21.645915 kubelet[2735]: W0508 00:40:21.645714 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:21.645915 kubelet[2735]: E0508 00:40:21.645731 2735 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:21.646635 kubelet[2735]: E0508 00:40:21.646608 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:21.646635 kubelet[2735]: W0508 00:40:21.646623 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:21.646635 kubelet[2735]: E0508 00:40:21.646635 2735 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:21.647267 kubelet[2735]: E0508 00:40:21.646926 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:21.647267 kubelet[2735]: W0508 00:40:21.646940 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:21.647267 kubelet[2735]: E0508 00:40:21.646983 2735 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:40:21.647410 kubelet[2735]: E0508 00:40:21.647344 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:21.647410 kubelet[2735]: W0508 00:40:21.647355 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:21.647410 kubelet[2735]: E0508 00:40:21.647366 2735 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:21.647662 kubelet[2735]: E0508 00:40:21.647643 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:21.647662 kubelet[2735]: W0508 00:40:21.647658 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:21.648053 kubelet[2735]: E0508 00:40:21.647669 2735 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:21.648053 kubelet[2735]: E0508 00:40:21.647941 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:21.648053 kubelet[2735]: W0508 00:40:21.647989 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:21.648053 kubelet[2735]: E0508 00:40:21.648001 2735 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:21.648253 kubelet[2735]: E0508 00:40:21.648217 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:21.648253 kubelet[2735]: W0508 00:40:21.648235 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:21.648253 kubelet[2735]: E0508 00:40:21.648246 2735 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:40:21.648426 kubelet[2735]: E0508 00:40:21.648394 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:21.649141 containerd[1558]: time="2025-05-08T00:40:21.649020424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-564fdc9c4f-45wwg,Uid:08aec900-452a-4e8f-bb3d-572b19b9d1e3,Namespace:calico-system,Attempt:0,}" May 8 00:40:21.649574 kubelet[2735]: E0508 00:40:21.649168 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:21.649574 kubelet[2735]: W0508 00:40:21.649180 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:21.649574 kubelet[2735]: E0508 00:40:21.649192 2735 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:21.649574 kubelet[2735]: E0508 00:40:21.649410 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:21.649574 kubelet[2735]: W0508 00:40:21.649421 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:21.649574 kubelet[2735]: E0508 00:40:21.649430 2735 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:21.649780 kubelet[2735]: E0508 00:40:21.649717 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:21.649780 kubelet[2735]: W0508 00:40:21.649728 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:21.649780 kubelet[2735]: E0508 00:40:21.649753 2735 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:21.650081 kubelet[2735]: E0508 00:40:21.650062 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:21.650081 kubelet[2735]: W0508 00:40:21.650079 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:21.650194 kubelet[2735]: E0508 00:40:21.650091 2735 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:40:21.650487 kubelet[2735]: E0508 00:40:21.650469 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:21.650487 kubelet[2735]: W0508 00:40:21.650484 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:21.650594 kubelet[2735]: E0508 00:40:21.650496 2735 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:21.651303 kubelet[2735]: E0508 00:40:21.650843 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:21.651303 kubelet[2735]: W0508 00:40:21.650865 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:21.651303 kubelet[2735]: E0508 00:40:21.650877 2735 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:21.651597 kubelet[2735]: E0508 00:40:21.651573 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:21.651597 kubelet[2735]: W0508 00:40:21.651593 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:21.651686 kubelet[2735]: E0508 00:40:21.651607 2735 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:21.651943 kubelet[2735]: E0508 00:40:21.651926 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:21.651943 kubelet[2735]: W0508 00:40:21.651940 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:21.651943 kubelet[2735]: E0508 00:40:21.651970 2735 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:21.652246 kubelet[2735]: E0508 00:40:21.652230 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:21.652246 kubelet[2735]: W0508 00:40:21.652245 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:21.652246 kubelet[2735]: E0508 00:40:21.652257 2735 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:40:21.652836 kubelet[2735]: E0508 00:40:21.652520 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:21.652836 kubelet[2735]: W0508 00:40:21.652532 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:21.652836 kubelet[2735]: E0508 00:40:21.652543 2735 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:21.652836 kubelet[2735]: E0508 00:40:21.652792 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:21.652836 kubelet[2735]: W0508 00:40:21.652803 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:21.652836 kubelet[2735]: E0508 00:40:21.652813 2735 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:21.653149 kubelet[2735]: E0508 00:40:21.653080 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:21.653149 kubelet[2735]: W0508 00:40:21.653091 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:21.653149 kubelet[2735]: E0508 00:40:21.653102 2735 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:21.682845 containerd[1558]: time="2025-05-08T00:40:21.682704688Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:40:21.683758 containerd[1558]: time="2025-05-08T00:40:21.683684168Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:40:21.683915 containerd[1558]: time="2025-05-08T00:40:21.683884502Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:21.684279 containerd[1558]: time="2025-05-08T00:40:21.684217451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:21.723504 kubelet[2735]: E0508 00:40:21.723458 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:21.723504 kubelet[2735]: W0508 00:40:21.723489 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:21.723504 kubelet[2735]: E0508 00:40:21.723517 2735 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:40:21.723728 kubelet[2735]: I0508 00:40:21.723554 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/401667bd-ccb8-4edb-be5e-e0e65fa30964-kubelet-dir\") pod \"csi-node-driver-n25d4\" (UID: \"401667bd-ccb8-4edb-be5e-e0e65fa30964\") " pod="calico-system/csi-node-driver-n25d4" May 8 00:40:21.724078 kubelet[2735]: E0508 00:40:21.723791 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:21.724078 kubelet[2735]: W0508 00:40:21.723813 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:21.724078 kubelet[2735]: E0508 00:40:21.723833 2735 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:21.724078 kubelet[2735]: I0508 00:40:21.723864 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/401667bd-ccb8-4edb-be5e-e0e65fa30964-registration-dir\") pod \"csi-node-driver-n25d4\" (UID: \"401667bd-ccb8-4edb-be5e-e0e65fa30964\") " pod="calico-system/csi-node-driver-n25d4" May 8 00:40:21.724543 kubelet[2735]: E0508 00:40:21.724507 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:21.724596 kubelet[2735]: W0508 00:40:21.724539 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:21.724596 kubelet[2735]: E0508 00:40:21.724573 2735 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:21.724911 kubelet[2735]: E0508 00:40:21.724887 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:21.724911 kubelet[2735]: W0508 00:40:21.724905 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:21.725021 kubelet[2735]: E0508 00:40:21.724927 2735 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:21.725375 kubelet[2735]: E0508 00:40:21.725343 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:21.725375 kubelet[2735]: W0508 00:40:21.725364 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:21.725471 kubelet[2735]: E0508 00:40:21.725386 2735 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:40:21.725471 kubelet[2735]: I0508 00:40:21.725415 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt7wv\" (UniqueName: \"kubernetes.io/projected/401667bd-ccb8-4edb-be5e-e0e65fa30964-kube-api-access-dt7wv\") pod \"csi-node-driver-n25d4\" (UID: \"401667bd-ccb8-4edb-be5e-e0e65fa30964\") " pod="calico-system/csi-node-driver-n25d4" May 8 00:40:21.726316 kubelet[2735]: E0508 00:40:21.725709 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:21.726316 kubelet[2735]: W0508 00:40:21.725745 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:21.726316 kubelet[2735]: E0508 00:40:21.725766 2735 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:21.726316 kubelet[2735]: E0508 00:40:21.726145 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:21.726316 kubelet[2735]: W0508 00:40:21.726155 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:21.726316 kubelet[2735]: E0508 00:40:21.726175 2735 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:21.726863 kubelet[2735]: E0508 00:40:21.726432 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:21.726863 kubelet[2735]: W0508 00:40:21.726444 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:21.726863 kubelet[2735]: E0508 00:40:21.726483 2735 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:21.726863 kubelet[2735]: I0508 00:40:21.726516 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/401667bd-ccb8-4edb-be5e-e0e65fa30964-socket-dir\") pod \"csi-node-driver-n25d4\" (UID: \"401667bd-ccb8-4edb-be5e-e0e65fa30964\") " pod="calico-system/csi-node-driver-n25d4" May 8 00:40:21.726863 kubelet[2735]: E0508 00:40:21.726845 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:21.726863 kubelet[2735]: W0508 00:40:21.726871 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:21.727608 kubelet[2735]: E0508 00:40:21.726892 2735 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:40:21.727608 kubelet[2735]: E0508 00:40:21.727162 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:21.727608 kubelet[2735]: W0508 00:40:21.727174 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:21.727608 kubelet[2735]: E0508 00:40:21.727193 2735 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:21.727608 kubelet[2735]: E0508 00:40:21.727423 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:21.727608 kubelet[2735]: W0508 00:40:21.727432 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:21.727608 kubelet[2735]: E0508 00:40:21.727449 2735 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:21.727608 kubelet[2735]: I0508 00:40:21.727469 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/401667bd-ccb8-4edb-be5e-e0e65fa30964-varrun\") pod \"csi-node-driver-n25d4\" (UID: \"401667bd-ccb8-4edb-be5e-e0e65fa30964\") " pod="calico-system/csi-node-driver-n25d4" May 8 00:40:21.727878 kubelet[2735]: E0508 00:40:21.727758 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:21.727878 kubelet[2735]: W0508 00:40:21.727770 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:21.727949 kubelet[2735]: E0508 00:40:21.727879 2735 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:21.728136 kubelet[2735]: E0508 00:40:21.728097 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:21.728136 kubelet[2735]: W0508 00:40:21.728113 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:21.728233 kubelet[2735]: E0508 00:40:21.728137 2735 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:40:21.728409 kubelet[2735]: E0508 00:40:21.728392 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:21.728409 kubelet[2735]: W0508 00:40:21.728407 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:21.728475 kubelet[2735]: E0508 00:40:21.728420 2735 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:21.729080 kubelet[2735]: E0508 00:40:21.728683 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:21.729080 kubelet[2735]: W0508 00:40:21.728698 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:21.729080 kubelet[2735]: E0508 00:40:21.728708 2735 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:21.731913 kubelet[2735]: E0508 00:40:21.731864 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:21.732712 containerd[1558]: time="2025-05-08T00:40:21.732509734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gwprl,Uid:7efa523b-d66e-48fe-ab26-c2ef9ef2f4cf,Namespace:calico-system,Attempt:0,}" May 8 00:40:21.753458 containerd[1558]: time="2025-05-08T00:40:21.753389579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-564fdc9c4f-45wwg,Uid:08aec900-452a-4e8f-bb3d-572b19b9d1e3,Namespace:calico-system,Attempt:0,} returns sandbox id \"33856b3d6185ec95c7a0099cde689fc58a99431e1b9abb7b2d27f6ad5de5fc48\"" May 8 00:40:21.755784 kubelet[2735]: E0508 00:40:21.755240 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:21.756775 containerd[1558]: time="2025-05-08T00:40:21.756735398Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 8 00:40:21.773804 containerd[1558]: time="2025-05-08T00:40:21.773705200Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:40:21.773804 containerd[1558]: time="2025-05-08T00:40:21.773768312Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:40:21.773804 containerd[1558]: time="2025-05-08T00:40:21.773782158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:21.774086 containerd[1558]: time="2025-05-08T00:40:21.773916205Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:21.829031 kubelet[2735]: E0508 00:40:21.828939 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:21.829031 kubelet[2735]: W0508 00:40:21.828985 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:21.829031 kubelet[2735]: E0508 00:40:21.829007 2735 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:21.829456 kubelet[2735]: E0508 00:40:21.829342 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:21.829456 kubelet[2735]: W0508 00:40:21.829354 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:21.829456 kubelet[2735]: E0508 00:40:21.829382 2735 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:21.830278 containerd[1558]: time="2025-05-08T00:40:21.829048643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gwprl,Uid:7efa523b-d66e-48fe-ab26-c2ef9ef2f4cf,Namespace:calico-system,Attempt:0,} returns sandbox id \"7c32cdc7ad3d8bd4315b93bcf86b91de49fbcfdb218b5085242b4af682c29632\"" May 8 00:40:21.830340 kubelet[2735]: E0508 00:40:21.829813 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:21.830340 kubelet[2735]: W0508 00:40:21.829841 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:21.830340 kubelet[2735]: E0508 00:40:21.829888 2735 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:21.830340 kubelet[2735]: E0508 00:40:21.830328 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:21.830340 kubelet[2735]: W0508 00:40:21.830341 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:21.830597 kubelet[2735]: E0508 00:40:21.830356 2735 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 00:40:21.831040 kubelet[2735]: E0508 00:40:21.830828 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:21.831040 kubelet[2735]: W0508 00:40:21.830862 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:21.831040 kubelet[2735]: E0508 00:40:21.830898 2735 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:21.831398 kubelet[2735]: E0508 00:40:21.831291 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:21.831398 kubelet[2735]: W0508 00:40:21.831307 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:21.831398 kubelet[2735]: E0508 00:40:21.831357 2735 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:21.831719 kubelet[2735]: E0508 00:40:21.831630 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:21.831719 kubelet[2735]: W0508 00:40:21.831650 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:21.831719 kubelet[2735]: E0508 00:40:21.831714 2735 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 00:40:21.832386 kubelet[2735]: E0508 00:40:21.832096 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:21.832386 kubelet[2735]: W0508 00:40:21.832112 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:21.832386 kubelet[2735]: E0508 00:40:21.832099 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:21.832386 kubelet[2735]: E0508 00:40:21.832388 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:21.832386 kubelet[2735]: W0508 00:40:21.832398 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:21.833089 kubelet[2735]: E0508 00:40:21.832745 2735 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" [the preceding kubelet triplet (driver-call.go:262 "Failed to unmarshal output for command: init", driver-call.go:149 "FlexVolume: driver call failed: executable file not found in $PATH", plugins.go:730 "Error dynamically probing plugins") repeats verbatim 17 more times between 00:40:21.832 and 00:40:21.840; repetitions omitted] May 8 00:40:21.850292 kubelet[2735]: E0508 00:40:21.850233 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:21.850292 kubelet[2735]: W0508 00:40:21.850266 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:21.850292 kubelet[2735]: E0508 00:40:21.850288 2735 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
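
[Note on the burst above: this is kubelet's FlexVolume probe. On each probe of /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ it executes every vendor~driver binary with the single argument "init" and parses stdout as a JSON status object. Here the nodeagent~uds directory exists but the uds executable inside it does not, so the call yields empty output and the unmarshal in driver-call.go fails with "unexpected end of JSON input". A minimal driver stub that would satisfy the probe looks roughly like this; a sketch against the FlexVolume calling convention, not the real uds driver:

    #!/usr/bin/env python3
    # Sketch of a FlexVolume driver entry point (illustrative, not the
    # real nodeagent~uds driver).  kubelet runs "<driver> init" and
    # parses stdout as JSON, which is why a missing or silent binary
    # produces the "unexpected end of JSON input" errors in this log.
    import json
    import sys

    def main() -> None:
        op = sys.argv[1] if len(sys.argv) > 1 else ""
        if op == "init":
            # Report success and declare that this driver has no
            # separate attach/detach phase.
            print(json.dumps({"status": "Success",
                              "capabilities": {"attach": False}}))
        else:
            # Unimplemented operations must still answer with JSON.
            print(json.dumps({"status": "Not supported"}))

    if __name__ == "__main__":
        main()

Installed as /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds and marked executable, a driver of this shape would quiet the probe errors.]
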
May 8 00:40:23.208746 kubelet[2735]: E0508 00:40:23.208701 2735 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n25d4" podUID="401667bd-ccb8-4edb-be5e-e0e65fa30964" May 8 00:40:24.999276 containerd[1558]: time="2025-05-08T00:40:24.999200212Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:25.004379 containerd[1558]: time="2025-05-08T00:40:25.004287854Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870" May 8 00:40:25.022047 containerd[1558]: time="2025-05-08T00:40:25.021978331Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:25.025174 containerd[1558]: time="2025-05-08T00:40:25.025123094Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:25.025684 containerd[1558]: time="2025-05-08T00:40:25.025639121Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 3.268863175s" May 8 00:40:25.025742 containerd[1558]: time="2025-05-08T00:40:25.025684418Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\"" May 8 00:40:25.034982 containerd[1558]: time="2025-05-08T00:40:25.033012561Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 8 00:40:25.040678 containerd[1558]: time="2025-05-08T00:40:25.040640847Z" level=info msg="CreateContainer within sandbox \"33856b3d6185ec95c7a0099cde689fc58a99431e1b9abb7b2d27f6ad5de5fc48\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 8 00:40:25.068924 containerd[1558]: time="2025-05-08T00:40:25.068837325Z" level=info msg="CreateContainer within sandbox \"33856b3d6185ec95c7a0099cde689fc58a99431e1b9abb7b2d27f6ad5de5fc48\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"dda2b82270630b520c6d1be08bdd3bef1efc4d88dfa96f49ec20ba1d1b2428e8\"" May 8 00:40:25.074672 containerd[1558]: time="2025-05-08T00:40:25.074619271Z" level=info msg="StartContainer for \"dda2b82270630b520c6d1be08bdd3bef1efc4d88dfa96f49ec20ba1d1b2428e8\"" May 8 00:40:25.184315 containerd[1558]: time="2025-05-08T00:40:25.184252121Z" level=info msg="StartContainer for \"dda2b82270630b520c6d1be08bdd3bef1efc4d88dfa96f49ec20ba1d1b2428e8\" returns successfully" May 8 00:40:25.220192 kubelet[2735]: E0508 00:40:25.220134 2735 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n25d4" podUID="401667bd-ccb8-4edb-be5e-e0e65fa30964"
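
[Note on the recurring pod_workers.go:1298 failures for csi-node-driver-n25d4: these are expected at this stage. kubelet keeps NetworkReady=false until a CNI network config appears in its conf dir (conventionally /etc/cni/net.d), and that file is only written once Calico's install-cni step runs, which is what the later calico/cni image pull in this log is for. For orientation, the conflist Calico drops there has roughly this shape; values here are illustrative, not the file this node actually received:

    {
      "name": "k8s-pod-network",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "calico",
          "datastore_type": "kubernetes",
          "ipam": { "type": "calico-ipam" },
          "policy": { "type": "k8s" },
          "kubernetes": { "kubeconfig": "/etc/cni/net.d/calico-kubeconfig" }
        },
        {
          "type": "portmap",
          "snat": true,
          "capabilities": { "portMappings": true }
        }
      ]
    }

Once a valid conflist is present, kubelet flips the node's NetworkReady condition and the pod syncs proceed.]
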
May 8 00:40:25.279018 kubelet[2735]: E0508 00:40:25.278782 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:25.379088 kubelet[2735]: E0508 00:40:25.379046 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 00:40:25.379088 kubelet[2735]: W0508 00:40:25.379074 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 00:40:25.379088 kubelet[2735]: E0508 00:40:25.379092 2735 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" [this triplet repeats verbatim in four bursts, at 00:40:25.379-25.382, 00:40:25.461-25.466, 00:40:26.288-26.291, and 00:40:26.370-26.374; repetitions omitted] May 8 00:40:26.274396 kubelet[2735]: I0508 00:40:26.274362 2735 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 8 00:40:26.383812 kubelet[2735]: E0508 00:40:26.383770 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:27.209413 kubelet[2735]: E0508 00:40:27.209351 2735 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n25d4" podUID="401667bd-ccb8-4edb-be5e-e0e65fa30964" May 8 00:40:28.082277 systemd[1]: Started sshd@7-10.0.0.76:22-10.0.0.1:50846.service - OpenSSH per-connection server daemon (10.0.0.1:50846). May 8 00:40:28.132240 sshd[3423]: Accepted publickey for core from 10.0.0.1 port 50846 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:40:28.134313 sshd[3423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:40:28.139537 systemd-logind[1534]: New session 8 of user core. May 8 00:40:28.144354 systemd[1]: Started session-8.scope - Session 8 of User core. May 8 00:40:28.545680 sshd[3423]: pam_unix(sshd:session): session closed for user core May 8 00:40:28.550697 systemd[1]: sshd@7-10.0.0.76:22-10.0.0.1:50846.service: Deactivated successfully. May 8 00:40:28.553260 systemd[1]: session-8.scope: Deactivated successfully. May 8 00:40:28.553890 systemd-logind[1534]: Session 8 logged out. Waiting for processes to exit. May 8 00:40:28.554827 systemd-logind[1534]: Removed session 8.
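
[Note on the dns.go:153 warnings above: kubelet is trimming the node's resolver list. The resolver stack honors at most three nameserver entries, so with more than three configured kubelet applies only the first three, here 1.1.1.1 1.0.0.1 8.8.8.8. A resolv.conf of the following shape would reproduce the warning; the fourth entry is a placeholder, since the log does not show which server was dropped:

    nameserver 1.1.1.1
    nameserver 1.0.0.1
    nameserver 8.8.8.8
    # illustrative fourth entry; kubelet omits it and logs the warning:
    nameserver 192.0.2.53

Removing the surplus entries from the host's resolv.conf (or the source that generates it) silences the warning.]
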
May 8 00:40:28.742721 containerd[1558]: time="2025-05-08T00:40:28.742650026Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:28.743458 containerd[1558]: time="2025-05-08T00:40:28.743394387Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937" May 8 00:40:28.744544 containerd[1558]: time="2025-05-08T00:40:28.744513211Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:28.747167 containerd[1558]: time="2025-05-08T00:40:28.747106841Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:28.747706 containerd[1558]: time="2025-05-08T00:40:28.747668112Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 3.71460797s" May 8 00:40:28.747706 containerd[1558]: time="2025-05-08T00:40:28.747701726Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" May 8 00:40:28.750245 containerd[1558]: time="2025-05-08T00:40:28.750216525Z" level=info msg="CreateContainer within sandbox \"7c32cdc7ad3d8bd4315b93bcf86b91de49fbcfdb218b5085242b4af682c29632\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 8 00:40:28.765396 containerd[1558]: time="2025-05-08T00:40:28.765348029Z" level=info msg="CreateContainer within sandbox \"7c32cdc7ad3d8bd4315b93bcf86b91de49fbcfdb218b5085242b4af682c29632\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"91859a76acb603078149cecba186082853b401aac84e696f384392c7c0d057f1\"" May 8 00:40:28.765761 containerd[1558]: time="2025-05-08T00:40:28.765726763Z" level=info msg="StartContainer for \"91859a76acb603078149cecba186082853b401aac84e696f384392c7c0d057f1\"" May 8 00:40:28.850355 containerd[1558]: time="2025-05-08T00:40:28.850309788Z" level=info msg="StartContainer for \"91859a76acb603078149cecba186082853b401aac84e696f384392c7c0d057f1\" returns successfully" May 8 00:40:28.891224 containerd[1558]: time="2025-05-08T00:40:28.889310492Z" level=info msg="shim disconnected" id=91859a76acb603078149cecba186082853b401aac84e696f384392c7c0d057f1 namespace=k8s.io May 8 00:40:28.891224 containerd[1558]: time="2025-05-08T00:40:28.891219254Z" level=warning msg="cleaning up after shim disconnected" id=91859a76acb603078149cecba186082853b401aac84e696f384392c7c0d057f1 namespace=k8s.io May 8 00:40:28.891224 containerd[1558]: time="2025-05-08T00:40:28.891232639Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:40:29.209214 kubelet[2735]: E0508 00:40:29.209034 2735 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-n25d4" podUID="401667bd-ccb8-4edb-be5e-e0e65fa30964" May 8 00:40:29.281757 kubelet[2735]: E0508 00:40:29.281719 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:29.282378 containerd[1558]: time="2025-05-08T00:40:29.282343313Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 8 00:40:29.297299 kubelet[2735]: I0508 00:40:29.297137 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-564fdc9c4f-45wwg" podStartSLOduration=5.02677986 podStartE2EDuration="8.297117178s" podCreationTimestamp="2025-05-08 00:40:21 +0000 UTC" firstStartedPulling="2025-05-08 00:40:21.75627234 +0000 UTC m=+21.621954338" lastFinishedPulling="2025-05-08 00:40:25.026609658 +0000 UTC m=+24.892291656" observedRunningTime="2025-05-08 00:40:25.300408656 +0000 UTC m=+25.166090654" watchObservedRunningTime="2025-05-08 00:40:29.297117178 +0000 UTC m=+29.162799176" May 8 00:40:29.761396 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-91859a76acb603078149cecba186082853b401aac84e696f384392c7c0d057f1-rootfs.mount: Deactivated successfully. May 8 00:40:31.209445 kubelet[2735]: E0508 00:40:31.209370 2735 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n25d4" podUID="401667bd-ccb8-4edb-be5e-e0e65fa30964" May 8 00:40:32.827130 containerd[1558]: time="2025-05-08T00:40:32.827069189Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:32.827774 containerd[1558]: time="2025-05-08T00:40:32.827704428Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" May 8 00:40:32.829022 containerd[1558]: time="2025-05-08T00:40:32.828844860Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:32.831687 containerd[1558]: time="2025-05-08T00:40:32.831648338Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:32.832429 containerd[1558]: time="2025-05-08T00:40:32.832384339Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 3.549997645s" May 8 00:40:32.832494 containerd[1558]: time="2025-05-08T00:40:32.832429346Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" May 8 00:40:32.834458 containerd[1558]: time="2025-05-08T00:40:32.834428522Z" level=info msg="CreateContainer within sandbox \"7c32cdc7ad3d8bd4315b93bcf86b91de49fbcfdb218b5085242b4af682c29632\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 8 00:40:32.851503 
containerd[1558]: time="2025-05-08T00:40:32.851435063Z" level=info msg="CreateContainer within sandbox \"7c32cdc7ad3d8bd4315b93bcf86b91de49fbcfdb218b5085242b4af682c29632\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4050e545a9dbe42b26948f2f17161d8ef411d2633f4f60de4dde48055a213c56\"" May 8 00:40:32.852016 containerd[1558]: time="2025-05-08T00:40:32.851945185Z" level=info msg="StartContainer for \"4050e545a9dbe42b26948f2f17161d8ef411d2633f4f60de4dde48055a213c56\"" May 8 00:40:32.916817 containerd[1558]: time="2025-05-08T00:40:32.916776166Z" level=info msg="StartContainer for \"4050e545a9dbe42b26948f2f17161d8ef411d2633f4f60de4dde48055a213c56\" returns successfully" May 8 00:40:33.209180 kubelet[2735]: E0508 00:40:33.209048 2735 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n25d4" podUID="401667bd-ccb8-4edb-be5e-e0e65fa30964" May 8 00:40:33.289134 kubelet[2735]: E0508 00:40:33.289107 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:33.556212 systemd[1]: Started sshd@8-10.0.0.76:22-10.0.0.1:50858.service - OpenSSH per-connection server daemon (10.0.0.1:50858). May 8 00:40:33.596500 sshd[3596]: Accepted publickey for core from 10.0.0.1 port 50858 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:40:33.598633 sshd[3596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:40:33.602576 systemd-logind[1534]: New session 9 of user core. May 8 00:40:33.611611 systemd[1]: Started session-9.scope - Session 9 of User core. May 8 00:40:33.953204 sshd[3596]: pam_unix(sshd:session): session closed for user core May 8 00:40:33.957669 systemd[1]: sshd@8-10.0.0.76:22-10.0.0.1:50858.service: Deactivated successfully. May 8 00:40:33.961444 systemd[1]: session-9.scope: Deactivated successfully. May 8 00:40:33.962649 systemd-logind[1534]: Session 9 logged out. Waiting for processes to exit. May 8 00:40:33.963796 systemd-logind[1534]: Removed session 9. May 8 00:40:34.291070 kubelet[2735]: E0508 00:40:34.290906 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:34.394493 containerd[1558]: time="2025-05-08T00:40:34.394426826Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 8 00:40:34.415897 kubelet[2735]: I0508 00:40:34.415812 2735 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 8 00:40:34.419185 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4050e545a9dbe42b26948f2f17161d8ef411d2633f4f60de4dde48055a213c56-rootfs.mount: Deactivated successfully. 
May 8 00:40:34.423637 containerd[1558]: time="2025-05-08T00:40:34.423507868Z" level=info msg="shim disconnected" id=4050e545a9dbe42b26948f2f17161d8ef411d2633f4f60de4dde48055a213c56 namespace=k8s.io May 8 00:40:34.423797 containerd[1558]: time="2025-05-08T00:40:34.423643446Z" level=warning msg="cleaning up after shim disconnected" id=4050e545a9dbe42b26948f2f17161d8ef411d2633f4f60de4dde48055a213c56 namespace=k8s.io May 8 00:40:34.423797 containerd[1558]: time="2025-05-08T00:40:34.423653765Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:40:34.442083 kubelet[2735]: I0508 00:40:34.441396 2735 topology_manager.go:215] "Topology Admit Handler" podUID="21aa72be-6d70-4094-91d7-d01c91cb809c" podNamespace="kube-system" podName="coredns-7db6d8ff4d-8dhxq" May 8 00:40:34.444150 kubelet[2735]: I0508 00:40:34.443713 2735 topology_manager.go:215] "Topology Admit Handler" podUID="b19fe2b2-8765-481d-bbeb-26dd8f95c7c3" podNamespace="calico-system" podName="calico-kube-controllers-579fd86b5c-djj5f" May 8 00:40:34.446511 kubelet[2735]: I0508 00:40:34.446462 2735 topology_manager.go:215] "Topology Admit Handler" podUID="0382afa2-c5cb-48de-9957-0becdf36fe1b" podNamespace="calico-apiserver" podName="calico-apiserver-56d545bd9b-pw449" May 8 00:40:34.446868 kubelet[2735]: I0508 00:40:34.446837 2735 topology_manager.go:215] "Topology Admit Handler" podUID="19690749-3374-43c7-ba3a-50053c69dd38" podNamespace="kube-system" podName="coredns-7db6d8ff4d-vtqkm" May 8 00:40:34.448235 kubelet[2735]: I0508 00:40:34.448218 2735 topology_manager.go:215] "Topology Admit Handler" podUID="341e9812-ca08-4b8f-b246-09289190e736" podNamespace="calico-apiserver" podName="calico-apiserver-56d545bd9b-bs27k" May 8 00:40:34.526118 kubelet[2735]: I0508 00:40:34.526063 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/21aa72be-6d70-4094-91d7-d01c91cb809c-config-volume\") pod \"coredns-7db6d8ff4d-8dhxq\" (UID: \"21aa72be-6d70-4094-91d7-d01c91cb809c\") " pod="kube-system/coredns-7db6d8ff4d-8dhxq" May 8 00:40:34.526295 kubelet[2735]: I0508 00:40:34.526165 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvl2k\" (UniqueName: \"kubernetes.io/projected/21aa72be-6d70-4094-91d7-d01c91cb809c-kube-api-access-cvl2k\") pod \"coredns-7db6d8ff4d-8dhxq\" (UID: \"21aa72be-6d70-4094-91d7-d01c91cb809c\") " pod="kube-system/coredns-7db6d8ff4d-8dhxq" May 8 00:40:34.526295 kubelet[2735]: I0508 00:40:34.526193 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b19fe2b2-8765-481d-bbeb-26dd8f95c7c3-tigera-ca-bundle\") pod \"calico-kube-controllers-579fd86b5c-djj5f\" (UID: \"b19fe2b2-8765-481d-bbeb-26dd8f95c7c3\") " pod="calico-system/calico-kube-controllers-579fd86b5c-djj5f" May 8 00:40:34.526295 kubelet[2735]: I0508 00:40:34.526210 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4h2r\" (UniqueName: \"kubernetes.io/projected/b19fe2b2-8765-481d-bbeb-26dd8f95c7c3-kube-api-access-h4h2r\") pod \"calico-kube-controllers-579fd86b5c-djj5f\" (UID: \"b19fe2b2-8765-481d-bbeb-26dd8f95c7c3\") " pod="calico-system/calico-kube-controllers-579fd86b5c-djj5f" May 8 00:40:34.626652 kubelet[2735]: I0508 00:40:34.626595 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/341e9812-ca08-4b8f-b246-09289190e736-calico-apiserver-certs\") pod \"calico-apiserver-56d545bd9b-bs27k\" (UID: \"341e9812-ca08-4b8f-b246-09289190e736\") " pod="calico-apiserver/calico-apiserver-56d545bd9b-bs27k" May 8 00:40:34.626652 kubelet[2735]: I0508 00:40:34.626656 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19690749-3374-43c7-ba3a-50053c69dd38-config-volume\") pod \"coredns-7db6d8ff4d-vtqkm\" (UID: \"19690749-3374-43c7-ba3a-50053c69dd38\") " pod="kube-system/coredns-7db6d8ff4d-vtqkm" May 8 00:40:34.626902 kubelet[2735]: I0508 00:40:34.626687 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpjw6\" (UniqueName: \"kubernetes.io/projected/341e9812-ca08-4b8f-b246-09289190e736-kube-api-access-qpjw6\") pod \"calico-apiserver-56d545bd9b-bs27k\" (UID: \"341e9812-ca08-4b8f-b246-09289190e736\") " pod="calico-apiserver/calico-apiserver-56d545bd9b-bs27k" May 8 00:40:34.626902 kubelet[2735]: I0508 00:40:34.626715 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqqn2\" (UniqueName: \"kubernetes.io/projected/0382afa2-c5cb-48de-9957-0becdf36fe1b-kube-api-access-zqqn2\") pod \"calico-apiserver-56d545bd9b-pw449\" (UID: \"0382afa2-c5cb-48de-9957-0becdf36fe1b\") " pod="calico-apiserver/calico-apiserver-56d545bd9b-pw449" May 8 00:40:34.626902 kubelet[2735]: I0508 00:40:34.626809 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjfhv\" (UniqueName: \"kubernetes.io/projected/19690749-3374-43c7-ba3a-50053c69dd38-kube-api-access-vjfhv\") pod \"coredns-7db6d8ff4d-vtqkm\" (UID: \"19690749-3374-43c7-ba3a-50053c69dd38\") " pod="kube-system/coredns-7db6d8ff4d-vtqkm" May 8 00:40:34.626902 kubelet[2735]: I0508 00:40:34.626842 2735 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0382afa2-c5cb-48de-9957-0becdf36fe1b-calico-apiserver-certs\") pod \"calico-apiserver-56d545bd9b-pw449\" (UID: \"0382afa2-c5cb-48de-9957-0becdf36fe1b\") " pod="calico-apiserver/calico-apiserver-56d545bd9b-pw449" May 8 00:40:34.748185 kubelet[2735]: E0508 00:40:34.748136 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:34.748935 containerd[1558]: time="2025-05-08T00:40:34.748890611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8dhxq,Uid:21aa72be-6d70-4094-91d7-d01c91cb809c,Namespace:kube-system,Attempt:0,}" May 8 00:40:34.756522 containerd[1558]: time="2025-05-08T00:40:34.756461944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56d545bd9b-pw449,Uid:0382afa2-c5cb-48de-9957-0becdf36fe1b,Namespace:calico-apiserver,Attempt:0,}" May 8 00:40:34.757042 containerd[1558]: time="2025-05-08T00:40:34.756498674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-579fd86b5c-djj5f,Uid:b19fe2b2-8765-481d-bbeb-26dd8f95c7c3,Namespace:calico-system,Attempt:0,}" May 8 00:40:34.757042 containerd[1558]: time="2025-05-08T00:40:34.756826177Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-56d545bd9b-bs27k,Uid:341e9812-ca08-4b8f-b246-09289190e736,Namespace:calico-apiserver,Attempt:0,}" May 8 00:40:34.759104 kubelet[2735]: E0508 00:40:34.759063 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:34.759397 containerd[1558]: time="2025-05-08T00:40:34.759372411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vtqkm,Uid:19690749-3374-43c7-ba3a-50053c69dd38,Namespace:kube-system,Attempt:0,}" May 8 00:40:34.915082 containerd[1558]: time="2025-05-08T00:40:34.914384103Z" level=error msg="Failed to destroy network for sandbox \"ead98e6b6ab43c87076ce18b207121794da2ee47ef64efc48f045267c25d06dd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:34.916801 containerd[1558]: time="2025-05-08T00:40:34.916754292Z" level=error msg="Failed to destroy network for sandbox \"8ec6badfa96d260bd8ab020b402f67f800ee2c4e3d5b3f17678595d08f947f67\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:34.922252 containerd[1558]: time="2025-05-08T00:40:34.922011784Z" level=error msg="Failed to destroy network for sandbox \"2b19781eef51ae638294fda2c6ac822d8cb804214ca6e4cde4ad06832d97450f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:34.923471 containerd[1558]: time="2025-05-08T00:40:34.923436724Z" level=error msg="encountered an error cleaning up failed sandbox \"ead98e6b6ab43c87076ce18b207121794da2ee47ef64efc48f045267c25d06dd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:34.923523 containerd[1558]: time="2025-05-08T00:40:34.923500345Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vtqkm,Uid:19690749-3374-43c7-ba3a-50053c69dd38,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ead98e6b6ab43c87076ce18b207121794da2ee47ef64efc48f045267c25d06dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:34.923745 containerd[1558]: time="2025-05-08T00:40:34.923447154Z" level=error msg="encountered an error cleaning up failed sandbox \"8ec6badfa96d260bd8ab020b402f67f800ee2c4e3d5b3f17678595d08f947f67\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:34.923745 containerd[1558]: time="2025-05-08T00:40:34.923606216Z" level=error msg="encountered an error cleaning up failed sandbox \"2b19781eef51ae638294fda2c6ac822d8cb804214ca6e4cde4ad06832d97450f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:34.923745 containerd[1558]: time="2025-05-08T00:40:34.923632567Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8dhxq,Uid:21aa72be-6d70-4094-91d7-d01c91cb809c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2b19781eef51ae638294fda2c6ac822d8cb804214ca6e4cde4ad06832d97450f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:34.923745 containerd[1558]: time="2025-05-08T00:40:34.923638959Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56d545bd9b-bs27k,Uid:341e9812-ca08-4b8f-b246-09289190e736,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8ec6badfa96d260bd8ab020b402f67f800ee2c4e3d5b3f17678595d08f947f67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:34.926814 containerd[1558]: time="2025-05-08T00:40:34.926679813Z" level=error msg="Failed to destroy network for sandbox \"488830d2fa408d4ed9617969e45febb93e6ac64e7027f02910c5031ada81264f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:34.927289 containerd[1558]: time="2025-05-08T00:40:34.927246481Z" level=error msg="encountered an error cleaning up failed sandbox \"488830d2fa408d4ed9617969e45febb93e6ac64e7027f02910c5031ada81264f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:34.927289 containerd[1558]: time="2025-05-08T00:40:34.927282780Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56d545bd9b-pw449,Uid:0382afa2-c5cb-48de-9957-0becdf36fe1b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"488830d2fa408d4ed9617969e45febb93e6ac64e7027f02910c5031ada81264f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:34.927713 containerd[1558]: time="2025-05-08T00:40:34.927685016Z" level=error msg="Failed to destroy network for sandbox \"b6921fae4b61d6c2187b5d8e0d354a10e3a999928c9eb266af10e4e8b643cf7e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:34.929079 containerd[1558]: time="2025-05-08T00:40:34.929041095Z" level=error msg="encountered an error cleaning up failed sandbox \"b6921fae4b61d6c2187b5d8e0d354a10e3a999928c9eb266af10e4e8b643cf7e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:34.929159 containerd[1558]: time="2025-05-08T00:40:34.929104777Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-579fd86b5c-djj5f,Uid:b19fe2b2-8765-481d-bbeb-26dd8f95c7c3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b6921fae4b61d6c2187b5d8e0d354a10e3a999928c9eb266af10e4e8b643cf7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:34.932333 kubelet[2735]: E0508 00:40:34.932266 2735 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ead98e6b6ab43c87076ce18b207121794da2ee47ef64efc48f045267c25d06dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:34.932333 kubelet[2735]: E0508 00:40:34.932314 2735 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ec6badfa96d260bd8ab020b402f67f800ee2c4e3d5b3f17678595d08f947f67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:34.932333 kubelet[2735]: E0508 00:40:34.932306 2735 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b19781eef51ae638294fda2c6ac822d8cb804214ca6e4cde4ad06832d97450f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:34.932587 kubelet[2735]: E0508 00:40:34.932364 2735 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ec6badfa96d260bd8ab020b402f67f800ee2c4e3d5b3f17678595d08f947f67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-56d545bd9b-bs27k" May 8 00:40:34.932587 kubelet[2735]: E0508 00:40:34.932355 2735 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ead98e6b6ab43c87076ce18b207121794da2ee47ef64efc48f045267c25d06dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-vtqkm" May 8 00:40:34.932587 kubelet[2735]: E0508 00:40:34.932380 2735 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b19781eef51ae638294fda2c6ac822d8cb804214ca6e4cde4ad06832d97450f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-8dhxq" May 8 00:40:34.932587 kubelet[2735]: E0508 00:40:34.932392 2735 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"ead98e6b6ab43c87076ce18b207121794da2ee47ef64efc48f045267c25d06dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-vtqkm" May 8 00:40:34.932698 kubelet[2735]: E0508 00:40:34.932399 2735 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"488830d2fa408d4ed9617969e45febb93e6ac64e7027f02910c5031ada81264f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:34.932698 kubelet[2735]: E0508 00:40:34.932404 2735 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b19781eef51ae638294fda2c6ac822d8cb804214ca6e4cde4ad06832d97450f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-8dhxq" May 8 00:40:34.932698 kubelet[2735]: E0508 00:40:34.932417 2735 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"488830d2fa408d4ed9617969e45febb93e6ac64e7027f02910c5031ada81264f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-56d545bd9b-pw449" May 8 00:40:34.932698 kubelet[2735]: E0508 00:40:34.932431 2735 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"488830d2fa408d4ed9617969e45febb93e6ac64e7027f02910c5031ada81264f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-56d545bd9b-pw449" May 8 00:40:34.932807 kubelet[2735]: E0508 00:40:34.932437 2735 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-vtqkm_kube-system(19690749-3374-43c7-ba3a-50053c69dd38)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-vtqkm_kube-system(19690749-3374-43c7-ba3a-50053c69dd38)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ead98e6b6ab43c87076ce18b207121794da2ee47ef64efc48f045267c25d06dd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-vtqkm" podUID="19690749-3374-43c7-ba3a-50053c69dd38" May 8 00:40:34.932807 kubelet[2735]: E0508 00:40:34.932449 2735 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-8dhxq_kube-system(21aa72be-6d70-4094-91d7-d01c91cb809c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-8dhxq_kube-system(21aa72be-6d70-4094-91d7-d01c91cb809c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"2b19781eef51ae638294fda2c6ac822d8cb804214ca6e4cde4ad06832d97450f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-8dhxq" podUID="21aa72be-6d70-4094-91d7-d01c91cb809c" May 8 00:40:34.932807 kubelet[2735]: E0508 00:40:34.932385 2735 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ec6badfa96d260bd8ab020b402f67f800ee2c4e3d5b3f17678595d08f947f67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-56d545bd9b-bs27k" May 8 00:40:34.932939 kubelet[2735]: E0508 00:40:34.932475 2735 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-56d545bd9b-pw449_calico-apiserver(0382afa2-c5cb-48de-9957-0becdf36fe1b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-56d545bd9b-pw449_calico-apiserver(0382afa2-c5cb-48de-9957-0becdf36fe1b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"488830d2fa408d4ed9617969e45febb93e6ac64e7027f02910c5031ada81264f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-56d545bd9b-pw449" podUID="0382afa2-c5cb-48de-9957-0becdf36fe1b" May 8 00:40:34.932939 kubelet[2735]: E0508 00:40:34.932495 2735 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-56d545bd9b-bs27k_calico-apiserver(341e9812-ca08-4b8f-b246-09289190e736)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-56d545bd9b-bs27k_calico-apiserver(341e9812-ca08-4b8f-b246-09289190e736)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8ec6badfa96d260bd8ab020b402f67f800ee2c4e3d5b3f17678595d08f947f67\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-56d545bd9b-bs27k" podUID="341e9812-ca08-4b8f-b246-09289190e736" May 8 00:40:34.932939 kubelet[2735]: E0508 00:40:34.932283 2735 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6921fae4b61d6c2187b5d8e0d354a10e3a999928c9eb266af10e4e8b643cf7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:34.933077 kubelet[2735]: E0508 00:40:34.932525 2735 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6921fae4b61d6c2187b5d8e0d354a10e3a999928c9eb266af10e4e8b643cf7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-579fd86b5c-djj5f" May 8 00:40:34.933077 kubelet[2735]: E0508 00:40:34.932538 2735 
kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6921fae4b61d6c2187b5d8e0d354a10e3a999928c9eb266af10e4e8b643cf7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-579fd86b5c-djj5f" May 8 00:40:34.933077 kubelet[2735]: E0508 00:40:34.932561 2735 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-579fd86b5c-djj5f_calico-system(b19fe2b2-8765-481d-bbeb-26dd8f95c7c3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-579fd86b5c-djj5f_calico-system(b19fe2b2-8765-481d-bbeb-26dd8f95c7c3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b6921fae4b61d6c2187b5d8e0d354a10e3a999928c9eb266af10e4e8b643cf7e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-579fd86b5c-djj5f" podUID="b19fe2b2-8765-481d-bbeb-26dd8f95c7c3" May 8 00:40:35.212546 containerd[1558]: time="2025-05-08T00:40:35.212418906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n25d4,Uid:401667bd-ccb8-4edb-be5e-e0e65fa30964,Namespace:calico-system,Attempt:0,}" May 8 00:40:35.294427 kubelet[2735]: E0508 00:40:35.294124 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:35.294946 kubelet[2735]: I0508 00:40:35.294787 2735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ec6badfa96d260bd8ab020b402f67f800ee2c4e3d5b3f17678595d08f947f67" May 8 00:40:35.295998 containerd[1558]: time="2025-05-08T00:40:35.295457107Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 8 00:40:35.297269 kubelet[2735]: I0508 00:40:35.297240 2735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ead98e6b6ab43c87076ce18b207121794da2ee47ef64efc48f045267c25d06dd" May 8 00:40:35.298275 containerd[1558]: time="2025-05-08T00:40:35.298237835Z" level=info msg="StopPodSandbox for \"ead98e6b6ab43c87076ce18b207121794da2ee47ef64efc48f045267c25d06dd\"" May 8 00:40:35.298465 containerd[1558]: time="2025-05-08T00:40:35.298432485Z" level=info msg="Ensure that sandbox ead98e6b6ab43c87076ce18b207121794da2ee47ef64efc48f045267c25d06dd in task-service has been cleanup successfully" May 8 00:40:35.299269 containerd[1558]: time="2025-05-08T00:40:35.299230772Z" level=info msg="StopPodSandbox for \"8ec6badfa96d260bd8ab020b402f67f800ee2c4e3d5b3f17678595d08f947f67\"" May 8 00:40:35.299447 containerd[1558]: time="2025-05-08T00:40:35.299424591Z" level=info msg="Ensure that sandbox 8ec6badfa96d260bd8ab020b402f67f800ee2c4e3d5b3f17678595d08f947f67 in task-service has been cleanup successfully" May 8 00:40:35.300646 kubelet[2735]: I0508 00:40:35.300622 2735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6921fae4b61d6c2187b5d8e0d354a10e3a999928c9eb266af10e4e8b643cf7e" May 8 00:40:35.301634 containerd[1558]: time="2025-05-08T00:40:35.301576152Z" level=info msg="StopPodSandbox for 
\"b6921fae4b61d6c2187b5d8e0d354a10e3a999928c9eb266af10e4e8b643cf7e\"" May 8 00:40:35.302008 containerd[1558]: time="2025-05-08T00:40:35.301736477Z" level=info msg="Ensure that sandbox b6921fae4b61d6c2187b5d8e0d354a10e3a999928c9eb266af10e4e8b643cf7e in task-service has been cleanup successfully" May 8 00:40:35.302759 kubelet[2735]: I0508 00:40:35.302724 2735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="488830d2fa408d4ed9617969e45febb93e6ac64e7027f02910c5031ada81264f" May 8 00:40:35.304318 containerd[1558]: time="2025-05-08T00:40:35.304279022Z" level=info msg="StopPodSandbox for \"488830d2fa408d4ed9617969e45febb93e6ac64e7027f02910c5031ada81264f\"" May 8 00:40:35.304802 containerd[1558]: time="2025-05-08T00:40:35.304778572Z" level=info msg="Ensure that sandbox 488830d2fa408d4ed9617969e45febb93e6ac64e7027f02910c5031ada81264f in task-service has been cleanup successfully" May 8 00:40:35.305069 kubelet[2735]: I0508 00:40:35.305039 2735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b19781eef51ae638294fda2c6ac822d8cb804214ca6e4cde4ad06832d97450f" May 8 00:40:35.305932 containerd[1558]: time="2025-05-08T00:40:35.305909061Z" level=info msg="StopPodSandbox for \"2b19781eef51ae638294fda2c6ac822d8cb804214ca6e4cde4ad06832d97450f\"" May 8 00:40:35.307040 containerd[1558]: time="2025-05-08T00:40:35.306766742Z" level=info msg="Ensure that sandbox 2b19781eef51ae638294fda2c6ac822d8cb804214ca6e4cde4ad06832d97450f in task-service has been cleanup successfully" May 8 00:40:35.349521 containerd[1558]: time="2025-05-08T00:40:35.349473250Z" level=error msg="StopPodSandbox for \"b6921fae4b61d6c2187b5d8e0d354a10e3a999928c9eb266af10e4e8b643cf7e\" failed" error="failed to destroy network for sandbox \"b6921fae4b61d6c2187b5d8e0d354a10e3a999928c9eb266af10e4e8b643cf7e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:35.349806 containerd[1558]: time="2025-05-08T00:40:35.349766659Z" level=error msg="StopPodSandbox for \"8ec6badfa96d260bd8ab020b402f67f800ee2c4e3d5b3f17678595d08f947f67\" failed" error="failed to destroy network for sandbox \"8ec6badfa96d260bd8ab020b402f67f800ee2c4e3d5b3f17678595d08f947f67\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:35.349989 kubelet[2735]: E0508 00:40:35.349927 2735 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8ec6badfa96d260bd8ab020b402f67f800ee2c4e3d5b3f17678595d08f947f67\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8ec6badfa96d260bd8ab020b402f67f800ee2c4e3d5b3f17678595d08f947f67" May 8 00:40:35.350077 kubelet[2735]: E0508 00:40:35.350021 2735 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8ec6badfa96d260bd8ab020b402f67f800ee2c4e3d5b3f17678595d08f947f67"} May 8 00:40:35.350109 kubelet[2735]: E0508 00:40:35.349935 2735 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b6921fae4b61d6c2187b5d8e0d354a10e3a999928c9eb266af10e4e8b643cf7e\": 
plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b6921fae4b61d6c2187b5d8e0d354a10e3a999928c9eb266af10e4e8b643cf7e" May 8 00:40:35.350195 kubelet[2735]: E0508 00:40:35.350162 2735 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b6921fae4b61d6c2187b5d8e0d354a10e3a999928c9eb266af10e4e8b643cf7e"} May 8 00:40:35.350237 kubelet[2735]: E0508 00:40:35.350215 2735 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b19fe2b2-8765-481d-bbeb-26dd8f95c7c3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b6921fae4b61d6c2187b5d8e0d354a10e3a999928c9eb266af10e4e8b643cf7e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:40:35.350295 kubelet[2735]: E0508 00:40:35.350245 2735 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b19fe2b2-8765-481d-bbeb-26dd8f95c7c3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b6921fae4b61d6c2187b5d8e0d354a10e3a999928c9eb266af10e4e8b643cf7e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-579fd86b5c-djj5f" podUID="b19fe2b2-8765-481d-bbeb-26dd8f95c7c3" May 8 00:40:35.350295 kubelet[2735]: E0508 00:40:35.350093 2735 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"341e9812-ca08-4b8f-b246-09289190e736\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8ec6badfa96d260bd8ab020b402f67f800ee2c4e3d5b3f17678595d08f947f67\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:40:35.350379 kubelet[2735]: E0508 00:40:35.350288 2735 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"341e9812-ca08-4b8f-b246-09289190e736\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8ec6badfa96d260bd8ab020b402f67f800ee2c4e3d5b3f17678595d08f947f67\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-56d545bd9b-bs27k" podUID="341e9812-ca08-4b8f-b246-09289190e736" May 8 00:40:35.351724 containerd[1558]: time="2025-05-08T00:40:35.351672432Z" level=error msg="StopPodSandbox for \"488830d2fa408d4ed9617969e45febb93e6ac64e7027f02910c5031ada81264f\" failed" error="failed to destroy network for sandbox \"488830d2fa408d4ed9617969e45febb93e6ac64e7027f02910c5031ada81264f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:35.351925 kubelet[2735]: E0508 00:40:35.351886 2735 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc 
error: code = Unknown desc = failed to destroy network for sandbox \"488830d2fa408d4ed9617969e45febb93e6ac64e7027f02910c5031ada81264f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="488830d2fa408d4ed9617969e45febb93e6ac64e7027f02910c5031ada81264f" May 8 00:40:35.351925 kubelet[2735]: E0508 00:40:35.351920 2735 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"488830d2fa408d4ed9617969e45febb93e6ac64e7027f02910c5031ada81264f"} May 8 00:40:35.352233 kubelet[2735]: E0508 00:40:35.351942 2735 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0382afa2-c5cb-48de-9957-0becdf36fe1b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"488830d2fa408d4ed9617969e45febb93e6ac64e7027f02910c5031ada81264f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:40:35.352233 kubelet[2735]: E0508 00:40:35.351989 2735 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0382afa2-c5cb-48de-9957-0becdf36fe1b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"488830d2fa408d4ed9617969e45febb93e6ac64e7027f02910c5031ada81264f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-56d545bd9b-pw449" podUID="0382afa2-c5cb-48de-9957-0becdf36fe1b" May 8 00:40:35.359003 containerd[1558]: time="2025-05-08T00:40:35.358877904Z" level=error msg="StopPodSandbox for \"2b19781eef51ae638294fda2c6ac822d8cb804214ca6e4cde4ad06832d97450f\" failed" error="failed to destroy network for sandbox \"2b19781eef51ae638294fda2c6ac822d8cb804214ca6e4cde4ad06832d97450f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:35.359124 kubelet[2735]: E0508 00:40:35.359073 2735 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2b19781eef51ae638294fda2c6ac822d8cb804214ca6e4cde4ad06832d97450f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2b19781eef51ae638294fda2c6ac822d8cb804214ca6e4cde4ad06832d97450f" May 8 00:40:35.359190 kubelet[2735]: E0508 00:40:35.359130 2735 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2b19781eef51ae638294fda2c6ac822d8cb804214ca6e4cde4ad06832d97450f"} May 8 00:40:35.359190 kubelet[2735]: E0508 00:40:35.359155 2735 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"21aa72be-6d70-4094-91d7-d01c91cb809c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2b19781eef51ae638294fda2c6ac822d8cb804214ca6e4cde4ad06832d97450f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:40:35.359190 kubelet[2735]: E0508 00:40:35.359179 2735 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"21aa72be-6d70-4094-91d7-d01c91cb809c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2b19781eef51ae638294fda2c6ac822d8cb804214ca6e4cde4ad06832d97450f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-8dhxq" podUID="21aa72be-6d70-4094-91d7-d01c91cb809c" May 8 00:40:35.361670 containerd[1558]: time="2025-05-08T00:40:35.361618525Z" level=error msg="StopPodSandbox for \"ead98e6b6ab43c87076ce18b207121794da2ee47ef64efc48f045267c25d06dd\" failed" error="failed to destroy network for sandbox \"ead98e6b6ab43c87076ce18b207121794da2ee47ef64efc48f045267c25d06dd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:35.361795 kubelet[2735]: E0508 00:40:35.361747 2735 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ead98e6b6ab43c87076ce18b207121794da2ee47ef64efc48f045267c25d06dd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ead98e6b6ab43c87076ce18b207121794da2ee47ef64efc48f045267c25d06dd" May 8 00:40:35.361842 kubelet[2735]: E0508 00:40:35.361799 2735 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ead98e6b6ab43c87076ce18b207121794da2ee47ef64efc48f045267c25d06dd"} May 8 00:40:35.361842 kubelet[2735]: E0508 00:40:35.361820 2735 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"19690749-3374-43c7-ba3a-50053c69dd38\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ead98e6b6ab43c87076ce18b207121794da2ee47ef64efc48f045267c25d06dd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:40:35.361923 kubelet[2735]: E0508 00:40:35.361844 2735 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"19690749-3374-43c7-ba3a-50053c69dd38\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ead98e6b6ab43c87076ce18b207121794da2ee47ef64efc48f045267c25d06dd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-vtqkm" podUID="19690749-3374-43c7-ba3a-50053c69dd38" May 8 00:40:35.530487 containerd[1558]: time="2025-05-08T00:40:35.530353391Z" level=error msg="Failed to destroy network for sandbox \"c6cc7d2016316c97ad165a455ea29274388000545cdc06a25b41efcf7b7b3f29\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" May 8 00:40:35.530907 containerd[1558]: time="2025-05-08T00:40:35.530802796Z" level=error msg="encountered an error cleaning up failed sandbox \"c6cc7d2016316c97ad165a455ea29274388000545cdc06a25b41efcf7b7b3f29\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:35.530907 containerd[1558]: time="2025-05-08T00:40:35.530847631Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n25d4,Uid:401667bd-ccb8-4edb-be5e-e0e65fa30964,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c6cc7d2016316c97ad165a455ea29274388000545cdc06a25b41efcf7b7b3f29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:35.531150 kubelet[2735]: E0508 00:40:35.531111 2735 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6cc7d2016316c97ad165a455ea29274388000545cdc06a25b41efcf7b7b3f29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:35.531196 kubelet[2735]: E0508 00:40:35.531173 2735 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6cc7d2016316c97ad165a455ea29274388000545cdc06a25b41efcf7b7b3f29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n25d4" May 8 00:40:35.531238 kubelet[2735]: E0508 00:40:35.531194 2735 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6cc7d2016316c97ad165a455ea29274388000545cdc06a25b41efcf7b7b3f29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n25d4" May 8 00:40:35.531267 kubelet[2735]: E0508 00:40:35.531237 2735 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-n25d4_calico-system(401667bd-ccb8-4edb-be5e-e0e65fa30964)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-n25d4_calico-system(401667bd-ccb8-4edb-be5e-e0e65fa30964)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c6cc7d2016316c97ad165a455ea29274388000545cdc06a25b41efcf7b7b3f29\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-n25d4" podUID="401667bd-ccb8-4edb-be5e-e0e65fa30964" May 8 00:40:35.533456 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c6cc7d2016316c97ad165a455ea29274388000545cdc06a25b41efcf7b7b3f29-shm.mount: Deactivated successfully. 
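Every sandbox add and delete in this stretch fails on the same stat: the Calico CNI plugin reads /var/lib/calico/nodename, which the calico/node container writes once it is running with /var/lib/calico mounted. calico-node is still pulling its image at this point, so kubelet keeps requeueing the affected pods. A sketch of that gate, using only the path quoted verbatim in the errors:

#!/usr/bin/env python3
# Reproduce the readiness gate behind the errors above: the Calico CNI
# plugin stats /var/lib/calico/nodename before any sandbox add/delete.
# Sketch only; the path is taken verbatim from the log messages.
import pathlib
import sys

NODENAME = pathlib.Path("/var/lib/calico/nodename")

def require_nodename() -> str:
    try:
        return NODENAME.read_text().strip()
    except FileNotFoundError:
        sys.exit(
            f"stat {NODENAME}: no such file or directory: check that the "
            "calico/node container is running and has mounted /var/lib/calico/"
        )

if __name__ == "__main__":
    print(f"node name: {require_nodename()}")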
May 8 00:40:36.307696 kubelet[2735]: I0508 00:40:36.307660 2735 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6cc7d2016316c97ad165a455ea29274388000545cdc06a25b41efcf7b7b3f29" May 8 00:40:36.308297 containerd[1558]: time="2025-05-08T00:40:36.308270024Z" level=info msg="StopPodSandbox for \"c6cc7d2016316c97ad165a455ea29274388000545cdc06a25b41efcf7b7b3f29\"" May 8 00:40:36.308512 containerd[1558]: time="2025-05-08T00:40:36.308494411Z" level=info msg="Ensure that sandbox c6cc7d2016316c97ad165a455ea29274388000545cdc06a25b41efcf7b7b3f29 in task-service has been cleanup successfully" May 8 00:40:36.333805 containerd[1558]: time="2025-05-08T00:40:36.333754926Z" level=error msg="StopPodSandbox for \"c6cc7d2016316c97ad165a455ea29274388000545cdc06a25b41efcf7b7b3f29\" failed" error="failed to destroy network for sandbox \"c6cc7d2016316c97ad165a455ea29274388000545cdc06a25b41efcf7b7b3f29\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 00:40:36.334037 kubelet[2735]: E0508 00:40:36.333977 2735 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c6cc7d2016316c97ad165a455ea29274388000545cdc06a25b41efcf7b7b3f29\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c6cc7d2016316c97ad165a455ea29274388000545cdc06a25b41efcf7b7b3f29" May 8 00:40:36.334103 kubelet[2735]: E0508 00:40:36.334037 2735 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c6cc7d2016316c97ad165a455ea29274388000545cdc06a25b41efcf7b7b3f29"} May 8 00:40:36.334103 kubelet[2735]: E0508 00:40:36.334077 2735 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"401667bd-ccb8-4edb-be5e-e0e65fa30964\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c6cc7d2016316c97ad165a455ea29274388000545cdc06a25b41efcf7b7b3f29\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 00:40:36.334183 kubelet[2735]: E0508 00:40:36.334105 2735 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"401667bd-ccb8-4edb-be5e-e0e65fa30964\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c6cc7d2016316c97ad165a455ea29274388000545cdc06a25b41efcf7b7b3f29\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-n25d4" podUID="401667bd-ccb8-4edb-be5e-e0e65fa30964" May 8 00:40:38.963418 systemd[1]: Started sshd@9-10.0.0.76:22-10.0.0.1:44646.service - OpenSSH per-connection server daemon (10.0.0.1:44646). May 8 00:40:39.011626 sshd[4008]: Accepted publickey for core from 10.0.0.1 port 44646 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:40:39.013729 sshd[4008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:40:39.018651 systemd-logind[1534]: New session 10 of user core. 
May 8 00:40:39.023293 systemd[1]: Started session-10.scope - Session 10 of User core. May 8 00:40:39.144772 sshd[4008]: pam_unix(sshd:session): session closed for user core May 8 00:40:39.149941 systemd[1]: sshd@9-10.0.0.76:22-10.0.0.1:44646.service: Deactivated successfully. May 8 00:40:39.153018 systemd-logind[1534]: Session 10 logged out. Waiting for processes to exit. May 8 00:40:39.153822 systemd[1]: session-10.scope: Deactivated successfully. May 8 00:40:39.155742 systemd-logind[1534]: Removed session 10. May 8 00:40:40.743520 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2646819738.mount: Deactivated successfully. May 8 00:40:42.223994 containerd[1558]: time="2025-05-08T00:40:42.223903927Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:42.229733 containerd[1558]: time="2025-05-08T00:40:42.229672711Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" May 8 00:40:42.233051 containerd[1558]: time="2025-05-08T00:40:42.232996617Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:42.237364 containerd[1558]: time="2025-05-08T00:40:42.237303429Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:42.237896 containerd[1558]: time="2025-05-08T00:40:42.237863762Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 6.94235693s" May 8 00:40:42.237941 containerd[1558]: time="2025-05-08T00:40:42.237898878Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" May 8 00:40:42.250761 containerd[1558]: time="2025-05-08T00:40:42.250712860Z" level=info msg="CreateContainer within sandbox \"7c32cdc7ad3d8bd4315b93bcf86b91de49fbcfdb218b5085242b4af682c29632\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 8 00:40:42.425484 containerd[1558]: time="2025-05-08T00:40:42.425400590Z" level=info msg="CreateContainer within sandbox \"7c32cdc7ad3d8bd4315b93bcf86b91de49fbcfdb218b5085242b4af682c29632\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"034b842faf2e8a1c0f2fa01543af8e8e87ef2a208a881438245982249a37a64c\"" May 8 00:40:42.426110 containerd[1558]: time="2025-05-08T00:40:42.425991981Z" level=info msg="StartContainer for \"034b842faf2e8a1c0f2fa01543af8e8e87ef2a208a881438245982249a37a64c\"" May 8 00:40:42.533101 containerd[1558]: time="2025-05-08T00:40:42.532965699Z" level=info msg="StartContainer for \"034b842faf2e8a1c0f2fa01543af8e8e87ef2a208a881438245982249a37a64c\" returns successfully" May 8 00:40:42.614019 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 8 00:40:42.614163 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
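The Calico image pulls report both compressed bytes read and wall time, so effective transfer rates fall straight out of the entries above (note that "bytes read" is smaller than the unpacked "size" field). A quick check with the logged figures:

# Effective pull rates from the containerd entries above:
# "bytes read" (compressed transfer) / reported pull duration.
pulls = {
    "pod2daemon-flexvol:v3.29.3": (5_366_937, 3.71460797),
    "cni:v3.29.3": (97_793_683, 3.549997645),
    "node:v3.29.3": (144_068_748, 6.94235693),
}
for image, (nbytes, secs) in pulls.items():
    print(f"{image}: {nbytes / secs / 1e6:.1f} MB/s")
# node:v3.29.3 works out to roughly 20.8 MB/s.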
May 8 00:40:43.321994 kubelet[2735]: E0508 00:40:43.321932 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:44.160368 systemd[1]: Started sshd@10-10.0.0.76:22-10.0.0.1:44658.service - OpenSSH per-connection server daemon (10.0.0.1:44658). May 8 00:40:44.200643 sshd[4117]: Accepted publickey for core from 10.0.0.1 port 44658 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:40:44.202727 sshd[4117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:40:44.207938 systemd-logind[1534]: New session 11 of user core. May 8 00:40:44.217326 systemd[1]: Started session-11.scope - Session 11 of User core. May 8 00:40:44.324524 kubelet[2735]: E0508 00:40:44.324480 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:44.345549 kubelet[2735]: I0508 00:40:44.345506 2735 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:40:44.346361 kubelet[2735]: E0508 00:40:44.346334 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:44.353110 systemd[1]: run-containerd-runc-k8s.io-034b842faf2e8a1c0f2fa01543af8e8e87ef2a208a881438245982249a37a64c-runc.OtwlZs.mount: Deactivated successfully. May 8 00:40:44.380067 kubelet[2735]: I0508 00:40:44.378506 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-gwprl" podStartSLOduration=2.973420236 podStartE2EDuration="23.378479499s" podCreationTimestamp="2025-05-08 00:40:21 +0000 UTC" firstStartedPulling="2025-05-08 00:40:21.836024138 +0000 UTC m=+21.701706136" lastFinishedPulling="2025-05-08 00:40:42.241083401 +0000 UTC m=+42.106765399" observedRunningTime="2025-05-08 00:40:43.503177233 +0000 UTC m=+43.368859251" watchObservedRunningTime="2025-05-08 00:40:44.378479499 +0000 UTC m=+44.244161497" May 8 00:40:44.402717 sshd[4117]: pam_unix(sshd:session): session closed for user core May 8 00:40:44.406805 systemd[1]: sshd@10-10.0.0.76:22-10.0.0.1:44658.service: Deactivated successfully. May 8 00:40:44.412043 systemd[1]: session-11.scope: Deactivated successfully. May 8 00:40:44.413947 systemd-logind[1534]: Session 11 logged out. Waiting for processes to exit. May 8 00:40:44.415231 systemd-logind[1534]: Removed session 11. 
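The pod_startup_latency_tracker entry above decomposes cleanly: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and the logged numbers are consistent with podStartSLOduration being that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling), since pull time does not count against the startup SLO. Recomputing from the timestamps in the line:

    package main

    import (
    	"fmt"
    	"time"
    )

    // mustParse reads timestamps in the format kubelet logs them.
    func mustParse(s string) time.Time {
    	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
    	if err != nil {
    		panic(err)
    	}
    	return t
    }

    func main() {
    	created := mustParse("2025-05-08 00:40:21 +0000 UTC")
    	firstPull := mustParse("2025-05-08 00:40:21.836024138 +0000 UTC")
    	lastPull := mustParse("2025-05-08 00:40:42.241083401 +0000 UTC")
    	running := mustParse("2025-05-08 00:40:44.378479499 +0000 UTC")

    	e2e := running.Sub(created)          // 23.378479499s
    	slo := e2e - lastPull.Sub(firstPull) // 2.973420236s
    	fmt.Println("E2E:", e2e, "SLO:", slo)
    }

Both results match the logged podStartE2EDuration and podStartSLOduration to the nanosecond.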
May 8 00:40:44.737990 kernel: bpftool[4288]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 8 00:40:44.964611 systemd-networkd[1243]: vxlan.calico: Link UP May 8 00:40:44.964621 systemd-networkd[1243]: vxlan.calico: Gained carrier May 8 00:40:45.326282 kubelet[2735]: E0508 00:40:45.326242 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:46.210279 containerd[1558]: time="2025-05-08T00:40:46.209458247Z" level=info msg="StopPodSandbox for \"2b19781eef51ae638294fda2c6ac822d8cb804214ca6e4cde4ad06832d97450f\"" May 8 00:40:46.360715 containerd[1558]: 2025-05-08 00:40:46.289 [INFO][4380] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2b19781eef51ae638294fda2c6ac822d8cb804214ca6e4cde4ad06832d97450f" May 8 00:40:46.360715 containerd[1558]: 2025-05-08 00:40:46.291 [INFO][4380] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2b19781eef51ae638294fda2c6ac822d8cb804214ca6e4cde4ad06832d97450f" iface="eth0" netns="/var/run/netns/cni-9dd72cab-4435-40e8-d911-7141f87c8b8e" May 8 00:40:46.360715 containerd[1558]: 2025-05-08 00:40:46.291 [INFO][4380] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2b19781eef51ae638294fda2c6ac822d8cb804214ca6e4cde4ad06832d97450f" iface="eth0" netns="/var/run/netns/cni-9dd72cab-4435-40e8-d911-7141f87c8b8e" May 8 00:40:46.360715 containerd[1558]: 2025-05-08 00:40:46.292 [INFO][4380] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2b19781eef51ae638294fda2c6ac822d8cb804214ca6e4cde4ad06832d97450f" iface="eth0" netns="/var/run/netns/cni-9dd72cab-4435-40e8-d911-7141f87c8b8e" May 8 00:40:46.360715 containerd[1558]: 2025-05-08 00:40:46.292 [INFO][4380] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2b19781eef51ae638294fda2c6ac822d8cb804214ca6e4cde4ad06832d97450f" May 8 00:40:46.360715 containerd[1558]: 2025-05-08 00:40:46.292 [INFO][4380] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2b19781eef51ae638294fda2c6ac822d8cb804214ca6e4cde4ad06832d97450f" May 8 00:40:46.360715 containerd[1558]: 2025-05-08 00:40:46.346 [INFO][4390] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2b19781eef51ae638294fda2c6ac822d8cb804214ca6e4cde4ad06832d97450f" HandleID="k8s-pod-network.2b19781eef51ae638294fda2c6ac822d8cb804214ca6e4cde4ad06832d97450f" Workload="localhost-k8s-coredns--7db6d8ff4d--8dhxq-eth0" May 8 00:40:46.360715 containerd[1558]: 2025-05-08 00:40:46.346 [INFO][4390] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:40:46.360715 containerd[1558]: 2025-05-08 00:40:46.347 [INFO][4390] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:40:46.360715 containerd[1558]: 2025-05-08 00:40:46.353 [WARNING][4390] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2b19781eef51ae638294fda2c6ac822d8cb804214ca6e4cde4ad06832d97450f" HandleID="k8s-pod-network.2b19781eef51ae638294fda2c6ac822d8cb804214ca6e4cde4ad06832d97450f" Workload="localhost-k8s-coredns--7db6d8ff4d--8dhxq-eth0" May 8 00:40:46.360715 containerd[1558]: 2025-05-08 00:40:46.353 [INFO][4390] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2b19781eef51ae638294fda2c6ac822d8cb804214ca6e4cde4ad06832d97450f" HandleID="k8s-pod-network.2b19781eef51ae638294fda2c6ac822d8cb804214ca6e4cde4ad06832d97450f" Workload="localhost-k8s-coredns--7db6d8ff4d--8dhxq-eth0" May 8 00:40:46.360715 containerd[1558]: 2025-05-08 00:40:46.355 [INFO][4390] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:40:46.360715 containerd[1558]: 2025-05-08 00:40:46.357 [INFO][4380] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2b19781eef51ae638294fda2c6ac822d8cb804214ca6e4cde4ad06832d97450f" May 8 00:40:46.361192 containerd[1558]: time="2025-05-08T00:40:46.360889183Z" level=info msg="TearDown network for sandbox \"2b19781eef51ae638294fda2c6ac822d8cb804214ca6e4cde4ad06832d97450f\" successfully" May 8 00:40:46.361192 containerd[1558]: time="2025-05-08T00:40:46.360913650Z" level=info msg="StopPodSandbox for \"2b19781eef51ae638294fda2c6ac822d8cb804214ca6e4cde4ad06832d97450f\" returns successfully" May 8 00:40:46.361277 kubelet[2735]: E0508 00:40:46.361244 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:46.362347 containerd[1558]: time="2025-05-08T00:40:46.362323833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8dhxq,Uid:21aa72be-6d70-4094-91d7-d01c91cb809c,Namespace:kube-system,Attempt:1,}" May 8 00:40:46.364146 systemd[1]: run-netns-cni\x2d9dd72cab\x2d4435\x2d40e8\x2dd911\x2d7141f87c8b8e.mount: Deactivated successfully. 
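The recurring dns.go:153 error is kubelet noting that the node's resolv.conf lists more nameservers than it will pass through to pods; resolvers in the glibc tradition honor at most three, so kubelet keeps the first three and logs the applied line, here 1.1.1.1 1.0.0.1 8.8.8.8. An illustrative trim under that three-entry assumption, not kubelet's actual code:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // maxNameservers mirrors the classic resolv.conf MAXNS limit of 3.
    const maxNameservers = 3

    // applyNameserverLimit keeps the first entries, reporting whether
    // anything was dropped, matching the shape of the logged warning.
    func applyNameserverLimit(ns []string) ([]string, bool) {
    	if len(ns) <= maxNameservers {
    		return ns, false
    	}
    	return ns[:maxNameservers], true
    }

    func main() {
    	configured := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"}
    	applied, truncated := applyNameserverLimit(configured)
    	if truncated {
    		fmt.Printf("Nameserver limits exceeded, the applied nameserver line is: %s\n",
    			strings.Join(applied, " "))
    	}
    }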
May 8 00:40:46.873212 systemd-networkd[1243]: vxlan.calico: Gained IPv6LL May 8 00:40:46.918222 systemd-networkd[1243]: cali3211da4053c: Link UP May 8 00:40:46.918466 systemd-networkd[1243]: cali3211da4053c: Gained carrier May 8 00:40:46.934760 containerd[1558]: 2025-05-08 00:40:46.838 [INFO][4399] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--8dhxq-eth0 coredns-7db6d8ff4d- kube-system 21aa72be-6d70-4094-91d7-d01c91cb809c 857 0 2025-05-08 00:40:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-8dhxq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3211da4053c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="e4059db4aecbcbbea6d2d97e0c3c3c041d5e67063e09a5fc5d87bb3812e3e10f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8dhxq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8dhxq-" May 8 00:40:46.934760 containerd[1558]: 2025-05-08 00:40:46.839 [INFO][4399] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e4059db4aecbcbbea6d2d97e0c3c3c041d5e67063e09a5fc5d87bb3812e3e10f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8dhxq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8dhxq-eth0" May 8 00:40:46.934760 containerd[1558]: 2025-05-08 00:40:46.871 [INFO][4414] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e4059db4aecbcbbea6d2d97e0c3c3c041d5e67063e09a5fc5d87bb3812e3e10f" HandleID="k8s-pod-network.e4059db4aecbcbbea6d2d97e0c3c3c041d5e67063e09a5fc5d87bb3812e3e10f" Workload="localhost-k8s-coredns--7db6d8ff4d--8dhxq-eth0" May 8 00:40:46.934760 containerd[1558]: 2025-05-08 00:40:46.882 [INFO][4414] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e4059db4aecbcbbea6d2d97e0c3c3c041d5e67063e09a5fc5d87bb3812e3e10f" HandleID="k8s-pod-network.e4059db4aecbcbbea6d2d97e0c3c3c041d5e67063e09a5fc5d87bb3812e3e10f" Workload="localhost-k8s-coredns--7db6d8ff4d--8dhxq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027c2a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-8dhxq", "timestamp":"2025-05-08 00:40:46.871189409 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:40:46.934760 containerd[1558]: 2025-05-08 00:40:46.882 [INFO][4414] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:40:46.934760 containerd[1558]: 2025-05-08 00:40:46.882 [INFO][4414] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
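The ipam_plugin.go request dump above is easier to read as a structure than as a string: one IPv4 and no IPv6 addresses are requested under a handle tied to the sandbox ID, with namespace, node, and pod carried as attributes. A reconstruction of that literal for reading the log, with field names copied from the dump and types simplified where the log does not show them:

    package main

    import "fmt"

    // autoAssignArgs mirrors the fields visible in the logged
    // ipam.AutoAssignArgs value; a reading aid, not Calico's real type.
    type autoAssignArgs struct {
    	Num4     int
    	Num6     int
    	HandleID string
    	Attrs    map[string]string
    	Hostname string
    }

    func main() {
    	req := autoAssignArgs{
    		Num4:     1,
    		Num6:     0,
    		HandleID: "k8s-pod-network.e4059db4aecbcbbea6d2d97e0c3c3c041d5e67063e09a5fc5d87bb3812e3e10f",
    		Attrs: map[string]string{
    			"namespace": "kube-system",
    			"node":      "localhost",
    			"pod":       "coredns-7db6d8ff4d-8dhxq",
    		},
    		Hostname: "localhost",
    	}
    	fmt.Printf("requesting %d IPv4 address(es) for %s\n", req.Num4, req.Attrs["pod"])
    }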
May 8 00:40:46.934760 containerd[1558]: 2025-05-08 00:40:46.882 [INFO][4414] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 8 00:40:46.934760 containerd[1558]: 2025-05-08 00:40:46.885 [INFO][4414] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e4059db4aecbcbbea6d2d97e0c3c3c041d5e67063e09a5fc5d87bb3812e3e10f" host="localhost" May 8 00:40:46.934760 containerd[1558]: 2025-05-08 00:40:46.890 [INFO][4414] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 8 00:40:46.934760 containerd[1558]: 2025-05-08 00:40:46.894 [INFO][4414] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 8 00:40:46.934760 containerd[1558]: 2025-05-08 00:40:46.897 [INFO][4414] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 8 00:40:46.934760 containerd[1558]: 2025-05-08 00:40:46.899 [INFO][4414] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 8 00:40:46.934760 containerd[1558]: 2025-05-08 00:40:46.899 [INFO][4414] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e4059db4aecbcbbea6d2d97e0c3c3c041d5e67063e09a5fc5d87bb3812e3e10f" host="localhost" May 8 00:40:46.934760 containerd[1558]: 2025-05-08 00:40:46.900 [INFO][4414] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e4059db4aecbcbbea6d2d97e0c3c3c041d5e67063e09a5fc5d87bb3812e3e10f May 8 00:40:46.934760 containerd[1558]: 2025-05-08 00:40:46.905 [INFO][4414] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e4059db4aecbcbbea6d2d97e0c3c3c041d5e67063e09a5fc5d87bb3812e3e10f" host="localhost" May 8 00:40:46.934760 containerd[1558]: 2025-05-08 00:40:46.911 [INFO][4414] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.e4059db4aecbcbbea6d2d97e0c3c3c041d5e67063e09a5fc5d87bb3812e3e10f" host="localhost" May 8 00:40:46.934760 containerd[1558]: 2025-05-08 00:40:46.912 [INFO][4414] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.e4059db4aecbcbbea6d2d97e0c3c3c041d5e67063e09a5fc5d87bb3812e3e10f" host="localhost" May 8 00:40:46.934760 containerd[1558]: 2025-05-08 00:40:46.912 [INFO][4414] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
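The assignment trace above always runs the same sequence: take the host-wide IPAM lock, find this host's block affinity (192.168.88.128/26), load the block, claim the first free address, and write the block back before releasing the lock. A toy bitmap allocator over the same /26 reproduces the observable behaviour, the sequential .129, .130, ... handed out in this log, with the caveat that the real allocator persists blocks in the datastore and handles contention between hosts:

    package main

    import (
    	"fmt"
    	"net/netip"
    )

    // block is a toy model of a Calico IPAM block: a /26 plus a set of
    // claimed addresses.
    type block struct {
    	cidr netip.Prefix
    	used map[netip.Addr]bool
    }

    // assign claims the first free address in the block, as the
    // "Attempting to assign 1 addresses from block" step does.
    func (b *block) assign() (netip.Addr, bool) {
    	for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
    		if a == b.cidr.Addr() { // skip the network address itself
    			continue
    		}
    		if !b.used[a] {
    			b.used[a] = true
    			return a, true
    		}
    	}
    	return netip.Addr{}, false
    }

    func main() {
    	b := &block{
    		cidr: netip.MustParsePrefix("192.168.88.128/26"),
    		used: map[netip.Addr]bool{},
    	}
    	for i := 0; i < 4; i++ {
    		a, _ := b.assign()
    		fmt.Println(a) // .129, .130, .131, .132, as in this log
    	}
    }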
May 8 00:40:46.934760 containerd[1558]: 2025-05-08 00:40:46.912 [INFO][4414] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="e4059db4aecbcbbea6d2d97e0c3c3c041d5e67063e09a5fc5d87bb3812e3e10f" HandleID="k8s-pod-network.e4059db4aecbcbbea6d2d97e0c3c3c041d5e67063e09a5fc5d87bb3812e3e10f" Workload="localhost-k8s-coredns--7db6d8ff4d--8dhxq-eth0" May 8 00:40:46.935493 containerd[1558]: 2025-05-08 00:40:46.915 [INFO][4399] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e4059db4aecbcbbea6d2d97e0c3c3c041d5e67063e09a5fc5d87bb3812e3e10f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8dhxq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8dhxq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--8dhxq-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"21aa72be-6d70-4094-91d7-d01c91cb809c", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 40, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-8dhxq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3211da4053c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:40:46.935493 containerd[1558]: 2025-05-08 00:40:46.915 [INFO][4399] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="e4059db4aecbcbbea6d2d97e0c3c3c041d5e67063e09a5fc5d87bb3812e3e10f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8dhxq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8dhxq-eth0" May 8 00:40:46.935493 containerd[1558]: 2025-05-08 00:40:46.915 [INFO][4399] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3211da4053c ContainerID="e4059db4aecbcbbea6d2d97e0c3c3c041d5e67063e09a5fc5d87bb3812e3e10f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8dhxq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8dhxq-eth0" May 8 00:40:46.935493 containerd[1558]: 2025-05-08 00:40:46.918 [INFO][4399] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e4059db4aecbcbbea6d2d97e0c3c3c041d5e67063e09a5fc5d87bb3812e3e10f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8dhxq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8dhxq-eth0" May 8 00:40:46.935493 containerd[1558]: 2025-05-08 00:40:46.919 [INFO][4399] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e4059db4aecbcbbea6d2d97e0c3c3c041d5e67063e09a5fc5d87bb3812e3e10f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8dhxq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8dhxq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--8dhxq-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"21aa72be-6d70-4094-91d7-d01c91cb809c", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 40, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e4059db4aecbcbbea6d2d97e0c3c3c041d5e67063e09a5fc5d87bb3812e3e10f", Pod:"coredns-7db6d8ff4d-8dhxq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3211da4053c", MAC:"fe:3f:27:30:95:6a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:40:46.935493 containerd[1558]: 2025-05-08 00:40:46.930 [INFO][4399] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e4059db4aecbcbbea6d2d97e0c3c3c041d5e67063e09a5fc5d87bb3812e3e10f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8dhxq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--8dhxq-eth0" May 8 00:40:46.970096 containerd[1558]: time="2025-05-08T00:40:46.969860106Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:40:46.970096 containerd[1558]: time="2025-05-08T00:40:46.969917043Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:40:46.970096 containerd[1558]: time="2025-05-08T00:40:46.969996464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:46.970325 containerd[1558]: time="2025-05-08T00:40:46.970086554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:47.001401 systemd-resolved[1461]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:40:47.029024 containerd[1558]: time="2025-05-08T00:40:47.028941548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8dhxq,Uid:21aa72be-6d70-4094-91d7-d01c91cb809c,Namespace:kube-system,Attempt:1,} returns sandbox id \"e4059db4aecbcbbea6d2d97e0c3c3c041d5e67063e09a5fc5d87bb3812e3e10f\"" May 8 00:40:47.029778 kubelet[2735]: E0508 00:40:47.029747 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:47.031837 containerd[1558]: time="2025-05-08T00:40:47.031798933Z" level=info msg="CreateContainer within sandbox \"e4059db4aecbcbbea6d2d97e0c3c3c041d5e67063e09a5fc5d87bb3812e3e10f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 00:40:47.091854 containerd[1558]: time="2025-05-08T00:40:47.091787314Z" level=info msg="CreateContainer within sandbox \"e4059db4aecbcbbea6d2d97e0c3c3c041d5e67063e09a5fc5d87bb3812e3e10f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f9dd9af5b8da3be2870049be98ff317f73ddc1605038cb890f5c5beaf11d458f\"" May 8 00:40:47.092543 containerd[1558]: time="2025-05-08T00:40:47.092478072Z" level=info msg="StartContainer for \"f9dd9af5b8da3be2870049be98ff317f73ddc1605038cb890f5c5beaf11d458f\"" May 8 00:40:47.175407 containerd[1558]: time="2025-05-08T00:40:47.175253639Z" level=info msg="StartContainer for \"f9dd9af5b8da3be2870049be98ff317f73ddc1605038cb890f5c5beaf11d458f\" returns successfully" May 8 00:40:47.351133 kubelet[2735]: E0508 00:40:47.351098 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:47.960116 systemd-networkd[1243]: cali3211da4053c: Gained IPv6LL May 8 00:40:48.209832 containerd[1558]: time="2025-05-08T00:40:48.209448676Z" level=info msg="StopPodSandbox for \"8ec6badfa96d260bd8ab020b402f67f800ee2c4e3d5b3f17678595d08f947f67\"" May 8 00:40:48.209832 containerd[1558]: time="2025-05-08T00:40:48.209488382Z" level=info msg="StopPodSandbox for \"c6cc7d2016316c97ad165a455ea29274388000545cdc06a25b41efcf7b7b3f29\"" May 8 00:40:48.210409 containerd[1558]: time="2025-05-08T00:40:48.209453526Z" level=info msg="StopPodSandbox for \"488830d2fa408d4ed9617969e45febb93e6ac64e7027f02910c5031ada81264f\"" May 8 00:40:48.279274 kubelet[2735]: I0508 00:40:48.279083 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-8dhxq" podStartSLOduration=35.279062545 podStartE2EDuration="35.279062545s" podCreationTimestamp="2025-05-08 00:40:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:40:47.388297721 +0000 UTC m=+47.253979739" watchObservedRunningTime="2025-05-08 00:40:48.279062545 +0000 UTC m=+48.144744554" May 8 00:40:48.316808 containerd[1558]: 2025-05-08 00:40:48.275 [INFO][4565] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8ec6badfa96d260bd8ab020b402f67f800ee2c4e3d5b3f17678595d08f947f67" May 8 00:40:48.316808 containerd[1558]: 2025-05-08 00:40:48.276 [INFO][4565] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="8ec6badfa96d260bd8ab020b402f67f800ee2c4e3d5b3f17678595d08f947f67" iface="eth0" netns="/var/run/netns/cni-c5b36f00-9e8a-f602-812c-1423fad25f7c" May 8 00:40:48.316808 containerd[1558]: 2025-05-08 00:40:48.276 [INFO][4565] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8ec6badfa96d260bd8ab020b402f67f800ee2c4e3d5b3f17678595d08f947f67" iface="eth0" netns="/var/run/netns/cni-c5b36f00-9e8a-f602-812c-1423fad25f7c" May 8 00:40:48.316808 containerd[1558]: 2025-05-08 00:40:48.277 [INFO][4565] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8ec6badfa96d260bd8ab020b402f67f800ee2c4e3d5b3f17678595d08f947f67" iface="eth0" netns="/var/run/netns/cni-c5b36f00-9e8a-f602-812c-1423fad25f7c" May 8 00:40:48.316808 containerd[1558]: 2025-05-08 00:40:48.277 [INFO][4565] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8ec6badfa96d260bd8ab020b402f67f800ee2c4e3d5b3f17678595d08f947f67" May 8 00:40:48.316808 containerd[1558]: 2025-05-08 00:40:48.277 [INFO][4565] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8ec6badfa96d260bd8ab020b402f67f800ee2c4e3d5b3f17678595d08f947f67" May 8 00:40:48.316808 containerd[1558]: 2025-05-08 00:40:48.302 [INFO][4589] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8ec6badfa96d260bd8ab020b402f67f800ee2c4e3d5b3f17678595d08f947f67" HandleID="k8s-pod-network.8ec6badfa96d260bd8ab020b402f67f800ee2c4e3d5b3f17678595d08f947f67" Workload="localhost-k8s-calico--apiserver--56d545bd9b--bs27k-eth0" May 8 00:40:48.316808 containerd[1558]: 2025-05-08 00:40:48.303 [INFO][4589] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:40:48.316808 containerd[1558]: 2025-05-08 00:40:48.303 [INFO][4589] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:40:48.316808 containerd[1558]: 2025-05-08 00:40:48.308 [WARNING][4589] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8ec6badfa96d260bd8ab020b402f67f800ee2c4e3d5b3f17678595d08f947f67" HandleID="k8s-pod-network.8ec6badfa96d260bd8ab020b402f67f800ee2c4e3d5b3f17678595d08f947f67" Workload="localhost-k8s-calico--apiserver--56d545bd9b--bs27k-eth0" May 8 00:40:48.316808 containerd[1558]: 2025-05-08 00:40:48.308 [INFO][4589] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8ec6badfa96d260bd8ab020b402f67f800ee2c4e3d5b3f17678595d08f947f67" HandleID="k8s-pod-network.8ec6badfa96d260bd8ab020b402f67f800ee2c4e3d5b3f17678595d08f947f67" Workload="localhost-k8s-calico--apiserver--56d545bd9b--bs27k-eth0" May 8 00:40:48.316808 containerd[1558]: 2025-05-08 00:40:48.309 [INFO][4589] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:40:48.316808 containerd[1558]: 2025-05-08 00:40:48.312 [INFO][4565] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="8ec6badfa96d260bd8ab020b402f67f800ee2c4e3d5b3f17678595d08f947f67" May 8 00:40:48.317480 containerd[1558]: time="2025-05-08T00:40:48.317091065Z" level=info msg="TearDown network for sandbox \"8ec6badfa96d260bd8ab020b402f67f800ee2c4e3d5b3f17678595d08f947f67\" successfully" May 8 00:40:48.317480 containerd[1558]: time="2025-05-08T00:40:48.317117755Z" level=info msg="StopPodSandbox for \"8ec6badfa96d260bd8ab020b402f67f800ee2c4e3d5b3f17678595d08f947f67\" returns successfully" May 8 00:40:48.318509 containerd[1558]: time="2025-05-08T00:40:48.318128089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56d545bd9b-bs27k,Uid:341e9812-ca08-4b8f-b246-09289190e736,Namespace:calico-apiserver,Attempt:1,}" May 8 00:40:48.319892 systemd[1]: run-netns-cni\x2dc5b36f00\x2d9e8a\x2df602\x2d812c\x2d1423fad25f7c.mount: Deactivated successfully. May 8 00:40:48.345563 containerd[1558]: 2025-05-08 00:40:48.277 [INFO][4566] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="488830d2fa408d4ed9617969e45febb93e6ac64e7027f02910c5031ada81264f" May 8 00:40:48.345563 containerd[1558]: 2025-05-08 00:40:48.278 [INFO][4566] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="488830d2fa408d4ed9617969e45febb93e6ac64e7027f02910c5031ada81264f" iface="eth0" netns="/var/run/netns/cni-292f8a9c-df9d-85b5-c561-beff2ecbc26f" May 8 00:40:48.345563 containerd[1558]: 2025-05-08 00:40:48.278 [INFO][4566] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="488830d2fa408d4ed9617969e45febb93e6ac64e7027f02910c5031ada81264f" iface="eth0" netns="/var/run/netns/cni-292f8a9c-df9d-85b5-c561-beff2ecbc26f" May 8 00:40:48.345563 containerd[1558]: 2025-05-08 00:40:48.278 [INFO][4566] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="488830d2fa408d4ed9617969e45febb93e6ac64e7027f02910c5031ada81264f" iface="eth0" netns="/var/run/netns/cni-292f8a9c-df9d-85b5-c561-beff2ecbc26f" May 8 00:40:48.345563 containerd[1558]: 2025-05-08 00:40:48.278 [INFO][4566] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="488830d2fa408d4ed9617969e45febb93e6ac64e7027f02910c5031ada81264f" May 8 00:40:48.345563 containerd[1558]: 2025-05-08 00:40:48.278 [INFO][4566] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="488830d2fa408d4ed9617969e45febb93e6ac64e7027f02910c5031ada81264f" May 8 00:40:48.345563 containerd[1558]: 2025-05-08 00:40:48.306 [INFO][4591] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="488830d2fa408d4ed9617969e45febb93e6ac64e7027f02910c5031ada81264f" HandleID="k8s-pod-network.488830d2fa408d4ed9617969e45febb93e6ac64e7027f02910c5031ada81264f" Workload="localhost-k8s-calico--apiserver--56d545bd9b--pw449-eth0" May 8 00:40:48.345563 containerd[1558]: 2025-05-08 00:40:48.306 [INFO][4591] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:40:48.345563 containerd[1558]: 2025-05-08 00:40:48.309 [INFO][4591] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:40:48.345563 containerd[1558]: 2025-05-08 00:40:48.315 [WARNING][4591] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="488830d2fa408d4ed9617969e45febb93e6ac64e7027f02910c5031ada81264f" HandleID="k8s-pod-network.488830d2fa408d4ed9617969e45febb93e6ac64e7027f02910c5031ada81264f" Workload="localhost-k8s-calico--apiserver--56d545bd9b--pw449-eth0" May 8 00:40:48.345563 containerd[1558]: 2025-05-08 00:40:48.315 [INFO][4591] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="488830d2fa408d4ed9617969e45febb93e6ac64e7027f02910c5031ada81264f" HandleID="k8s-pod-network.488830d2fa408d4ed9617969e45febb93e6ac64e7027f02910c5031ada81264f" Workload="localhost-k8s-calico--apiserver--56d545bd9b--pw449-eth0" May 8 00:40:48.345563 containerd[1558]: 2025-05-08 00:40:48.341 [INFO][4591] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:40:48.345563 containerd[1558]: 2025-05-08 00:40:48.343 [INFO][4566] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="488830d2fa408d4ed9617969e45febb93e6ac64e7027f02910c5031ada81264f" May 8 00:40:48.346146 containerd[1558]: time="2025-05-08T00:40:48.345709182Z" level=info msg="TearDown network for sandbox \"488830d2fa408d4ed9617969e45febb93e6ac64e7027f02910c5031ada81264f\" successfully" May 8 00:40:48.346146 containerd[1558]: time="2025-05-08T00:40:48.345731324Z" level=info msg="StopPodSandbox for \"488830d2fa408d4ed9617969e45febb93e6ac64e7027f02910c5031ada81264f\" returns successfully" May 8 00:40:48.346416 containerd[1558]: time="2025-05-08T00:40:48.346396945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56d545bd9b-pw449,Uid:0382afa2-c5cb-48de-9957-0becdf36fe1b,Namespace:calico-apiserver,Attempt:1,}" May 8 00:40:48.349228 systemd[1]: run-netns-cni\x2d292f8a9c\x2ddf9d\x2d85b5\x2dc561\x2dbeff2ecbc26f.mount: Deactivated successfully. May 8 00:40:48.352758 kubelet[2735]: E0508 00:40:48.352725 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:48.356866 containerd[1558]: 2025-05-08 00:40:48.277 [INFO][4568] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c6cc7d2016316c97ad165a455ea29274388000545cdc06a25b41efcf7b7b3f29" May 8 00:40:48.356866 containerd[1558]: 2025-05-08 00:40:48.277 [INFO][4568] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c6cc7d2016316c97ad165a455ea29274388000545cdc06a25b41efcf7b7b3f29" iface="eth0" netns="/var/run/netns/cni-c1bd2836-77ac-9e77-e30c-61777d9d1988" May 8 00:40:48.356866 containerd[1558]: 2025-05-08 00:40:48.278 [INFO][4568] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c6cc7d2016316c97ad165a455ea29274388000545cdc06a25b41efcf7b7b3f29" iface="eth0" netns="/var/run/netns/cni-c1bd2836-77ac-9e77-e30c-61777d9d1988" May 8 00:40:48.356866 containerd[1558]: 2025-05-08 00:40:48.279 [INFO][4568] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c6cc7d2016316c97ad165a455ea29274388000545cdc06a25b41efcf7b7b3f29" iface="eth0" netns="/var/run/netns/cni-c1bd2836-77ac-9e77-e30c-61777d9d1988" May 8 00:40:48.356866 containerd[1558]: 2025-05-08 00:40:48.279 [INFO][4568] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c6cc7d2016316c97ad165a455ea29274388000545cdc06a25b41efcf7b7b3f29" May 8 00:40:48.356866 containerd[1558]: 2025-05-08 00:40:48.279 [INFO][4568] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c6cc7d2016316c97ad165a455ea29274388000545cdc06a25b41efcf7b7b3f29" May 8 00:40:48.356866 containerd[1558]: 2025-05-08 00:40:48.310 [INFO][4593] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c6cc7d2016316c97ad165a455ea29274388000545cdc06a25b41efcf7b7b3f29" HandleID="k8s-pod-network.c6cc7d2016316c97ad165a455ea29274388000545cdc06a25b41efcf7b7b3f29" Workload="localhost-k8s-csi--node--driver--n25d4-eth0" May 8 00:40:48.356866 containerd[1558]: 2025-05-08 00:40:48.310 [INFO][4593] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:40:48.356866 containerd[1558]: 2025-05-08 00:40:48.341 [INFO][4593] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:40:48.356866 containerd[1558]: 2025-05-08 00:40:48.348 [WARNING][4593] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c6cc7d2016316c97ad165a455ea29274388000545cdc06a25b41efcf7b7b3f29" HandleID="k8s-pod-network.c6cc7d2016316c97ad165a455ea29274388000545cdc06a25b41efcf7b7b3f29" Workload="localhost-k8s-csi--node--driver--n25d4-eth0" May 8 00:40:48.356866 containerd[1558]: 2025-05-08 00:40:48.348 [INFO][4593] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c6cc7d2016316c97ad165a455ea29274388000545cdc06a25b41efcf7b7b3f29" HandleID="k8s-pod-network.c6cc7d2016316c97ad165a455ea29274388000545cdc06a25b41efcf7b7b3f29" Workload="localhost-k8s-csi--node--driver--n25d4-eth0" May 8 00:40:48.356866 containerd[1558]: 2025-05-08 00:40:48.350 [INFO][4593] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:40:48.356866 containerd[1558]: 2025-05-08 00:40:48.352 [INFO][4568] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c6cc7d2016316c97ad165a455ea29274388000545cdc06a25b41efcf7b7b3f29" May 8 00:40:48.356866 containerd[1558]: time="2025-05-08T00:40:48.356527651Z" level=info msg="TearDown network for sandbox \"c6cc7d2016316c97ad165a455ea29274388000545cdc06a25b41efcf7b7b3f29\" successfully" May 8 00:40:48.356866 containerd[1558]: time="2025-05-08T00:40:48.356552449Z" level=info msg="StopPodSandbox for \"c6cc7d2016316c97ad165a455ea29274388000545cdc06a25b41efcf7b7b3f29\" returns successfully" May 8 00:40:48.358597 containerd[1558]: time="2025-05-08T00:40:48.358328784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n25d4,Uid:401667bd-ccb8-4edb-be5e-e0e65fa30964,Namespace:calico-system,Attempt:1,}" May 8 00:40:48.359656 systemd[1]: run-netns-cni\x2dc1bd2836\x2d77ac\x2d9e77\x2de30c\x2d61777d9d1988.mount: Deactivated successfully. 
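The run-netns-cni\x2d... mount units systemd reports during these teardowns are escaped paths: systemd encodes "-" inside a path component as \x2d and uses bare "-" as the "/" separator, so the unit above names /run/netns/cni-c1bd2836-77ac-9e77-e30c-61777d9d1988. A small decoder for reading such unit names, handling only the escapes that occur in this log rather than the full systemd-escape grammar:

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    // unescapeUnit undoes systemd's \xNN escaping and turns the
    // remaining "-" separators back into "/".
    func unescapeUnit(name string) string {
    	var b strings.Builder
    	for i := 0; i < len(name); i++ {
    		if name[i] == '\\' && i+3 < len(name) && name[i+1] == 'x' {
    			if n, err := strconv.ParseUint(name[i+2:i+4], 16, 8); err == nil {
    				b.WriteByte(byte(n))
    				i += 3
    				continue
    			}
    		}
    		if name[i] == '-' {
    			b.WriteByte('/')
    			continue
    		}
    		b.WriteByte(name[i])
    	}
    	return "/" + b.String()
    }

    func main() {
    	name := `run-netns-cni\x2dc1bd2836\x2d77ac\x2d9e77\x2de30c\x2d61777d9d1988.mount`
    	fmt.Println(unescapeUnit(strings.TrimSuffix(name, ".mount")))
    	// /run/netns/cni-c1bd2836-77ac-9e77-e30c-61777d9d1988
    }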
May 8 00:40:48.858819 systemd-networkd[1243]: cali9ded22b19e7: Link UP May 8 00:40:48.859188 systemd-networkd[1243]: cali9ded22b19e7: Gained carrier May 8 00:40:48.879002 containerd[1558]: 2025-05-08 00:40:48.773 [INFO][4618] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--56d545bd9b--bs27k-eth0 calico-apiserver-56d545bd9b- calico-apiserver 341e9812-ca08-4b8f-b246-09289190e736 884 0 2025-05-08 00:40:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:56d545bd9b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-56d545bd9b-bs27k eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9ded22b19e7 [] []}} ContainerID="96d3e9f0c1e1f4298915c6b85fb6bcaf392a12c4503cd501ade24e8010588388" Namespace="calico-apiserver" Pod="calico-apiserver-56d545bd9b-bs27k" WorkloadEndpoint="localhost-k8s-calico--apiserver--56d545bd9b--bs27k-" May 8 00:40:48.879002 containerd[1558]: 2025-05-08 00:40:48.773 [INFO][4618] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="96d3e9f0c1e1f4298915c6b85fb6bcaf392a12c4503cd501ade24e8010588388" Namespace="calico-apiserver" Pod="calico-apiserver-56d545bd9b-bs27k" WorkloadEndpoint="localhost-k8s-calico--apiserver--56d545bd9b--bs27k-eth0" May 8 00:40:48.879002 containerd[1558]: 2025-05-08 00:40:48.809 [INFO][4632] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="96d3e9f0c1e1f4298915c6b85fb6bcaf392a12c4503cd501ade24e8010588388" HandleID="k8s-pod-network.96d3e9f0c1e1f4298915c6b85fb6bcaf392a12c4503cd501ade24e8010588388" Workload="localhost-k8s-calico--apiserver--56d545bd9b--bs27k-eth0" May 8 00:40:48.879002 containerd[1558]: 2025-05-08 00:40:48.819 [INFO][4632] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="96d3e9f0c1e1f4298915c6b85fb6bcaf392a12c4503cd501ade24e8010588388" HandleID="k8s-pod-network.96d3e9f0c1e1f4298915c6b85fb6bcaf392a12c4503cd501ade24e8010588388" Workload="localhost-k8s-calico--apiserver--56d545bd9b--bs27k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003756c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-56d545bd9b-bs27k", "timestamp":"2025-05-08 00:40:48.808993992 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:40:48.879002 containerd[1558]: 2025-05-08 00:40:48.819 [INFO][4632] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:40:48.879002 containerd[1558]: 2025-05-08 00:40:48.819 [INFO][4632] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
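Each endpoint gets a host-side interface such as cali9ded22b19e7: a "cali" prefix plus a token derived from the endpoint identity, kept within the kernel's 15-character interface-name limit. The sketch below only illustrates that shape; the exact input Calico hashes is an assumption here, not something shown in the log:

    package main

    import (
    	"crypto/sha1"
    	"encoding/hex"
    	"fmt"
    )

    // vethName illustrates a "cali" + truncated-hash scheme. The prefix
    // and the 15-character IFNAMSIZ budget are real constraints; the
    // hash input chosen here is an assumption, not Calico's actual rule.
    func vethName(endpointID string) string {
    	sum := sha1.Sum([]byte(endpointID))
    	return "cali" + hex.EncodeToString(sum[:])[:11] // 4 + 11 = 15 chars
    }

    func main() {
    	fmt.Println(vethName("localhost-k8s-calico--apiserver--56d545bd9b--bs27k-eth0"))
    }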
May 8 00:40:48.879002 containerd[1558]: 2025-05-08 00:40:48.819 [INFO][4632] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 8 00:40:48.879002 containerd[1558]: 2025-05-08 00:40:48.822 [INFO][4632] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.96d3e9f0c1e1f4298915c6b85fb6bcaf392a12c4503cd501ade24e8010588388" host="localhost" May 8 00:40:48.879002 containerd[1558]: 2025-05-08 00:40:48.825 [INFO][4632] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 8 00:40:48.879002 containerd[1558]: 2025-05-08 00:40:48.830 [INFO][4632] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 8 00:40:48.879002 containerd[1558]: 2025-05-08 00:40:48.831 [INFO][4632] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 8 00:40:48.879002 containerd[1558]: 2025-05-08 00:40:48.834 [INFO][4632] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 8 00:40:48.879002 containerd[1558]: 2025-05-08 00:40:48.834 [INFO][4632] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.96d3e9f0c1e1f4298915c6b85fb6bcaf392a12c4503cd501ade24e8010588388" host="localhost" May 8 00:40:48.879002 containerd[1558]: 2025-05-08 00:40:48.835 [INFO][4632] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.96d3e9f0c1e1f4298915c6b85fb6bcaf392a12c4503cd501ade24e8010588388 May 8 00:40:48.879002 containerd[1558]: 2025-05-08 00:40:48.841 [INFO][4632] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.96d3e9f0c1e1f4298915c6b85fb6bcaf392a12c4503cd501ade24e8010588388" host="localhost" May 8 00:40:48.879002 containerd[1558]: 2025-05-08 00:40:48.851 [INFO][4632] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.96d3e9f0c1e1f4298915c6b85fb6bcaf392a12c4503cd501ade24e8010588388" host="localhost" May 8 00:40:48.879002 containerd[1558]: 2025-05-08 00:40:48.851 [INFO][4632] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.96d3e9f0c1e1f4298915c6b85fb6bcaf392a12c4503cd501ade24e8010588388" host="localhost" May 8 00:40:48.879002 containerd[1558]: 2025-05-08 00:40:48.851 [INFO][4632] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
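A block affinity ties this host to 192.168.88.128/26, that is, 2^(32-26) = 64 addresses it can assign without coordinating with other hosts' blocks; the trace above claims the second of them, 192.168.88.130. A quick check of the block bounds:

    package main

    import (
    	"fmt"
    	"net/netip"
    )

    func main() {
    	p := netip.MustParsePrefix("192.168.88.128/26")
    	size := 1 << (32 - p.Bits()) // 64 addresses in the block
    	fmt.Printf("%v holds %d addresses, %v..%v\n",
    		p, size, p.Addr(), lastAddr(p))
    }

    // lastAddr walks to the final address of a small IPv4 prefix;
    // fine for a /26, not meant for anything bigger.
    func lastAddr(p netip.Prefix) netip.Addr {
    	a := p.Addr()
    	for next := a.Next(); p.Contains(next); next = next.Next() {
    		a = next
    	}
    	return a
    }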
May 8 00:40:48.879002 containerd[1558]: 2025-05-08 00:40:48.851 [INFO][4632] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="96d3e9f0c1e1f4298915c6b85fb6bcaf392a12c4503cd501ade24e8010588388" HandleID="k8s-pod-network.96d3e9f0c1e1f4298915c6b85fb6bcaf392a12c4503cd501ade24e8010588388" Workload="localhost-k8s-calico--apiserver--56d545bd9b--bs27k-eth0" May 8 00:40:48.879597 containerd[1558]: 2025-05-08 00:40:48.855 [INFO][4618] cni-plugin/k8s.go 386: Populated endpoint ContainerID="96d3e9f0c1e1f4298915c6b85fb6bcaf392a12c4503cd501ade24e8010588388" Namespace="calico-apiserver" Pod="calico-apiserver-56d545bd9b-bs27k" WorkloadEndpoint="localhost-k8s-calico--apiserver--56d545bd9b--bs27k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--56d545bd9b--bs27k-eth0", GenerateName:"calico-apiserver-56d545bd9b-", Namespace:"calico-apiserver", SelfLink:"", UID:"341e9812-ca08-4b8f-b246-09289190e736", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 40, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56d545bd9b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-56d545bd9b-bs27k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9ded22b19e7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:40:48.879597 containerd[1558]: 2025-05-08 00:40:48.856 [INFO][4618] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="96d3e9f0c1e1f4298915c6b85fb6bcaf392a12c4503cd501ade24e8010588388" Namespace="calico-apiserver" Pod="calico-apiserver-56d545bd9b-bs27k" WorkloadEndpoint="localhost-k8s-calico--apiserver--56d545bd9b--bs27k-eth0" May 8 00:40:48.879597 containerd[1558]: 2025-05-08 00:40:48.856 [INFO][4618] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9ded22b19e7 ContainerID="96d3e9f0c1e1f4298915c6b85fb6bcaf392a12c4503cd501ade24e8010588388" Namespace="calico-apiserver" Pod="calico-apiserver-56d545bd9b-bs27k" WorkloadEndpoint="localhost-k8s-calico--apiserver--56d545bd9b--bs27k-eth0" May 8 00:40:48.879597 containerd[1558]: 2025-05-08 00:40:48.859 [INFO][4618] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="96d3e9f0c1e1f4298915c6b85fb6bcaf392a12c4503cd501ade24e8010588388" Namespace="calico-apiserver" Pod="calico-apiserver-56d545bd9b-bs27k" WorkloadEndpoint="localhost-k8s-calico--apiserver--56d545bd9b--bs27k-eth0" May 8 00:40:48.879597 containerd[1558]: 2025-05-08 00:40:48.859 [INFO][4618] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="96d3e9f0c1e1f4298915c6b85fb6bcaf392a12c4503cd501ade24e8010588388" 
Namespace="calico-apiserver" Pod="calico-apiserver-56d545bd9b-bs27k" WorkloadEndpoint="localhost-k8s-calico--apiserver--56d545bd9b--bs27k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--56d545bd9b--bs27k-eth0", GenerateName:"calico-apiserver-56d545bd9b-", Namespace:"calico-apiserver", SelfLink:"", UID:"341e9812-ca08-4b8f-b246-09289190e736", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 40, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56d545bd9b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"96d3e9f0c1e1f4298915c6b85fb6bcaf392a12c4503cd501ade24e8010588388", Pod:"calico-apiserver-56d545bd9b-bs27k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9ded22b19e7", MAC:"72:03:07:ae:13:24", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:40:48.879597 containerd[1558]: 2025-05-08 00:40:48.875 [INFO][4618] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="96d3e9f0c1e1f4298915c6b85fb6bcaf392a12c4503cd501ade24e8010588388" Namespace="calico-apiserver" Pod="calico-apiserver-56d545bd9b-bs27k" WorkloadEndpoint="localhost-k8s-calico--apiserver--56d545bd9b--bs27k-eth0" May 8 00:40:48.924432 containerd[1558]: time="2025-05-08T00:40:48.924200163Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:40:48.924432 containerd[1558]: time="2025-05-08T00:40:48.924377849Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:40:48.924432 containerd[1558]: time="2025-05-08T00:40:48.924391926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:48.924656 containerd[1558]: time="2025-05-08T00:40:48.924605380Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:48.953799 systemd-resolved[1461]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:40:48.991496 containerd[1558]: time="2025-05-08T00:40:48.990635128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56d545bd9b-bs27k,Uid:341e9812-ca08-4b8f-b246-09289190e736,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"96d3e9f0c1e1f4298915c6b85fb6bcaf392a12c4503cd501ade24e8010588388\"" May 8 00:40:48.993042 containerd[1558]: time="2025-05-08T00:40:48.992089403Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 8 00:40:48.998499 systemd-networkd[1243]: calie585983f6a9: Link UP May 8 00:40:48.999742 systemd-networkd[1243]: calie585983f6a9: Gained carrier May 8 00:40:49.028288 containerd[1558]: 2025-05-08 00:40:48.854 [INFO][4639] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--56d545bd9b--pw449-eth0 calico-apiserver-56d545bd9b- calico-apiserver 0382afa2-c5cb-48de-9957-0becdf36fe1b 883 0 2025-05-08 00:40:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:56d545bd9b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-56d545bd9b-pw449 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie585983f6a9 [] []}} ContainerID="bdb4250500869f93cde1a6d923d204d4df113d552edbbc60b25328432ee58ddf" Namespace="calico-apiserver" Pod="calico-apiserver-56d545bd9b-pw449" WorkloadEndpoint="localhost-k8s-calico--apiserver--56d545bd9b--pw449-" May 8 00:40:49.028288 containerd[1558]: 2025-05-08 00:40:48.854 [INFO][4639] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="bdb4250500869f93cde1a6d923d204d4df113d552edbbc60b25328432ee58ddf" Namespace="calico-apiserver" Pod="calico-apiserver-56d545bd9b-pw449" WorkloadEndpoint="localhost-k8s-calico--apiserver--56d545bd9b--pw449-eth0" May 8 00:40:49.028288 containerd[1558]: 2025-05-08 00:40:48.900 [INFO][4671] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bdb4250500869f93cde1a6d923d204d4df113d552edbbc60b25328432ee58ddf" HandleID="k8s-pod-network.bdb4250500869f93cde1a6d923d204d4df113d552edbbc60b25328432ee58ddf" Workload="localhost-k8s-calico--apiserver--56d545bd9b--pw449-eth0" May 8 00:40:49.028288 containerd[1558]: 2025-05-08 00:40:48.911 [INFO][4671] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bdb4250500869f93cde1a6d923d204d4df113d552edbbc60b25328432ee58ddf" HandleID="k8s-pod-network.bdb4250500869f93cde1a6d923d204d4df113d552edbbc60b25328432ee58ddf" Workload="localhost-k8s-calico--apiserver--56d545bd9b--pw449-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000365a00), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-56d545bd9b-pw449", "timestamp":"2025-05-08 00:40:48.900218661 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:40:49.028288 containerd[1558]: 2025-05-08 00:40:48.911 [INFO][4671] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM 
lock. May 8 00:40:49.028288 containerd[1558]: 2025-05-08 00:40:48.911 [INFO][4671] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:40:49.028288 containerd[1558]: 2025-05-08 00:40:48.911 [INFO][4671] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 8 00:40:49.028288 containerd[1558]: 2025-05-08 00:40:48.913 [INFO][4671] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bdb4250500869f93cde1a6d923d204d4df113d552edbbc60b25328432ee58ddf" host="localhost" May 8 00:40:49.028288 containerd[1558]: 2025-05-08 00:40:48.917 [INFO][4671] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 8 00:40:49.028288 containerd[1558]: 2025-05-08 00:40:48.921 [INFO][4671] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 8 00:40:49.028288 containerd[1558]: 2025-05-08 00:40:48.959 [INFO][4671] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 8 00:40:49.028288 containerd[1558]: 2025-05-08 00:40:48.963 [INFO][4671] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 8 00:40:49.028288 containerd[1558]: 2025-05-08 00:40:48.963 [INFO][4671] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bdb4250500869f93cde1a6d923d204d4df113d552edbbc60b25328432ee58ddf" host="localhost" May 8 00:40:49.028288 containerd[1558]: 2025-05-08 00:40:48.965 [INFO][4671] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.bdb4250500869f93cde1a6d923d204d4df113d552edbbc60b25328432ee58ddf May 8 00:40:49.028288 containerd[1558]: 2025-05-08 00:40:48.977 [INFO][4671] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bdb4250500869f93cde1a6d923d204d4df113d552edbbc60b25328432ee58ddf" host="localhost" May 8 00:40:49.028288 containerd[1558]: 2025-05-08 00:40:48.988 [INFO][4671] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.bdb4250500869f93cde1a6d923d204d4df113d552edbbc60b25328432ee58ddf" host="localhost" May 8 00:40:49.028288 containerd[1558]: 2025-05-08 00:40:48.989 [INFO][4671] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.bdb4250500869f93cde1a6d923d204d4df113d552edbbc60b25328432ee58ddf" host="localhost" May 8 00:40:49.028288 containerd[1558]: 2025-05-08 00:40:48.989 [INFO][4671] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
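The "Creating new handle" step above is the other half of the teardown behaviour seen earlier in this log: addresses are claimed under a handle named after the sandbox (k8s-pod-network.<sandbox-id>) and later released by that same handle, and a handle with nothing under it only produces the WARNING "Asked to release address but it doesn't exist. Ignoring", because CNI DEL must be idempotent. A sketch of that contract, with shortened IDs for readability:

    package main

    import "fmt"

    // handleStore stands in for the IPAM datastore mapping handles to
    // the addresses claimed under them.
    type handleStore map[string][]string

    // releaseByHandle is idempotent: a missing handle is a warning,
    // not an error, matching the WARNING lines in the teardown traces.
    func (s handleStore) releaseByHandle(handle string) {
    	addrs, ok := s[handle]
    	if !ok {
    		fmt.Printf("WARNING: asked to release %s but it doesn't exist; ignoring\n", handle)
    		return
    	}
    	delete(s, handle)
    	fmt.Printf("released %v\n", addrs)
    }

    func main() {
    	s := handleStore{"k8s-pod-network.e4059db4": {"192.168.88.129"}}
    	s.releaseByHandle("k8s-pod-network.e4059db4") // releases the address
    	s.releaseByHandle("k8s-pod-network.e4059db4") // warns, succeeds anyway
    }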
May 8 00:40:49.028288 containerd[1558]: 2025-05-08 00:40:48.989 [INFO][4671] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="bdb4250500869f93cde1a6d923d204d4df113d552edbbc60b25328432ee58ddf" HandleID="k8s-pod-network.bdb4250500869f93cde1a6d923d204d4df113d552edbbc60b25328432ee58ddf" Workload="localhost-k8s-calico--apiserver--56d545bd9b--pw449-eth0" May 8 00:40:49.029507 containerd[1558]: 2025-05-08 00:40:48.995 [INFO][4639] cni-plugin/k8s.go 386: Populated endpoint ContainerID="bdb4250500869f93cde1a6d923d204d4df113d552edbbc60b25328432ee58ddf" Namespace="calico-apiserver" Pod="calico-apiserver-56d545bd9b-pw449" WorkloadEndpoint="localhost-k8s-calico--apiserver--56d545bd9b--pw449-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--56d545bd9b--pw449-eth0", GenerateName:"calico-apiserver-56d545bd9b-", Namespace:"calico-apiserver", SelfLink:"", UID:"0382afa2-c5cb-48de-9957-0becdf36fe1b", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 40, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56d545bd9b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-56d545bd9b-pw449", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie585983f6a9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:40:49.029507 containerd[1558]: 2025-05-08 00:40:48.995 [INFO][4639] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="bdb4250500869f93cde1a6d923d204d4df113d552edbbc60b25328432ee58ddf" Namespace="calico-apiserver" Pod="calico-apiserver-56d545bd9b-pw449" WorkloadEndpoint="localhost-k8s-calico--apiserver--56d545bd9b--pw449-eth0" May 8 00:40:49.029507 containerd[1558]: 2025-05-08 00:40:48.995 [INFO][4639] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie585983f6a9 ContainerID="bdb4250500869f93cde1a6d923d204d4df113d552edbbc60b25328432ee58ddf" Namespace="calico-apiserver" Pod="calico-apiserver-56d545bd9b-pw449" WorkloadEndpoint="localhost-k8s-calico--apiserver--56d545bd9b--pw449-eth0" May 8 00:40:49.029507 containerd[1558]: 2025-05-08 00:40:48.999 [INFO][4639] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bdb4250500869f93cde1a6d923d204d4df113d552edbbc60b25328432ee58ddf" Namespace="calico-apiserver" Pod="calico-apiserver-56d545bd9b-pw449" WorkloadEndpoint="localhost-k8s-calico--apiserver--56d545bd9b--pw449-eth0" May 8 00:40:49.029507 containerd[1558]: 2025-05-08 00:40:48.999 [INFO][4639] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="bdb4250500869f93cde1a6d923d204d4df113d552edbbc60b25328432ee58ddf" 
Namespace="calico-apiserver" Pod="calico-apiserver-56d545bd9b-pw449" WorkloadEndpoint="localhost-k8s-calico--apiserver--56d545bd9b--pw449-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--56d545bd9b--pw449-eth0", GenerateName:"calico-apiserver-56d545bd9b-", Namespace:"calico-apiserver", SelfLink:"", UID:"0382afa2-c5cb-48de-9957-0becdf36fe1b", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 40, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56d545bd9b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bdb4250500869f93cde1a6d923d204d4df113d552edbbc60b25328432ee58ddf", Pod:"calico-apiserver-56d545bd9b-pw449", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie585983f6a9", MAC:"82:e4:15:88:03:cd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:40:49.029507 containerd[1558]: 2025-05-08 00:40:49.023 [INFO][4639] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="bdb4250500869f93cde1a6d923d204d4df113d552edbbc60b25328432ee58ddf" Namespace="calico-apiserver" Pod="calico-apiserver-56d545bd9b-pw449" WorkloadEndpoint="localhost-k8s-calico--apiserver--56d545bd9b--pw449-eth0" May 8 00:40:49.065900 containerd[1558]: time="2025-05-08T00:40:49.065774306Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:40:49.065900 containerd[1558]: time="2025-05-08T00:40:49.065853266Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:40:49.065900 containerd[1558]: time="2025-05-08T00:40:49.065865659Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:49.066138 containerd[1558]: time="2025-05-08T00:40:49.066005554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:49.096238 systemd-resolved[1461]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:40:49.106210 systemd-networkd[1243]: cali8c453921ee6: Link UP May 8 00:40:49.106731 systemd-networkd[1243]: cali8c453921ee6: Gained carrier May 8 00:40:49.127751 containerd[1558]: time="2025-05-08T00:40:49.127638189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56d545bd9b-pw449,Uid:0382afa2-c5cb-48de-9957-0becdf36fe1b,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"bdb4250500869f93cde1a6d923d204d4df113d552edbbc60b25328432ee58ddf\"" May 8 00:40:49.136899 containerd[1558]: 2025-05-08 00:40:48.854 [INFO][4654] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--n25d4-eth0 csi-node-driver- calico-system 401667bd-ccb8-4edb-be5e-e0e65fa30964 882 0 2025-05-08 00:40:21 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-n25d4 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali8c453921ee6 [] []}} ContainerID="75981d12055d88bcfe390e3191cd7371654453293ecd1829cb51ad2b3773a7e7" Namespace="calico-system" Pod="csi-node-driver-n25d4" WorkloadEndpoint="localhost-k8s-csi--node--driver--n25d4-" May 8 00:40:49.136899 containerd[1558]: 2025-05-08 00:40:48.854 [INFO][4654] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="75981d12055d88bcfe390e3191cd7371654453293ecd1829cb51ad2b3773a7e7" Namespace="calico-system" Pod="csi-node-driver-n25d4" WorkloadEndpoint="localhost-k8s-csi--node--driver--n25d4-eth0" May 8 00:40:49.136899 containerd[1558]: 2025-05-08 00:40:48.906 [INFO][4676] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="75981d12055d88bcfe390e3191cd7371654453293ecd1829cb51ad2b3773a7e7" HandleID="k8s-pod-network.75981d12055d88bcfe390e3191cd7371654453293ecd1829cb51ad2b3773a7e7" Workload="localhost-k8s-csi--node--driver--n25d4-eth0" May 8 00:40:49.136899 containerd[1558]: 2025-05-08 00:40:48.914 [INFO][4676] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="75981d12055d88bcfe390e3191cd7371654453293ecd1829cb51ad2b3773a7e7" HandleID="k8s-pod-network.75981d12055d88bcfe390e3191cd7371654453293ecd1829cb51ad2b3773a7e7" Workload="localhost-k8s-csi--node--driver--n25d4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000375490), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-n25d4", "timestamp":"2025-05-08 00:40:48.906430011 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:40:49.136899 containerd[1558]: 2025-05-08 00:40:48.914 [INFO][4676] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:40:49.136899 containerd[1558]: 2025-05-08 00:40:48.989 [INFO][4676] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 00:40:49.136899 containerd[1558]: 2025-05-08 00:40:48.989 [INFO][4676] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 8 00:40:49.136899 containerd[1558]: 2025-05-08 00:40:48.992 [INFO][4676] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.75981d12055d88bcfe390e3191cd7371654453293ecd1829cb51ad2b3773a7e7" host="localhost" May 8 00:40:49.136899 containerd[1558]: 2025-05-08 00:40:49.024 [INFO][4676] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 8 00:40:49.136899 containerd[1558]: 2025-05-08 00:40:49.029 [INFO][4676] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 8 00:40:49.136899 containerd[1558]: 2025-05-08 00:40:49.032 [INFO][4676] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 8 00:40:49.136899 containerd[1558]: 2025-05-08 00:40:49.035 [INFO][4676] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 8 00:40:49.136899 containerd[1558]: 2025-05-08 00:40:49.035 [INFO][4676] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.75981d12055d88bcfe390e3191cd7371654453293ecd1829cb51ad2b3773a7e7" host="localhost" May 8 00:40:49.136899 containerd[1558]: 2025-05-08 00:40:49.037 [INFO][4676] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.75981d12055d88bcfe390e3191cd7371654453293ecd1829cb51ad2b3773a7e7 May 8 00:40:49.136899 containerd[1558]: 2025-05-08 00:40:49.047 [INFO][4676] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.75981d12055d88bcfe390e3191cd7371654453293ecd1829cb51ad2b3773a7e7" host="localhost" May 8 00:40:49.136899 containerd[1558]: 2025-05-08 00:40:49.100 [INFO][4676] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.75981d12055d88bcfe390e3191cd7371654453293ecd1829cb51ad2b3773a7e7" host="localhost" May 8 00:40:49.136899 containerd[1558]: 2025-05-08 00:40:49.100 [INFO][4676] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.75981d12055d88bcfe390e3191cd7371654453293ecd1829cb51ad2b3773a7e7" host="localhost" May 8 00:40:49.136899 containerd[1558]: 2025-05-08 00:40:49.100 [INFO][4676] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 8 00:40:49.136899 containerd[1558]: 2025-05-08 00:40:49.100 [INFO][4676] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="75981d12055d88bcfe390e3191cd7371654453293ecd1829cb51ad2b3773a7e7" HandleID="k8s-pod-network.75981d12055d88bcfe390e3191cd7371654453293ecd1829cb51ad2b3773a7e7" Workload="localhost-k8s-csi--node--driver--n25d4-eth0" May 8 00:40:49.137566 containerd[1558]: 2025-05-08 00:40:49.104 [INFO][4654] cni-plugin/k8s.go 386: Populated endpoint ContainerID="75981d12055d88bcfe390e3191cd7371654453293ecd1829cb51ad2b3773a7e7" Namespace="calico-system" Pod="csi-node-driver-n25d4" WorkloadEndpoint="localhost-k8s-csi--node--driver--n25d4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--n25d4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"401667bd-ccb8-4edb-be5e-e0e65fa30964", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 40, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-n25d4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8c453921ee6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:40:49.137566 containerd[1558]: 2025-05-08 00:40:49.104 [INFO][4654] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="75981d12055d88bcfe390e3191cd7371654453293ecd1829cb51ad2b3773a7e7" Namespace="calico-system" Pod="csi-node-driver-n25d4" WorkloadEndpoint="localhost-k8s-csi--node--driver--n25d4-eth0" May 8 00:40:49.137566 containerd[1558]: 2025-05-08 00:40:49.104 [INFO][4654] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8c453921ee6 ContainerID="75981d12055d88bcfe390e3191cd7371654453293ecd1829cb51ad2b3773a7e7" Namespace="calico-system" Pod="csi-node-driver-n25d4" WorkloadEndpoint="localhost-k8s-csi--node--driver--n25d4-eth0" May 8 00:40:49.137566 containerd[1558]: 2025-05-08 00:40:49.106 [INFO][4654] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="75981d12055d88bcfe390e3191cd7371654453293ecd1829cb51ad2b3773a7e7" Namespace="calico-system" Pod="csi-node-driver-n25d4" WorkloadEndpoint="localhost-k8s-csi--node--driver--n25d4-eth0" May 8 00:40:49.137566 containerd[1558]: 2025-05-08 00:40:49.106 [INFO][4654] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="75981d12055d88bcfe390e3191cd7371654453293ecd1829cb51ad2b3773a7e7" Namespace="calico-system" Pod="csi-node-driver-n25d4" WorkloadEndpoint="localhost-k8s-csi--node--driver--n25d4-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--n25d4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"401667bd-ccb8-4edb-be5e-e0e65fa30964", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 40, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"75981d12055d88bcfe390e3191cd7371654453293ecd1829cb51ad2b3773a7e7", Pod:"csi-node-driver-n25d4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8c453921ee6", MAC:"b2:66:19:e5:7f:a7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:40:49.137566 containerd[1558]: 2025-05-08 00:40:49.131 [INFO][4654] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="75981d12055d88bcfe390e3191cd7371654453293ecd1829cb51ad2b3773a7e7" Namespace="calico-system" Pod="csi-node-driver-n25d4" WorkloadEndpoint="localhost-k8s-csi--node--driver--n25d4-eth0" May 8 00:40:49.170755 containerd[1558]: time="2025-05-08T00:40:49.170639608Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:40:49.170755 containerd[1558]: time="2025-05-08T00:40:49.170710703Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:40:49.170755 containerd[1558]: time="2025-05-08T00:40:49.170725270Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:49.171073 containerd[1558]: time="2025-05-08T00:40:49.170828526Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:49.202461 systemd-resolved[1461]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:40:49.216093 containerd[1558]: time="2025-05-08T00:40:49.215948116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n25d4,Uid:401667bd-ccb8-4edb-be5e-e0e65fa30964,Namespace:calico-system,Attempt:1,} returns sandbox id \"75981d12055d88bcfe390e3191cd7371654453293ecd1829cb51ad2b3773a7e7\"" May 8 00:40:49.358166 kubelet[2735]: E0508 00:40:49.358132 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:49.416176 systemd[1]: Started sshd@11-10.0.0.76:22-10.0.0.1:57934.service - OpenSSH per-connection server daemon (10.0.0.1:57934). 
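The two CNI ADD sequences above show Calico's block-based IPAM end to end: each request takes the host-wide IPAM lock, confirms this node's affinity for the 192.168.88.128/26 block, claims the next free address (.131 for calico-apiserver-56d545bd9b-pw449, .132 for csi-node-driver-n25d4), writes the block back to the datastore, and releases the lock — which is why concurrent ADDs in this log serialize on "About to acquire host-wide IPAM lock". Below is a minimal sketch of the /26 arithmetic using only the Go standard library; the used set is illustrative (addresses before .131 are assumed taken by endpoints allocated earlier in the log), not Calico's actual bookkeeping.

```go
package main

import (
	"fmt"
	"net"
)

// nextFree returns the first address in the block not already handed out,
// loosely mirroring how an allocator walks a /26 block (64 addresses).
func nextFree(block *net.IPNet, used map[string]bool) net.IP {
	ones, bits := block.Mask.Size() // 26, 32 -> 1<<(32-26) = 64 addresses
	size := 1 << (bits - ones)
	ip := block.IP.To4()
	for i := 0; i < size; i++ {
		cand := net.IPv4(ip[0], ip[1], ip[2], ip[3]+byte(i)).To4()
		if !used[cand.String()] {
			return cand
		}
	}
	return nil
}

func main() {
	_, block, _ := net.ParseCIDR("192.168.88.128/26")
	used := map[string]bool{
		"192.168.88.128": true, // assumed reserved/taken before this excerpt
		"192.168.88.129": true, // assumed taken by an earlier endpoint
		"192.168.88.130": true, // assumed taken by an earlier endpoint
		"192.168.88.131": true, // calico-apiserver-56d545bd9b-pw449 (above)
		"192.168.88.132": true, // csi-node-driver-n25d4 (above)
	}
	fmt.Println(nextFree(block, used)) // 192.168.88.133 — the next claim in this log
}
```

The printed result matches the log: the next assignment recorded below is indeed 192.168.88.133, for calico-kube-controllers-579fd86b5c-djj5f.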
May 8 00:40:49.609592 sshd[4854]: Accepted publickey for core from 10.0.0.1 port 57934 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:40:49.611523 sshd[4854]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:40:49.616666 systemd-logind[1534]: New session 12 of user core. May 8 00:40:49.622318 systemd[1]: Started session-12.scope - Session 12 of User core. May 8 00:40:49.786739 sshd[4854]: pam_unix(sshd:session): session closed for user core May 8 00:40:49.794396 systemd[1]: Started sshd@12-10.0.0.76:22-10.0.0.1:57942.service - OpenSSH per-connection server daemon (10.0.0.1:57942). May 8 00:40:49.795137 systemd[1]: sshd@11-10.0.0.76:22-10.0.0.1:57934.service: Deactivated successfully. May 8 00:40:49.799023 systemd-logind[1534]: Session 12 logged out. Waiting for processes to exit. May 8 00:40:49.799658 systemd[1]: session-12.scope: Deactivated successfully. May 8 00:40:49.802884 systemd-logind[1534]: Removed session 12. May 8 00:40:49.834149 sshd[4867]: Accepted publickey for core from 10.0.0.1 port 57942 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:40:49.836023 sshd[4867]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:40:49.841280 systemd-logind[1534]: New session 13 of user core. May 8 00:40:49.852255 systemd[1]: Started session-13.scope - Session 13 of User core. May 8 00:40:50.089495 sshd[4867]: pam_unix(sshd:session): session closed for user core May 8 00:40:50.098037 systemd[1]: Started sshd@13-10.0.0.76:22-10.0.0.1:57958.service - OpenSSH per-connection server daemon (10.0.0.1:57958). May 8 00:40:50.098731 systemd[1]: sshd@12-10.0.0.76:22-10.0.0.1:57942.service: Deactivated successfully. May 8 00:40:50.103495 systemd[1]: session-13.scope: Deactivated successfully. May 8 00:40:50.107404 systemd-logind[1534]: Session 13 logged out. Waiting for processes to exit. May 8 00:40:50.108917 systemd-logind[1534]: Removed session 13. May 8 00:40:50.136565 sshd[4880]: Accepted publickey for core from 10.0.0.1 port 57958 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:40:50.138451 sshd[4880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:40:50.144237 systemd-logind[1534]: New session 14 of user core. May 8 00:40:50.154427 systemd[1]: Started session-14.scope - Session 14 of User core. May 8 00:40:50.200160 systemd-networkd[1243]: cali9ded22b19e7: Gained IPv6LL May 8 00:40:50.210371 containerd[1558]: time="2025-05-08T00:40:50.210318816Z" level=info msg="StopPodSandbox for \"b6921fae4b61d6c2187b5d8e0d354a10e3a999928c9eb266af10e4e8b643cf7e\"" May 8 00:40:50.211297 containerd[1558]: time="2025-05-08T00:40:50.211247785Z" level=info msg="StopPodSandbox for \"ead98e6b6ab43c87076ce18b207121794da2ee47ef64efc48f045267c25d06dd\"" May 8 00:40:50.264323 systemd-networkd[1243]: calie585983f6a9: Gained IPv6LL May 8 00:40:50.330201 sshd[4880]: pam_unix(sshd:session): session closed for user core May 8 00:40:50.334904 systemd[1]: sshd@13-10.0.0.76:22-10.0.0.1:57958.service: Deactivated successfully. May 8 00:40:50.337562 systemd[1]: session-14.scope: Deactivated successfully. May 8 00:40:50.337600 systemd-logind[1534]: Session 14 logged out. Waiting for processes to exit. May 8 00:40:50.338610 systemd-logind[1534]: Removed session 14. 
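The kubelet "Nameserver limits exceeded" errors that recur throughout this log mean the host's resolv.conf lists more nameservers than kubelet will propagate into pod sandboxes; kubelet keeps the first three (here 1.1.1.1, 1.0.0.1, 8.8.8.8) and omits the rest. A rough sketch of that truncation, assuming a simple resolv.conf parser (the real logic lives in kubelet's DNS configurer); the fourth nameserver in the sample input is hypothetical:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

const maxNameservers = 3 // kubelet propagates at most three nameservers per pod

// applyLimit parses resolv.conf text and drops nameservers beyond the cap,
// which is what produces the "some nameservers have been omitted" error above.
func applyLimit(resolvConf string) (applied, omitted []string) {
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) == 2 && fields[0] == "nameserver" {
			if len(applied) < maxNameservers {
				applied = append(applied, fields[1])
			} else {
				omitted = append(omitted, fields[1])
			}
		}
	}
	return
}

func main() {
	// Hypothetical host resolv.conf with one server too many.
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"
	applied, omitted := applyLimit(conf)
	fmt.Println("applied:", applied, "omitted:", omitted)
}
```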
May 8 00:40:50.382729 containerd[1558]: 2025-05-08 00:40:50.340 [INFO][4928] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b6921fae4b61d6c2187b5d8e0d354a10e3a999928c9eb266af10e4e8b643cf7e" May 8 00:40:50.382729 containerd[1558]: 2025-05-08 00:40:50.340 [INFO][4928] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b6921fae4b61d6c2187b5d8e0d354a10e3a999928c9eb266af10e4e8b643cf7e" iface="eth0" netns="/var/run/netns/cni-9ed5df47-e74b-ee1e-a1d3-27915308eaa7" May 8 00:40:50.382729 containerd[1558]: 2025-05-08 00:40:50.340 [INFO][4928] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b6921fae4b61d6c2187b5d8e0d354a10e3a999928c9eb266af10e4e8b643cf7e" iface="eth0" netns="/var/run/netns/cni-9ed5df47-e74b-ee1e-a1d3-27915308eaa7" May 8 00:40:50.382729 containerd[1558]: 2025-05-08 00:40:50.340 [INFO][4928] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b6921fae4b61d6c2187b5d8e0d354a10e3a999928c9eb266af10e4e8b643cf7e" iface="eth0" netns="/var/run/netns/cni-9ed5df47-e74b-ee1e-a1d3-27915308eaa7" May 8 00:40:50.382729 containerd[1558]: 2025-05-08 00:40:50.340 [INFO][4928] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b6921fae4b61d6c2187b5d8e0d354a10e3a999928c9eb266af10e4e8b643cf7e" May 8 00:40:50.382729 containerd[1558]: 2025-05-08 00:40:50.340 [INFO][4928] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b6921fae4b61d6c2187b5d8e0d354a10e3a999928c9eb266af10e4e8b643cf7e" May 8 00:40:50.382729 containerd[1558]: 2025-05-08 00:40:50.369 [INFO][4954] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b6921fae4b61d6c2187b5d8e0d354a10e3a999928c9eb266af10e4e8b643cf7e" HandleID="k8s-pod-network.b6921fae4b61d6c2187b5d8e0d354a10e3a999928c9eb266af10e4e8b643cf7e" Workload="localhost-k8s-calico--kube--controllers--579fd86b5c--djj5f-eth0" May 8 00:40:50.382729 containerd[1558]: 2025-05-08 00:40:50.369 [INFO][4954] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:40:50.382729 containerd[1558]: 2025-05-08 00:40:50.369 [INFO][4954] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:40:50.382729 containerd[1558]: 2025-05-08 00:40:50.374 [WARNING][4954] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b6921fae4b61d6c2187b5d8e0d354a10e3a999928c9eb266af10e4e8b643cf7e" HandleID="k8s-pod-network.b6921fae4b61d6c2187b5d8e0d354a10e3a999928c9eb266af10e4e8b643cf7e" Workload="localhost-k8s-calico--kube--controllers--579fd86b5c--djj5f-eth0" May 8 00:40:50.382729 containerd[1558]: 2025-05-08 00:40:50.374 [INFO][4954] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b6921fae4b61d6c2187b5d8e0d354a10e3a999928c9eb266af10e4e8b643cf7e" HandleID="k8s-pod-network.b6921fae4b61d6c2187b5d8e0d354a10e3a999928c9eb266af10e4e8b643cf7e" Workload="localhost-k8s-calico--kube--controllers--579fd86b5c--djj5f-eth0" May 8 00:40:50.382729 containerd[1558]: 2025-05-08 00:40:50.376 [INFO][4954] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:40:50.382729 containerd[1558]: 2025-05-08 00:40:50.378 [INFO][4928] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="b6921fae4b61d6c2187b5d8e0d354a10e3a999928c9eb266af10e4e8b643cf7e" May 8 00:40:50.384122 containerd[1558]: time="2025-05-08T00:40:50.383913166Z" level=info msg="TearDown network for sandbox \"b6921fae4b61d6c2187b5d8e0d354a10e3a999928c9eb266af10e4e8b643cf7e\" successfully" May 8 00:40:50.385216 systemd[1]: run-netns-cni\x2d9ed5df47\x2de74b\x2dee1e\x2da1d3\x2d27915308eaa7.mount: Deactivated successfully. May 8 00:40:50.394361 containerd[1558]: time="2025-05-08T00:40:50.394308708Z" level=info msg="StopPodSandbox for \"b6921fae4b61d6c2187b5d8e0d354a10e3a999928c9eb266af10e4e8b643cf7e\" returns successfully" May 8 00:40:50.395576 containerd[1558]: time="2025-05-08T00:40:50.395551902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-579fd86b5c-djj5f,Uid:b19fe2b2-8765-481d-bbeb-26dd8f95c7c3,Namespace:calico-system,Attempt:1,}" May 8 00:40:50.396182 containerd[1558]: 2025-05-08 00:40:50.348 [INFO][4927] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ead98e6b6ab43c87076ce18b207121794da2ee47ef64efc48f045267c25d06dd" May 8 00:40:50.396182 containerd[1558]: 2025-05-08 00:40:50.349 [INFO][4927] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ead98e6b6ab43c87076ce18b207121794da2ee47ef64efc48f045267c25d06dd" iface="eth0" netns="/var/run/netns/cni-a78da7e9-c21f-2302-0098-ce8d966c28a3" May 8 00:40:50.396182 containerd[1558]: 2025-05-08 00:40:50.349 [INFO][4927] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ead98e6b6ab43c87076ce18b207121794da2ee47ef64efc48f045267c25d06dd" iface="eth0" netns="/var/run/netns/cni-a78da7e9-c21f-2302-0098-ce8d966c28a3" May 8 00:40:50.396182 containerd[1558]: 2025-05-08 00:40:50.349 [INFO][4927] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ead98e6b6ab43c87076ce18b207121794da2ee47ef64efc48f045267c25d06dd" iface="eth0" netns="/var/run/netns/cni-a78da7e9-c21f-2302-0098-ce8d966c28a3" May 8 00:40:50.396182 containerd[1558]: 2025-05-08 00:40:50.349 [INFO][4927] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ead98e6b6ab43c87076ce18b207121794da2ee47ef64efc48f045267c25d06dd" May 8 00:40:50.396182 containerd[1558]: 2025-05-08 00:40:50.349 [INFO][4927] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ead98e6b6ab43c87076ce18b207121794da2ee47ef64efc48f045267c25d06dd" May 8 00:40:50.396182 containerd[1558]: 2025-05-08 00:40:50.372 [INFO][4960] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ead98e6b6ab43c87076ce18b207121794da2ee47ef64efc48f045267c25d06dd" HandleID="k8s-pod-network.ead98e6b6ab43c87076ce18b207121794da2ee47ef64efc48f045267c25d06dd" Workload="localhost-k8s-coredns--7db6d8ff4d--vtqkm-eth0" May 8 00:40:50.396182 containerd[1558]: 2025-05-08 00:40:50.372 [INFO][4960] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:40:50.396182 containerd[1558]: 2025-05-08 00:40:50.376 [INFO][4960] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:40:50.396182 containerd[1558]: 2025-05-08 00:40:50.384 [WARNING][4960] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ead98e6b6ab43c87076ce18b207121794da2ee47ef64efc48f045267c25d06dd" HandleID="k8s-pod-network.ead98e6b6ab43c87076ce18b207121794da2ee47ef64efc48f045267c25d06dd" Workload="localhost-k8s-coredns--7db6d8ff4d--vtqkm-eth0" May 8 00:40:50.396182 containerd[1558]: 2025-05-08 00:40:50.384 [INFO][4960] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ead98e6b6ab43c87076ce18b207121794da2ee47ef64efc48f045267c25d06dd" HandleID="k8s-pod-network.ead98e6b6ab43c87076ce18b207121794da2ee47ef64efc48f045267c25d06dd" Workload="localhost-k8s-coredns--7db6d8ff4d--vtqkm-eth0" May 8 00:40:50.396182 containerd[1558]: 2025-05-08 00:40:50.386 [INFO][4960] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:40:50.396182 containerd[1558]: 2025-05-08 00:40:50.392 [INFO][4927] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ead98e6b6ab43c87076ce18b207121794da2ee47ef64efc48f045267c25d06dd" May 8 00:40:50.396662 containerd[1558]: time="2025-05-08T00:40:50.396618483Z" level=info msg="TearDown network for sandbox \"ead98e6b6ab43c87076ce18b207121794da2ee47ef64efc48f045267c25d06dd\" successfully" May 8 00:40:50.396662 containerd[1558]: time="2025-05-08T00:40:50.396643832Z" level=info msg="StopPodSandbox for \"ead98e6b6ab43c87076ce18b207121794da2ee47ef64efc48f045267c25d06dd\" returns successfully" May 8 00:40:50.397553 kubelet[2735]: E0508 00:40:50.397351 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:50.398035 containerd[1558]: time="2025-05-08T00:40:50.397712014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vtqkm,Uid:19690749-3374-43c7-ba3a-50053c69dd38,Namespace:kube-system,Attempt:1,}" May 8 00:40:50.399685 systemd[1]: run-netns-cni\x2da78da7e9\x2dc21f\x2d2302\x2d0098\x2dce8d966c28a3.mount: Deactivated successfully. 
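The two sandbox teardowns above (ContainerIDs b6921fae… and ead98e6b…) both release IPs in two steps: first keyed by handleID, where a missing allocation is only a WARNING ("Asked to release address but it doesn't exist. Ignoring"), then keyed by workloadID, apparently as a fallback for allocations not recorded under a handle. A sketch of that fallback shape, with hypothetical function names standing in for the datastore calls — this is not Calico's API:

```go
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("allocation not found")

// releaseByHandle and releaseByWorkload stand in for the two datastore
// lookups the plugin tries; both names are illustrative.
func releaseByHandle(handleID string) error    { return errNotFound }
func releaseByWorkload(workloadID string) error { return nil }

// releaseIPs mirrors the log above: a missing handle allocation is only a
// warning, and the plugin falls through to the workload-keyed release.
func releaseIPs(handleID, workloadID string) error {
	if err := releaseByHandle(handleID); err != nil {
		if !errors.Is(err, errNotFound) {
			return err
		}
		fmt.Println("WARNING: asked to release address but it doesn't exist. Ignoring")
	}
	return releaseByWorkload(workloadID)
}

func main() {
	_ = releaseIPs(
		"k8s-pod-network.b6921fae4b61d6c2187b5d8e0d354a10e3a999928c9eb266af10e4e8b643cf7e",
		"localhost-k8s-calico--kube--controllers--579fd86b5c--djj5f-eth0",
	)
}
```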
May 8 00:40:50.457695 systemd-networkd[1243]: cali8c453921ee6: Gained IPv6LL May 8 00:40:50.779420 systemd-networkd[1243]: cali5d10b4cb8c9: Link UP May 8 00:40:50.780039 systemd-networkd[1243]: cali5d10b4cb8c9: Gained carrier May 8 00:40:50.861071 containerd[1558]: 2025-05-08 00:40:50.495 [INFO][4971] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--579fd86b5c--djj5f-eth0 calico-kube-controllers-579fd86b5c- calico-system b19fe2b2-8765-481d-bbeb-26dd8f95c7c3 925 0 2025-05-08 00:40:21 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:579fd86b5c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-579fd86b5c-djj5f eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali5d10b4cb8c9 [] []}} ContainerID="bd6bde2476d13b80b28349c663b8fd02009881ddbc751070a73fd0040ea60564" Namespace="calico-system" Pod="calico-kube-controllers-579fd86b5c-djj5f" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--579fd86b5c--djj5f-" May 8 00:40:50.861071 containerd[1558]: 2025-05-08 00:40:50.495 [INFO][4971] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="bd6bde2476d13b80b28349c663b8fd02009881ddbc751070a73fd0040ea60564" Namespace="calico-system" Pod="calico-kube-controllers-579fd86b5c-djj5f" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--579fd86b5c--djj5f-eth0" May 8 00:40:50.861071 containerd[1558]: 2025-05-08 00:40:50.529 [INFO][4998] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bd6bde2476d13b80b28349c663b8fd02009881ddbc751070a73fd0040ea60564" HandleID="k8s-pod-network.bd6bde2476d13b80b28349c663b8fd02009881ddbc751070a73fd0040ea60564" Workload="localhost-k8s-calico--kube--controllers--579fd86b5c--djj5f-eth0" May 8 00:40:50.861071 containerd[1558]: 2025-05-08 00:40:50.539 [INFO][4998] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bd6bde2476d13b80b28349c663b8fd02009881ddbc751070a73fd0040ea60564" HandleID="k8s-pod-network.bd6bde2476d13b80b28349c663b8fd02009881ddbc751070a73fd0040ea60564" Workload="localhost-k8s-calico--kube--controllers--579fd86b5c--djj5f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051160), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-579fd86b5c-djj5f", "timestamp":"2025-05-08 00:40:50.529241144 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:40:50.861071 containerd[1558]: 2025-05-08 00:40:50.539 [INFO][4998] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:40:50.861071 containerd[1558]: 2025-05-08 00:40:50.539 [INFO][4998] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 00:40:50.861071 containerd[1558]: 2025-05-08 00:40:50.539 [INFO][4998] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 8 00:40:50.861071 containerd[1558]: 2025-05-08 00:40:50.541 [INFO][4998] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bd6bde2476d13b80b28349c663b8fd02009881ddbc751070a73fd0040ea60564" host="localhost" May 8 00:40:50.861071 containerd[1558]: 2025-05-08 00:40:50.564 [INFO][4998] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 8 00:40:50.861071 containerd[1558]: 2025-05-08 00:40:50.568 [INFO][4998] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 8 00:40:50.861071 containerd[1558]: 2025-05-08 00:40:50.569 [INFO][4998] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 8 00:40:50.861071 containerd[1558]: 2025-05-08 00:40:50.572 [INFO][4998] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 8 00:40:50.861071 containerd[1558]: 2025-05-08 00:40:50.572 [INFO][4998] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bd6bde2476d13b80b28349c663b8fd02009881ddbc751070a73fd0040ea60564" host="localhost" May 8 00:40:50.861071 containerd[1558]: 2025-05-08 00:40:50.573 [INFO][4998] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.bd6bde2476d13b80b28349c663b8fd02009881ddbc751070a73fd0040ea60564 May 8 00:40:50.861071 containerd[1558]: 2025-05-08 00:40:50.610 [INFO][4998] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bd6bde2476d13b80b28349c663b8fd02009881ddbc751070a73fd0040ea60564" host="localhost" May 8 00:40:50.861071 containerd[1558]: 2025-05-08 00:40:50.774 [INFO][4998] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.bd6bde2476d13b80b28349c663b8fd02009881ddbc751070a73fd0040ea60564" host="localhost" May 8 00:40:50.861071 containerd[1558]: 2025-05-08 00:40:50.775 [INFO][4998] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.bd6bde2476d13b80b28349c663b8fd02009881ddbc751070a73fd0040ea60564" host="localhost" May 8 00:40:50.861071 containerd[1558]: 2025-05-08 00:40:50.775 [INFO][4998] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 8 00:40:50.861071 containerd[1558]: 2025-05-08 00:40:50.775 [INFO][4998] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="bd6bde2476d13b80b28349c663b8fd02009881ddbc751070a73fd0040ea60564" HandleID="k8s-pod-network.bd6bde2476d13b80b28349c663b8fd02009881ddbc751070a73fd0040ea60564" Workload="localhost-k8s-calico--kube--controllers--579fd86b5c--djj5f-eth0" May 8 00:40:50.862118 containerd[1558]: 2025-05-08 00:40:50.777 [INFO][4971] cni-plugin/k8s.go 386: Populated endpoint ContainerID="bd6bde2476d13b80b28349c663b8fd02009881ddbc751070a73fd0040ea60564" Namespace="calico-system" Pod="calico-kube-controllers-579fd86b5c-djj5f" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--579fd86b5c--djj5f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--579fd86b5c--djj5f-eth0", GenerateName:"calico-kube-controllers-579fd86b5c-", Namespace:"calico-system", SelfLink:"", UID:"b19fe2b2-8765-481d-bbeb-26dd8f95c7c3", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 40, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"579fd86b5c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-579fd86b5c-djj5f", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5d10b4cb8c9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:40:50.862118 containerd[1558]: 2025-05-08 00:40:50.777 [INFO][4971] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="bd6bde2476d13b80b28349c663b8fd02009881ddbc751070a73fd0040ea60564" Namespace="calico-system" Pod="calico-kube-controllers-579fd86b5c-djj5f" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--579fd86b5c--djj5f-eth0" May 8 00:40:50.862118 containerd[1558]: 2025-05-08 00:40:50.777 [INFO][4971] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5d10b4cb8c9 ContainerID="bd6bde2476d13b80b28349c663b8fd02009881ddbc751070a73fd0040ea60564" Namespace="calico-system" Pod="calico-kube-controllers-579fd86b5c-djj5f" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--579fd86b5c--djj5f-eth0" May 8 00:40:50.862118 containerd[1558]: 2025-05-08 00:40:50.779 [INFO][4971] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bd6bde2476d13b80b28349c663b8fd02009881ddbc751070a73fd0040ea60564" Namespace="calico-system" Pod="calico-kube-controllers-579fd86b5c-djj5f" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--579fd86b5c--djj5f-eth0" May 8 00:40:50.862118 containerd[1558]: 2025-05-08 00:40:50.780 [INFO][4971] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to 
endpoint ContainerID="bd6bde2476d13b80b28349c663b8fd02009881ddbc751070a73fd0040ea60564" Namespace="calico-system" Pod="calico-kube-controllers-579fd86b5c-djj5f" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--579fd86b5c--djj5f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--579fd86b5c--djj5f-eth0", GenerateName:"calico-kube-controllers-579fd86b5c-", Namespace:"calico-system", SelfLink:"", UID:"b19fe2b2-8765-481d-bbeb-26dd8f95c7c3", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 40, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"579fd86b5c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bd6bde2476d13b80b28349c663b8fd02009881ddbc751070a73fd0040ea60564", Pod:"calico-kube-controllers-579fd86b5c-djj5f", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5d10b4cb8c9", MAC:"42:14:89:4a:ca:ad", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:40:50.862118 containerd[1558]: 2025-05-08 00:40:50.858 [INFO][4971] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="bd6bde2476d13b80b28349c663b8fd02009881ddbc751070a73fd0040ea60564" Namespace="calico-system" Pod="calico-kube-controllers-579fd86b5c-djj5f" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--579fd86b5c--djj5f-eth0" May 8 00:40:50.985081 containerd[1558]: time="2025-05-08T00:40:50.984937154Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:40:50.985081 containerd[1558]: time="2025-05-08T00:40:50.985041221Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:40:50.985081 containerd[1558]: time="2025-05-08T00:40:50.985057202Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:50.985310 containerd[1558]: time="2025-05-08T00:40:50.985151290Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:51.012591 systemd-resolved[1461]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:40:51.038516 containerd[1558]: time="2025-05-08T00:40:51.038430303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-579fd86b5c-djj5f,Uid:b19fe2b2-8765-481d-bbeb-26dd8f95c7c3,Namespace:calico-system,Attempt:1,} returns sandbox id \"bd6bde2476d13b80b28349c663b8fd02009881ddbc751070a73fd0040ea60564\"" May 8 00:40:51.130591 systemd-networkd[1243]: cali0daa0386fcf: Link UP May 8 00:40:51.131096 systemd-networkd[1243]: cali0daa0386fcf: Gained carrier May 8 00:40:51.171553 containerd[1558]: 2025-05-08 00:40:50.496 [INFO][4982] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--vtqkm-eth0 coredns-7db6d8ff4d- kube-system 19690749-3374-43c7-ba3a-50053c69dd38 926 0 2025-05-08 00:40:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-vtqkm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0daa0386fcf [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="1f8f567df9df4a7b068dbed0a88d824636f1b68517d9e31c371cc961d0eb8de4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vtqkm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--vtqkm-" May 8 00:40:51.171553 containerd[1558]: 2025-05-08 00:40:50.496 [INFO][4982] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1f8f567df9df4a7b068dbed0a88d824636f1b68517d9e31c371cc961d0eb8de4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vtqkm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--vtqkm-eth0" May 8 00:40:51.171553 containerd[1558]: 2025-05-08 00:40:50.533 [INFO][5004] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1f8f567df9df4a7b068dbed0a88d824636f1b68517d9e31c371cc961d0eb8de4" HandleID="k8s-pod-network.1f8f567df9df4a7b068dbed0a88d824636f1b68517d9e31c371cc961d0eb8de4" Workload="localhost-k8s-coredns--7db6d8ff4d--vtqkm-eth0" May 8 00:40:51.171553 containerd[1558]: 2025-05-08 00:40:50.542 [INFO][5004] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1f8f567df9df4a7b068dbed0a88d824636f1b68517d9e31c371cc961d0eb8de4" HandleID="k8s-pod-network.1f8f567df9df4a7b068dbed0a88d824636f1b68517d9e31c371cc961d0eb8de4" Workload="localhost-k8s-coredns--7db6d8ff4d--vtqkm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290b20), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-vtqkm", "timestamp":"2025-05-08 00:40:50.533325561 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 00:40:51.171553 containerd[1558]: 2025-05-08 00:40:50.542 [INFO][5004] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:40:51.171553 containerd[1558]: 2025-05-08 00:40:50.775 [INFO][5004] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 00:40:51.171553 containerd[1558]: 2025-05-08 00:40:50.775 [INFO][5004] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 8 00:40:51.171553 containerd[1558]: 2025-05-08 00:40:50.808 [INFO][5004] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1f8f567df9df4a7b068dbed0a88d824636f1b68517d9e31c371cc961d0eb8de4" host="localhost" May 8 00:40:51.171553 containerd[1558]: 2025-05-08 00:40:50.852 [INFO][5004] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 8 00:40:51.171553 containerd[1558]: 2025-05-08 00:40:50.856 [INFO][5004] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 8 00:40:51.171553 containerd[1558]: 2025-05-08 00:40:50.858 [INFO][5004] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 8 00:40:51.171553 containerd[1558]: 2025-05-08 00:40:50.860 [INFO][5004] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 8 00:40:51.171553 containerd[1558]: 2025-05-08 00:40:50.861 [INFO][5004] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1f8f567df9df4a7b068dbed0a88d824636f1b68517d9e31c371cc961d0eb8de4" host="localhost" May 8 00:40:51.171553 containerd[1558]: 2025-05-08 00:40:50.862 [INFO][5004] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1f8f567df9df4a7b068dbed0a88d824636f1b68517d9e31c371cc961d0eb8de4 May 8 00:40:51.171553 containerd[1558]: 2025-05-08 00:40:51.081 [INFO][5004] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1f8f567df9df4a7b068dbed0a88d824636f1b68517d9e31c371cc961d0eb8de4" host="localhost" May 8 00:40:51.171553 containerd[1558]: 2025-05-08 00:40:51.124 [INFO][5004] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.1f8f567df9df4a7b068dbed0a88d824636f1b68517d9e31c371cc961d0eb8de4" host="localhost" May 8 00:40:51.171553 containerd[1558]: 2025-05-08 00:40:51.124 [INFO][5004] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.1f8f567df9df4a7b068dbed0a88d824636f1b68517d9e31c371cc961d0eb8de4" host="localhost" May 8 00:40:51.171553 containerd[1558]: 2025-05-08 00:40:51.124 [INFO][5004] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 8 00:40:51.171553 containerd[1558]: 2025-05-08 00:40:51.124 [INFO][5004] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="1f8f567df9df4a7b068dbed0a88d824636f1b68517d9e31c371cc961d0eb8de4" HandleID="k8s-pod-network.1f8f567df9df4a7b068dbed0a88d824636f1b68517d9e31c371cc961d0eb8de4" Workload="localhost-k8s-coredns--7db6d8ff4d--vtqkm-eth0" May 8 00:40:51.172277 containerd[1558]: 2025-05-08 00:40:51.127 [INFO][4982] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1f8f567df9df4a7b068dbed0a88d824636f1b68517d9e31c371cc961d0eb8de4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vtqkm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--vtqkm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--vtqkm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"19690749-3374-43c7-ba3a-50053c69dd38", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 40, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-vtqkm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0daa0386fcf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:40:51.172277 containerd[1558]: 2025-05-08 00:40:51.127 [INFO][4982] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="1f8f567df9df4a7b068dbed0a88d824636f1b68517d9e31c371cc961d0eb8de4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vtqkm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--vtqkm-eth0" May 8 00:40:51.172277 containerd[1558]: 2025-05-08 00:40:51.127 [INFO][4982] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0daa0386fcf ContainerID="1f8f567df9df4a7b068dbed0a88d824636f1b68517d9e31c371cc961d0eb8de4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vtqkm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--vtqkm-eth0" May 8 00:40:51.172277 containerd[1558]: 2025-05-08 00:40:51.129 [INFO][4982] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1f8f567df9df4a7b068dbed0a88d824636f1b68517d9e31c371cc961d0eb8de4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vtqkm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--vtqkm-eth0" May 8 00:40:51.172277 containerd[1558]: 2025-05-08 00:40:51.130 [INFO][4982] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1f8f567df9df4a7b068dbed0a88d824636f1b68517d9e31c371cc961d0eb8de4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vtqkm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--vtqkm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--vtqkm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"19690749-3374-43c7-ba3a-50053c69dd38", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 40, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1f8f567df9df4a7b068dbed0a88d824636f1b68517d9e31c371cc961d0eb8de4", Pod:"coredns-7db6d8ff4d-vtqkm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0daa0386fcf", MAC:"e6:40:71:95:e0:d3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:40:51.172277 containerd[1558]: 2025-05-08 00:40:51.168 [INFO][4982] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1f8f567df9df4a7b068dbed0a88d824636f1b68517d9e31c371cc961d0eb8de4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vtqkm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--vtqkm-eth0" May 8 00:40:51.457704 containerd[1558]: time="2025-05-08T00:40:51.457269852Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:40:51.457704 containerd[1558]: time="2025-05-08T00:40:51.457422962Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:40:51.457704 containerd[1558]: time="2025-05-08T00:40:51.457460854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:51.457704 containerd[1558]: time="2025-05-08T00:40:51.457622450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:40:51.523825 systemd-resolved[1461]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:40:51.566655 containerd[1558]: time="2025-05-08T00:40:51.566603547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vtqkm,Uid:19690749-3374-43c7-ba3a-50053c69dd38,Namespace:kube-system,Attempt:1,} returns sandbox id \"1f8f567df9df4a7b068dbed0a88d824636f1b68517d9e31c371cc961d0eb8de4\"" May 8 00:40:51.567360 kubelet[2735]: E0508 00:40:51.567337 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:51.569728 containerd[1558]: time="2025-05-08T00:40:51.569683350Z" level=info msg="CreateContainer within sandbox \"1f8f567df9df4a7b068dbed0a88d824636f1b68517d9e31c371cc961d0eb8de4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 00:40:51.653851 systemd[1]: run-containerd-runc-k8s.io-1f8f567df9df4a7b068dbed0a88d824636f1b68517d9e31c371cc961d0eb8de4-runc.qKXote.mount: Deactivated successfully. May 8 00:40:52.248115 systemd-networkd[1243]: cali0daa0386fcf: Gained IPv6LL May 8 00:40:52.458738 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2747659673.mount: Deactivated successfully. May 8 00:40:52.696090 systemd-networkd[1243]: cali5d10b4cb8c9: Gained IPv6LL May 8 00:40:52.891545 containerd[1558]: time="2025-05-08T00:40:52.891479126Z" level=info msg="CreateContainer within sandbox \"1f8f567df9df4a7b068dbed0a88d824636f1b68517d9e31c371cc961d0eb8de4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ca6495ab88f7630a88ab287e20807b8cb3db4918926324fef670aacf5128f9a7\"" May 8 00:40:52.892297 containerd[1558]: time="2025-05-08T00:40:52.892258762Z" level=info msg="StartContainer for \"ca6495ab88f7630a88ab287e20807b8cb3db4918926324fef670aacf5128f9a7\"" May 8 00:40:52.920043 containerd[1558]: time="2025-05-08T00:40:52.919993253Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:52.922006 containerd[1558]: time="2025-05-08T00:40:52.921700487Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=43021437" May 8 00:40:52.924560 containerd[1558]: time="2025-05-08T00:40:52.924537299Z" level=info msg="ImageCreate event name:\"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:52.939084 containerd[1558]: time="2025-05-08T00:40:52.939008373Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:52.939778 containerd[1558]: time="2025-05-08T00:40:52.939747633Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 3.947631839s" May 8 00:40:52.939867 containerd[1558]: time="2025-05-08T00:40:52.939851800Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns 
image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 8 00:40:52.941066 containerd[1558]: time="2025-05-08T00:40:52.941029821Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 8 00:40:52.944028 containerd[1558]: time="2025-05-08T00:40:52.943998062Z" level=info msg="CreateContainer within sandbox \"96d3e9f0c1e1f4298915c6b85fb6bcaf392a12c4503cd501ade24e8010588388\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 8 00:40:52.951858 containerd[1558]: time="2025-05-08T00:40:52.951725675Z" level=info msg="StartContainer for \"ca6495ab88f7630a88ab287e20807b8cb3db4918926324fef670aacf5128f9a7\" returns successfully" May 8 00:40:52.963082 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2714547951.mount: Deactivated successfully. May 8 00:40:52.987591 containerd[1558]: time="2025-05-08T00:40:52.987530480Z" level=info msg="CreateContainer within sandbox \"96d3e9f0c1e1f4298915c6b85fb6bcaf392a12c4503cd501ade24e8010588388\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3f4b0a24bc2b59f48e74e377deff4675aeac571274a947b863af94226f516ce9\"" May 8 00:40:52.990378 containerd[1558]: time="2025-05-08T00:40:52.990219361Z" level=info msg="StartContainer for \"3f4b0a24bc2b59f48e74e377deff4675aeac571274a947b863af94226f516ce9\"" May 8 00:40:53.192006 containerd[1558]: time="2025-05-08T00:40:53.190972402Z" level=info msg="StartContainer for \"3f4b0a24bc2b59f48e74e377deff4675aeac571274a947b863af94226f516ce9\" returns successfully" May 8 00:40:53.376389 kubelet[2735]: E0508 00:40:53.376359 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:53.542131 kubelet[2735]: I0508 00:40:53.540301 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-56d545bd9b-bs27k" podStartSLOduration=28.591084061 podStartE2EDuration="32.540265695s" podCreationTimestamp="2025-05-08 00:40:21 +0000 UTC" firstStartedPulling="2025-05-08 00:40:48.991654049 +0000 UTC m=+48.857336047" lastFinishedPulling="2025-05-08 00:40:52.940835683 +0000 UTC m=+52.806517681" observedRunningTime="2025-05-08 00:40:53.492582359 +0000 UTC m=+53.358264357" watchObservedRunningTime="2025-05-08 00:40:53.540265695 +0000 UTC m=+53.405947694" May 8 00:40:54.013893 containerd[1558]: time="2025-05-08T00:40:54.013829528Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:40:54.033458 containerd[1558]: time="2025-05-08T00:40:54.033383909Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" May 8 00:40:54.035945 containerd[1558]: time="2025-05-08T00:40:54.035900874Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 1.094835666s" May 8 00:40:54.036044 containerd[1558]: time="2025-05-08T00:40:54.035974294Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 8 
00:40:54.037185 containerd[1558]: time="2025-05-08T00:40:54.037146893Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 8 00:40:54.038461 containerd[1558]: time="2025-05-08T00:40:54.038415325Z" level=info msg="CreateContainer within sandbox \"bdb4250500869f93cde1a6d923d204d4df113d552edbbc60b25328432ee58ddf\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 8 00:40:54.326722 containerd[1558]: time="2025-05-08T00:40:54.326671611Z" level=info msg="CreateContainer within sandbox \"bdb4250500869f93cde1a6d923d204d4df113d552edbbc60b25328432ee58ddf\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"fba011f5a92549b58491e7b9e0aef0f2910ce4a996997e00cba49ba02b14d1d4\"" May 8 00:40:54.327208 containerd[1558]: time="2025-05-08T00:40:54.327127134Z" level=info msg="StartContainer for \"fba011f5a92549b58491e7b9e0aef0f2910ce4a996997e00cba49ba02b14d1d4\"" May 8 00:40:54.379998 kubelet[2735]: I0508 00:40:54.379918 2735 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:40:54.380835 kubelet[2735]: E0508 00:40:54.380805 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:54.473843 containerd[1558]: time="2025-05-08T00:40:54.473782296Z" level=info msg="StartContainer for \"fba011f5a92549b58491e7b9e0aef0f2910ce4a996997e00cba49ba02b14d1d4\" returns successfully" May 8 00:40:55.340200 systemd[1]: Started sshd@14-10.0.0.76:22-10.0.0.1:57970.service - OpenSSH per-connection server daemon (10.0.0.1:57970). May 8 00:40:55.378987 sshd[5263]: Accepted publickey for core from 10.0.0.1 port 57970 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:40:55.380832 sshd[5263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:40:55.384871 kubelet[2735]: E0508 00:40:55.384842 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:40:55.386805 systemd-logind[1534]: New session 15 of user core. May 8 00:40:55.392235 systemd[1]: Started session-15.scope - Session 15 of User core. 
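One readability note on the WorkloadEndpoint dumps earlier in this log: the Ports fields print in Go hex notation, so the coredns endpoint's Port:0x35 entries are DNS on port 53 (UDP and TCP) and Port:0x23c1 is the coredns metrics port 9153. A two-line check:

```go
package main

import "fmt"

func main() {
	// Decode the hex port values from the WorkloadEndpointPort dumps above.
	fmt.Println(0x35, 0x23c1) // 53 9153
}
```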
May 8 00:40:55.406099 kubelet[2735]: I0508 00:40:55.405741 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-vtqkm" podStartSLOduration=42.405722397 podStartE2EDuration="42.405722397s" podCreationTimestamp="2025-05-08 00:40:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:40:53.542747084 +0000 UTC m=+53.408429102" watchObservedRunningTime="2025-05-08 00:40:55.405722397 +0000 UTC m=+55.271404395"
May 8 00:40:55.512056 containerd[1558]: time="2025-05-08T00:40:55.511993690Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:40:55.513034 containerd[1558]: time="2025-05-08T00:40:55.512976029Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898"
May 8 00:40:55.514479 containerd[1558]: time="2025-05-08T00:40:55.514430011Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:40:55.517155 containerd[1558]: time="2025-05-08T00:40:55.517122719Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:40:55.517911 containerd[1558]: time="2025-05-08T00:40:55.517867669Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 1.480686461s"
May 8 00:40:55.517911 containerd[1558]: time="2025-05-08T00:40:55.517896974Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\""
May 8 00:40:55.519575 containerd[1558]: time="2025-05-08T00:40:55.519378970Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\""
May 8 00:40:55.520024 containerd[1558]: time="2025-05-08T00:40:55.519997261Z" level=info msg="CreateContainer within sandbox \"75981d12055d88bcfe390e3191cd7371654453293ecd1829cb51ad2b3773a7e7\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
May 8 00:40:55.527064 sshd[5263]: pam_unix(sshd:session): session closed for user core
May 8 00:40:55.530534 systemd[1]: sshd@14-10.0.0.76:22-10.0.0.1:57970.service: Deactivated successfully.
May 8 00:40:55.536429 systemd[1]: session-15.scope: Deactivated successfully.
May 8 00:40:55.538263 systemd-logind[1534]: Session 15 logged out. Waiting for processes to exit.
May 8 00:40:55.539139 systemd-logind[1534]: Removed session 15.
May 8 00:40:55.543641 containerd[1558]: time="2025-05-08T00:40:55.543596745Z" level=info msg="CreateContainer within sandbox \"75981d12055d88bcfe390e3191cd7371654453293ecd1829cb51ad2b3773a7e7\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"a34b0e8849f60f7243a5abf64f2f1ce8b7530d3e9f2d54d7249862144ca3b84f\""
May 8 00:40:55.544212 containerd[1558]: time="2025-05-08T00:40:55.544112151Z" level=info msg="StartContainer for \"a34b0e8849f60f7243a5abf64f2f1ce8b7530d3e9f2d54d7249862144ca3b84f\""
May 8 00:40:55.614177 containerd[1558]: time="2025-05-08T00:40:55.614039294Z" level=info msg="StartContainer for \"a34b0e8849f60f7243a5abf64f2f1ce8b7530d3e9f2d54d7249862144ca3b84f\" returns successfully"
May 8 00:40:56.517975 kubelet[2735]: I0508 00:40:56.517863 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-56d545bd9b-pw449" podStartSLOduration=30.610159477 podStartE2EDuration="35.517839277s" podCreationTimestamp="2025-05-08 00:40:21 +0000 UTC" firstStartedPulling="2025-05-08 00:40:49.129045135 +0000 UTC m=+48.994727133" lastFinishedPulling="2025-05-08 00:40:54.036724935 +0000 UTC m=+53.902406933" observedRunningTime="2025-05-08 00:40:55.406106354 +0000 UTC m=+55.271788352" watchObservedRunningTime="2025-05-08 00:40:56.517839277 +0000 UTC m=+56.383521275"
May 8 00:40:57.636215 containerd[1558]: time="2025-05-08T00:40:57.636155691Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:40:57.637186 containerd[1558]: time="2025-05-08T00:40:57.637151496Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=34789138"
May 8 00:40:57.639001 containerd[1558]: time="2025-05-08T00:40:57.638945582Z" level=info msg="ImageCreate event name:\"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:40:57.641613 containerd[1558]: time="2025-05-08T00:40:57.641551474Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:40:57.642245 containerd[1558]: time="2025-05-08T00:40:57.642211253Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"36281728\" in 2.122801865s"
May 8 00:40:57.642245 containerd[1558]: time="2025-05-08T00:40:57.642241469Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\""
May 8 00:40:57.643199 containerd[1558]: time="2025-05-08T00:40:57.643167733Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\""
May 8 00:40:57.653857 containerd[1558]: time="2025-05-08T00:40:57.653639010Z" level=info msg="CreateContainer within sandbox \"bd6bde2476d13b80b28349c663b8fd02009881ddbc751070a73fd0040ea60564\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
May 8 00:40:57.670278 containerd[1558]: time="2025-05-08T00:40:57.670233108Z" level=info msg="CreateContainer within sandbox \"bd6bde2476d13b80b28349c663b8fd02009881ddbc751070a73fd0040ea60564\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a1c48f62c9349298e1b217086b5ed438de23f21be63b0bebe7cbb0430e0c4353\""
May 8 00:40:57.670753 containerd[1558]: time="2025-05-08T00:40:57.670709339Z" level=info msg="StartContainer for \"a1c48f62c9349298e1b217086b5ed438de23f21be63b0bebe7cbb0430e0c4353\""
May 8 00:40:57.744150 containerd[1558]: time="2025-05-08T00:40:57.744107554Z" level=info msg="StartContainer for \"a1c48f62c9349298e1b217086b5ed438de23f21be63b0bebe7cbb0430e0c4353\" returns successfully"
May 8 00:40:58.455106 kubelet[2735]: I0508 00:40:58.455042 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-579fd86b5c-djj5f" podStartSLOduration=30.85171119 podStartE2EDuration="37.455024211s" podCreationTimestamp="2025-05-08 00:40:21 +0000 UTC" firstStartedPulling="2025-05-08 00:40:51.039665263 +0000 UTC m=+50.905347261" lastFinishedPulling="2025-05-08 00:40:57.642978294 +0000 UTC m=+57.508660282" observedRunningTime="2025-05-08 00:40:58.413080414 +0000 UTC m=+58.278762412" watchObservedRunningTime="2025-05-08 00:40:58.455024211 +0000 UTC m=+58.320706209"
May 8 00:40:59.096705 containerd[1558]: time="2025-05-08T00:40:59.096640409Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:40:59.097645 containerd[1558]: time="2025-05-08T00:40:59.097596048Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773"
May 8 00:40:59.098853 containerd[1558]: time="2025-05-08T00:40:59.098810716Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:40:59.101834 containerd[1558]: time="2025-05-08T00:40:59.101778351Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:40:59.102439 containerd[1558]: time="2025-05-08T00:40:59.102407172Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 1.459191207s"
May 8 00:40:59.102481 containerd[1558]: time="2025-05-08T00:40:59.102438380Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\""
May 8 00:40:59.104791 containerd[1558]: time="2025-05-08T00:40:59.104753692Z" level=info msg="CreateContainer within sandbox \"75981d12055d88bcfe390e3191cd7371654453293ecd1829cb51ad2b3773a7e7\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
May 8 00:40:59.119850 containerd[1558]: time="2025-05-08T00:40:59.119794603Z" level=info msg="CreateContainer within sandbox \"75981d12055d88bcfe390e3191cd7371654453293ecd1829cb51ad2b3773a7e7\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"09b07525874e4f24e17ee96185c03d5dde2c6ab8baf7ca7d7d923bef6a4293e3\""
May 8 00:40:59.120392 containerd[1558]: time="2025-05-08T00:40:59.120346598Z" level=info msg="StartContainer for \"09b07525874e4f24e17ee96185c03d5dde2c6ab8baf7ca7d7d923bef6a4293e3\""
May 8 00:40:59.221622 containerd[1558]: time="2025-05-08T00:40:59.221575061Z" level=info msg="StartContainer for \"09b07525874e4f24e17ee96185c03d5dde2c6ab8baf7ca7d7d923bef6a4293e3\" returns successfully"
May 8 00:40:59.290116 kubelet[2735]: I0508 00:40:59.290067 2735 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
May 8 00:40:59.290116 kubelet[2735]: I0508 00:40:59.290099 2735 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
May 8 00:40:59.410576 kubelet[2735]: I0508 00:40:59.410396 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-n25d4" podStartSLOduration=28.524369359 podStartE2EDuration="38.41037866s" podCreationTimestamp="2025-05-08 00:40:21 +0000 UTC" firstStartedPulling="2025-05-08 00:40:49.217234033 +0000 UTC m=+49.082916031" lastFinishedPulling="2025-05-08 00:40:59.103243334 +0000 UTC m=+58.968925332" observedRunningTime="2025-05-08 00:40:59.409729141 +0000 UTC m=+59.275411169" watchObservedRunningTime="2025-05-08 00:40:59.41037866 +0000 UTC m=+59.276060678"
May 8 00:41:00.202907 containerd[1558]: time="2025-05-08T00:41:00.202858964Z" level=info msg="StopPodSandbox for \"8ec6badfa96d260bd8ab020b402f67f800ee2c4e3d5b3f17678595d08f947f67\""
May 8 00:41:00.281337 containerd[1558]: 2025-05-08 00:41:00.239 [WARNING][5438] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8ec6badfa96d260bd8ab020b402f67f800ee2c4e3d5b3f17678595d08f947f67" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--56d545bd9b--bs27k-eth0", GenerateName:"calico-apiserver-56d545bd9b-", Namespace:"calico-apiserver", SelfLink:"", UID:"341e9812-ca08-4b8f-b246-09289190e736", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 40, 21, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56d545bd9b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"96d3e9f0c1e1f4298915c6b85fb6bcaf392a12c4503cd501ade24e8010588388", Pod:"calico-apiserver-56d545bd9b-bs27k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9ded22b19e7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
May 8 00:41:00.281337 containerd[1558]: 2025-05-08 00:41:00.239 [INFO][5438] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8ec6badfa96d260bd8ab020b402f67f800ee2c4e3d5b3f17678595d08f947f67"
May 8 00:41:00.281337 containerd[1558]: 2025-05-08 00:41:00.239 [INFO][5438] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8ec6badfa96d260bd8ab020b402f67f800ee2c4e3d5b3f17678595d08f947f67" iface="eth0" netns=""
May 8 00:41:00.281337 containerd[1558]: 2025-05-08 00:41:00.239 [INFO][5438] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8ec6badfa96d260bd8ab020b402f67f800ee2c4e3d5b3f17678595d08f947f67"
May 8 00:41:00.281337 containerd[1558]: 2025-05-08 00:41:00.239 [INFO][5438] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8ec6badfa96d260bd8ab020b402f67f800ee2c4e3d5b3f17678595d08f947f67"
May 8 00:41:00.281337 containerd[1558]: 2025-05-08 00:41:00.268 [INFO][5449] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8ec6badfa96d260bd8ab020b402f67f800ee2c4e3d5b3f17678595d08f947f67" HandleID="k8s-pod-network.8ec6badfa96d260bd8ab020b402f67f800ee2c4e3d5b3f17678595d08f947f67" Workload="localhost-k8s-calico--apiserver--56d545bd9b--bs27k-eth0"
May 8 00:41:00.281337 containerd[1558]: 2025-05-08 00:41:00.268 [INFO][5449] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 8 00:41:00.281337 containerd[1558]: 2025-05-08 00:41:00.268 [INFO][5449] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 8 00:41:00.281337 containerd[1558]: 2025-05-08 00:41:00.274 [WARNING][5449] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8ec6badfa96d260bd8ab020b402f67f800ee2c4e3d5b3f17678595d08f947f67" HandleID="k8s-pod-network.8ec6badfa96d260bd8ab020b402f67f800ee2c4e3d5b3f17678595d08f947f67" Workload="localhost-k8s-calico--apiserver--56d545bd9b--bs27k-eth0"
May 8 00:41:00.281337 containerd[1558]: 2025-05-08 00:41:00.274 [INFO][5449] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8ec6badfa96d260bd8ab020b402f67f800ee2c4e3d5b3f17678595d08f947f67" HandleID="k8s-pod-network.8ec6badfa96d260bd8ab020b402f67f800ee2c4e3d5b3f17678595d08f947f67" Workload="localhost-k8s-calico--apiserver--56d545bd9b--bs27k-eth0"
May 8 00:41:00.281337 containerd[1558]: 2025-05-08 00:41:00.275 [INFO][5449] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 8 00:41:00.281337 containerd[1558]: 2025-05-08 00:41:00.278 [INFO][5438] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8ec6badfa96d260bd8ab020b402f67f800ee2c4e3d5b3f17678595d08f947f67"
May 8 00:41:00.282015 containerd[1558]: time="2025-05-08T00:41:00.281380346Z" level=info msg="TearDown network for sandbox \"8ec6badfa96d260bd8ab020b402f67f800ee2c4e3d5b3f17678595d08f947f67\" successfully"
May 8 00:41:00.282015 containerd[1558]: time="2025-05-08T00:41:00.281406405Z" level=info msg="StopPodSandbox for \"8ec6badfa96d260bd8ab020b402f67f800ee2c4e3d5b3f17678595d08f947f67\" returns successfully"
May 8 00:41:00.288905 containerd[1558]: time="2025-05-08T00:41:00.288859577Z" level=info msg="RemovePodSandbox for \"8ec6badfa96d260bd8ab020b402f67f800ee2c4e3d5b3f17678595d08f947f67\""
May 8 00:41:00.291367 containerd[1558]: time="2025-05-08T00:41:00.291330101Z" level=info msg="Forcibly stopping sandbox \"8ec6badfa96d260bd8ab020b402f67f800ee2c4e3d5b3f17678595d08f947f67\""
May 8 00:41:00.356396 containerd[1558]: 2025-05-08 00:41:00.325 [WARNING][5471] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8ec6badfa96d260bd8ab020b402f67f800ee2c4e3d5b3f17678595d08f947f67" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--56d545bd9b--bs27k-eth0", GenerateName:"calico-apiserver-56d545bd9b-", Namespace:"calico-apiserver", SelfLink:"", UID:"341e9812-ca08-4b8f-b246-09289190e736", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 40, 21, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56d545bd9b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"96d3e9f0c1e1f4298915c6b85fb6bcaf392a12c4503cd501ade24e8010588388", Pod:"calico-apiserver-56d545bd9b-bs27k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9ded22b19e7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
May 8 00:41:00.356396 containerd[1558]: 2025-05-08 00:41:00.325 [INFO][5471] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8ec6badfa96d260bd8ab020b402f67f800ee2c4e3d5b3f17678595d08f947f67"
May 8 00:41:00.356396 containerd[1558]: 2025-05-08 00:41:00.325 [INFO][5471] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8ec6badfa96d260bd8ab020b402f67f800ee2c4e3d5b3f17678595d08f947f67" iface="eth0" netns=""
May 8 00:41:00.356396 containerd[1558]: 2025-05-08 00:41:00.325 [INFO][5471] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8ec6badfa96d260bd8ab020b402f67f800ee2c4e3d5b3f17678595d08f947f67"
May 8 00:41:00.356396 containerd[1558]: 2025-05-08 00:41:00.325 [INFO][5471] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8ec6badfa96d260bd8ab020b402f67f800ee2c4e3d5b3f17678595d08f947f67"
May 8 00:41:00.356396 containerd[1558]: 2025-05-08 00:41:00.345 [INFO][5479] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8ec6badfa96d260bd8ab020b402f67f800ee2c4e3d5b3f17678595d08f947f67" HandleID="k8s-pod-network.8ec6badfa96d260bd8ab020b402f67f800ee2c4e3d5b3f17678595d08f947f67" Workload="localhost-k8s-calico--apiserver--56d545bd9b--bs27k-eth0"
May 8 00:41:00.356396 containerd[1558]: 2025-05-08 00:41:00.346 [INFO][5479] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 8 00:41:00.356396 containerd[1558]: 2025-05-08 00:41:00.346 [INFO][5479] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 8 00:41:00.356396 containerd[1558]: 2025-05-08 00:41:00.350 [WARNING][5479] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8ec6badfa96d260bd8ab020b402f67f800ee2c4e3d5b3f17678595d08f947f67" HandleID="k8s-pod-network.8ec6badfa96d260bd8ab020b402f67f800ee2c4e3d5b3f17678595d08f947f67" Workload="localhost-k8s-calico--apiserver--56d545bd9b--bs27k-eth0"
May 8 00:41:00.356396 containerd[1558]: 2025-05-08 00:41:00.350 [INFO][5479] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8ec6badfa96d260bd8ab020b402f67f800ee2c4e3d5b3f17678595d08f947f67" HandleID="k8s-pod-network.8ec6badfa96d260bd8ab020b402f67f800ee2c4e3d5b3f17678595d08f947f67" Workload="localhost-k8s-calico--apiserver--56d545bd9b--bs27k-eth0"
May 8 00:41:00.356396 containerd[1558]: 2025-05-08 00:41:00.351 [INFO][5479] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 8 00:41:00.356396 containerd[1558]: 2025-05-08 00:41:00.354 [INFO][5471] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8ec6badfa96d260bd8ab020b402f67f800ee2c4e3d5b3f17678595d08f947f67"
May 8 00:41:00.356867 containerd[1558]: time="2025-05-08T00:41:00.356437323Z" level=info msg="TearDown network for sandbox \"8ec6badfa96d260bd8ab020b402f67f800ee2c4e3d5b3f17678595d08f947f67\" successfully"
May 8 00:41:00.541197 systemd[1]: Started sshd@15-10.0.0.76:22-10.0.0.1:51322.service - OpenSSH per-connection server daemon (10.0.0.1:51322).
May 8 00:41:00.665343 containerd[1558]: time="2025-05-08T00:41:00.665274020Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8ec6badfa96d260bd8ab020b402f67f800ee2c4e3d5b3f17678595d08f947f67\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 8 00:41:00.665343 containerd[1558]: time="2025-05-08T00:41:00.665352879Z" level=info msg="RemovePodSandbox \"8ec6badfa96d260bd8ab020b402f67f800ee2c4e3d5b3f17678595d08f947f67\" returns successfully"
May 8 00:41:00.666036 containerd[1558]: time="2025-05-08T00:41:00.665987610Z" level=info msg="StopPodSandbox for \"b6921fae4b61d6c2187b5d8e0d354a10e3a999928c9eb266af10e4e8b643cf7e\""
May 8 00:41:00.712638 sshd[5488]: Accepted publickey for core from 10.0.0.1 port 51322 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs
May 8 00:41:00.714995 sshd[5488]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:41:00.723928 systemd-logind[1534]: New session 16 of user core.
May 8 00:41:00.731233 systemd[1]: Started session-16.scope - Session 16 of User core.
May 8 00:41:00.762246 containerd[1558]: 2025-05-08 00:41:00.715 [WARNING][5504] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b6921fae4b61d6c2187b5d8e0d354a10e3a999928c9eb266af10e4e8b643cf7e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--579fd86b5c--djj5f-eth0", GenerateName:"calico-kube-controllers-579fd86b5c-", Namespace:"calico-system", SelfLink:"", UID:"b19fe2b2-8765-481d-bbeb-26dd8f95c7c3", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 40, 21, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"579fd86b5c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bd6bde2476d13b80b28349c663b8fd02009881ddbc751070a73fd0040ea60564", Pod:"calico-kube-controllers-579fd86b5c-djj5f", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5d10b4cb8c9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
May 8 00:41:00.762246 containerd[1558]: 2025-05-08 00:41:00.716 [INFO][5504] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b6921fae4b61d6c2187b5d8e0d354a10e3a999928c9eb266af10e4e8b643cf7e"
May 8 00:41:00.762246 containerd[1558]: 2025-05-08 00:41:00.716 [INFO][5504] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b6921fae4b61d6c2187b5d8e0d354a10e3a999928c9eb266af10e4e8b643cf7e" iface="eth0" netns=""
May 8 00:41:00.762246 containerd[1558]: 2025-05-08 00:41:00.716 [INFO][5504] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b6921fae4b61d6c2187b5d8e0d354a10e3a999928c9eb266af10e4e8b643cf7e"
May 8 00:41:00.762246 containerd[1558]: 2025-05-08 00:41:00.716 [INFO][5504] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b6921fae4b61d6c2187b5d8e0d354a10e3a999928c9eb266af10e4e8b643cf7e"
May 8 00:41:00.762246 containerd[1558]: 2025-05-08 00:41:00.747 [INFO][5513] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b6921fae4b61d6c2187b5d8e0d354a10e3a999928c9eb266af10e4e8b643cf7e" HandleID="k8s-pod-network.b6921fae4b61d6c2187b5d8e0d354a10e3a999928c9eb266af10e4e8b643cf7e" Workload="localhost-k8s-calico--kube--controllers--579fd86b5c--djj5f-eth0"
May 8 00:41:00.762246 containerd[1558]: 2025-05-08 00:41:00.747 [INFO][5513] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 8 00:41:00.762246 containerd[1558]: 2025-05-08 00:41:00.747 [INFO][5513] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 8 00:41:00.762246 containerd[1558]: 2025-05-08 00:41:00.754 [WARNING][5513] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b6921fae4b61d6c2187b5d8e0d354a10e3a999928c9eb266af10e4e8b643cf7e" HandleID="k8s-pod-network.b6921fae4b61d6c2187b5d8e0d354a10e3a999928c9eb266af10e4e8b643cf7e" Workload="localhost-k8s-calico--kube--controllers--579fd86b5c--djj5f-eth0"
May 8 00:41:00.762246 containerd[1558]: 2025-05-08 00:41:00.754 [INFO][5513] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b6921fae4b61d6c2187b5d8e0d354a10e3a999928c9eb266af10e4e8b643cf7e" HandleID="k8s-pod-network.b6921fae4b61d6c2187b5d8e0d354a10e3a999928c9eb266af10e4e8b643cf7e" Workload="localhost-k8s-calico--kube--controllers--579fd86b5c--djj5f-eth0"
May 8 00:41:00.762246 containerd[1558]: 2025-05-08 00:41:00.755 [INFO][5513] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 8 00:41:00.762246 containerd[1558]: 2025-05-08 00:41:00.759 [INFO][5504] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b6921fae4b61d6c2187b5d8e0d354a10e3a999928c9eb266af10e4e8b643cf7e"
May 8 00:41:00.762709 containerd[1558]: time="2025-05-08T00:41:00.762306577Z" level=info msg="TearDown network for sandbox \"b6921fae4b61d6c2187b5d8e0d354a10e3a999928c9eb266af10e4e8b643cf7e\" successfully"
May 8 00:41:00.762709 containerd[1558]: time="2025-05-08T00:41:00.762341063Z" level=info msg="StopPodSandbox for \"b6921fae4b61d6c2187b5d8e0d354a10e3a999928c9eb266af10e4e8b643cf7e\" returns successfully"
May 8 00:41:00.762871 containerd[1558]: time="2025-05-08T00:41:00.762840738Z" level=info msg="RemovePodSandbox for \"b6921fae4b61d6c2187b5d8e0d354a10e3a999928c9eb266af10e4e8b643cf7e\""
May 8 00:41:00.762908 containerd[1558]: time="2025-05-08T00:41:00.762879071Z" level=info msg="Forcibly stopping sandbox \"b6921fae4b61d6c2187b5d8e0d354a10e3a999928c9eb266af10e4e8b643cf7e\""
May 8 00:41:00.874067 containerd[1558]: 2025-05-08 00:41:00.822 [WARNING][5538] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b6921fae4b61d6c2187b5d8e0d354a10e3a999928c9eb266af10e4e8b643cf7e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--579fd86b5c--djj5f-eth0", GenerateName:"calico-kube-controllers-579fd86b5c-", Namespace:"calico-system", SelfLink:"", UID:"b19fe2b2-8765-481d-bbeb-26dd8f95c7c3", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 40, 21, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"579fd86b5c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bd6bde2476d13b80b28349c663b8fd02009881ddbc751070a73fd0040ea60564", Pod:"calico-kube-controllers-579fd86b5c-djj5f", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5d10b4cb8c9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
May 8 00:41:00.874067 containerd[1558]: 2025-05-08 00:41:00.823 [INFO][5538] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b6921fae4b61d6c2187b5d8e0d354a10e3a999928c9eb266af10e4e8b643cf7e"
May 8 00:41:00.874067 containerd[1558]: 2025-05-08 00:41:00.823 [INFO][5538] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b6921fae4b61d6c2187b5d8e0d354a10e3a999928c9eb266af10e4e8b643cf7e" iface="eth0" netns=""
May 8 00:41:00.874067 containerd[1558]: 2025-05-08 00:41:00.823 [INFO][5538] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b6921fae4b61d6c2187b5d8e0d354a10e3a999928c9eb266af10e4e8b643cf7e"
May 8 00:41:00.874067 containerd[1558]: 2025-05-08 00:41:00.823 [INFO][5538] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b6921fae4b61d6c2187b5d8e0d354a10e3a999928c9eb266af10e4e8b643cf7e"
May 8 00:41:00.874067 containerd[1558]: 2025-05-08 00:41:00.858 [INFO][5552] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b6921fae4b61d6c2187b5d8e0d354a10e3a999928c9eb266af10e4e8b643cf7e" HandleID="k8s-pod-network.b6921fae4b61d6c2187b5d8e0d354a10e3a999928c9eb266af10e4e8b643cf7e" Workload="localhost-k8s-calico--kube--controllers--579fd86b5c--djj5f-eth0"
May 8 00:41:00.874067 containerd[1558]: 2025-05-08 00:41:00.858 [INFO][5552] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 8 00:41:00.874067 containerd[1558]: 2025-05-08 00:41:00.859 [INFO][5552] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 8 00:41:00.874067 containerd[1558]: 2025-05-08 00:41:00.863 [WARNING][5552] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b6921fae4b61d6c2187b5d8e0d354a10e3a999928c9eb266af10e4e8b643cf7e" HandleID="k8s-pod-network.b6921fae4b61d6c2187b5d8e0d354a10e3a999928c9eb266af10e4e8b643cf7e" Workload="localhost-k8s-calico--kube--controllers--579fd86b5c--djj5f-eth0"
May 8 00:41:00.874067 containerd[1558]: 2025-05-08 00:41:00.863 [INFO][5552] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b6921fae4b61d6c2187b5d8e0d354a10e3a999928c9eb266af10e4e8b643cf7e" HandleID="k8s-pod-network.b6921fae4b61d6c2187b5d8e0d354a10e3a999928c9eb266af10e4e8b643cf7e" Workload="localhost-k8s-calico--kube--controllers--579fd86b5c--djj5f-eth0"
May 8 00:41:00.874067 containerd[1558]: 2025-05-08 00:41:00.867 [INFO][5552] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 8 00:41:00.874067 containerd[1558]: 2025-05-08 00:41:00.870 [INFO][5538] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b6921fae4b61d6c2187b5d8e0d354a10e3a999928c9eb266af10e4e8b643cf7e"
May 8 00:41:00.874519 containerd[1558]: time="2025-05-08T00:41:00.874142545Z" level=info msg="TearDown network for sandbox \"b6921fae4b61d6c2187b5d8e0d354a10e3a999928c9eb266af10e4e8b643cf7e\" successfully"
May 8 00:41:00.879369 containerd[1558]: time="2025-05-08T00:41:00.879325652Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b6921fae4b61d6c2187b5d8e0d354a10e3a999928c9eb266af10e4e8b643cf7e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 8 00:41:00.879523 containerd[1558]: time="2025-05-08T00:41:00.879475927Z" level=info msg="RemovePodSandbox \"b6921fae4b61d6c2187b5d8e0d354a10e3a999928c9eb266af10e4e8b643cf7e\" returns successfully"
May 8 00:41:00.880056 containerd[1558]: time="2025-05-08T00:41:00.880036086Z" level=info msg="StopPodSandbox for \"c6cc7d2016316c97ad165a455ea29274388000545cdc06a25b41efcf7b7b3f29\""
May 8 00:41:00.919197 sshd[5488]: pam_unix(sshd:session): session closed for user core
May 8 00:41:00.925469 systemd-logind[1534]: Session 16 logged out. Waiting for processes to exit.
May 8 00:41:00.927748 systemd[1]: sshd@15-10.0.0.76:22-10.0.0.1:51322.service: Deactivated successfully.
May 8 00:41:00.930894 systemd[1]: session-16.scope: Deactivated successfully.
May 8 00:41:00.934506 systemd-logind[1534]: Removed session 16.
May 8 00:41:00.963858 containerd[1558]: 2025-05-08 00:41:00.930 [WARNING][5576] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c6cc7d2016316c97ad165a455ea29274388000545cdc06a25b41efcf7b7b3f29" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--n25d4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"401667bd-ccb8-4edb-be5e-e0e65fa30964", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 40, 21, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"75981d12055d88bcfe390e3191cd7371654453293ecd1829cb51ad2b3773a7e7", Pod:"csi-node-driver-n25d4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8c453921ee6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
May 8 00:41:00.963858 containerd[1558]: 2025-05-08 00:41:00.930 [INFO][5576] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c6cc7d2016316c97ad165a455ea29274388000545cdc06a25b41efcf7b7b3f29"
May 8 00:41:00.963858 containerd[1558]: 2025-05-08 00:41:00.930 [INFO][5576] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c6cc7d2016316c97ad165a455ea29274388000545cdc06a25b41efcf7b7b3f29" iface="eth0" netns=""
May 8 00:41:00.963858 containerd[1558]: 2025-05-08 00:41:00.930 [INFO][5576] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c6cc7d2016316c97ad165a455ea29274388000545cdc06a25b41efcf7b7b3f29"
May 8 00:41:00.963858 containerd[1558]: 2025-05-08 00:41:00.930 [INFO][5576] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c6cc7d2016316c97ad165a455ea29274388000545cdc06a25b41efcf7b7b3f29"
May 8 00:41:00.963858 containerd[1558]: 2025-05-08 00:41:00.952 [INFO][5588] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c6cc7d2016316c97ad165a455ea29274388000545cdc06a25b41efcf7b7b3f29" HandleID="k8s-pod-network.c6cc7d2016316c97ad165a455ea29274388000545cdc06a25b41efcf7b7b3f29" Workload="localhost-k8s-csi--node--driver--n25d4-eth0"
May 8 00:41:00.963858 containerd[1558]: 2025-05-08 00:41:00.952 [INFO][5588] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 8 00:41:00.963858 containerd[1558]: 2025-05-08 00:41:00.952 [INFO][5588] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 8 00:41:00.963858 containerd[1558]: 2025-05-08 00:41:00.957 [WARNING][5588] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c6cc7d2016316c97ad165a455ea29274388000545cdc06a25b41efcf7b7b3f29" HandleID="k8s-pod-network.c6cc7d2016316c97ad165a455ea29274388000545cdc06a25b41efcf7b7b3f29" Workload="localhost-k8s-csi--node--driver--n25d4-eth0"
May 8 00:41:00.963858 containerd[1558]: 2025-05-08 00:41:00.957 [INFO][5588] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c6cc7d2016316c97ad165a455ea29274388000545cdc06a25b41efcf7b7b3f29" HandleID="k8s-pod-network.c6cc7d2016316c97ad165a455ea29274388000545cdc06a25b41efcf7b7b3f29" Workload="localhost-k8s-csi--node--driver--n25d4-eth0"
May 8 00:41:00.963858 containerd[1558]: 2025-05-08 00:41:00.958 [INFO][5588] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 8 00:41:00.963858 containerd[1558]: 2025-05-08 00:41:00.960 [INFO][5576] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c6cc7d2016316c97ad165a455ea29274388000545cdc06a25b41efcf7b7b3f29"
May 8 00:41:00.964342 containerd[1558]: time="2025-05-08T00:41:00.963929142Z" level=info msg="TearDown network for sandbox \"c6cc7d2016316c97ad165a455ea29274388000545cdc06a25b41efcf7b7b3f29\" successfully"
May 8 00:41:00.964342 containerd[1558]: time="2025-05-08T00:41:00.963965281Z" level=info msg="StopPodSandbox for \"c6cc7d2016316c97ad165a455ea29274388000545cdc06a25b41efcf7b7b3f29\" returns successfully"
May 8 00:41:00.964564 containerd[1558]: time="2025-05-08T00:41:00.964517987Z" level=info msg="RemovePodSandbox for \"c6cc7d2016316c97ad165a455ea29274388000545cdc06a25b41efcf7b7b3f29\""
May 8 00:41:00.964564 containerd[1558]: time="2025-05-08T00:41:00.964563974Z" level=info msg="Forcibly stopping sandbox \"c6cc7d2016316c97ad165a455ea29274388000545cdc06a25b41efcf7b7b3f29\""
May 8 00:41:01.036234 containerd[1558]: 2025-05-08 00:41:01.000 [WARNING][5611] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c6cc7d2016316c97ad165a455ea29274388000545cdc06a25b41efcf7b7b3f29" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--n25d4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"401667bd-ccb8-4edb-be5e-e0e65fa30964", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 40, 21, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"75981d12055d88bcfe390e3191cd7371654453293ecd1829cb51ad2b3773a7e7", Pod:"csi-node-driver-n25d4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8c453921ee6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
May 8 00:41:01.036234 containerd[1558]: 2025-05-08 00:41:01.000 [INFO][5611] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c6cc7d2016316c97ad165a455ea29274388000545cdc06a25b41efcf7b7b3f29"
May 8 00:41:01.036234 containerd[1558]: 2025-05-08 00:41:01.000 [INFO][5611] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c6cc7d2016316c97ad165a455ea29274388000545cdc06a25b41efcf7b7b3f29" iface="eth0" netns=""
May 8 00:41:01.036234 containerd[1558]: 2025-05-08 00:41:01.000 [INFO][5611] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c6cc7d2016316c97ad165a455ea29274388000545cdc06a25b41efcf7b7b3f29"
May 8 00:41:01.036234 containerd[1558]: 2025-05-08 00:41:01.000 [INFO][5611] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c6cc7d2016316c97ad165a455ea29274388000545cdc06a25b41efcf7b7b3f29"
May 8 00:41:01.036234 containerd[1558]: 2025-05-08 00:41:01.024 [INFO][5619] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c6cc7d2016316c97ad165a455ea29274388000545cdc06a25b41efcf7b7b3f29" HandleID="k8s-pod-network.c6cc7d2016316c97ad165a455ea29274388000545cdc06a25b41efcf7b7b3f29" Workload="localhost-k8s-csi--node--driver--n25d4-eth0"
May 8 00:41:01.036234 containerd[1558]: 2025-05-08 00:41:01.024 [INFO][5619] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 8 00:41:01.036234 containerd[1558]: 2025-05-08 00:41:01.024 [INFO][5619] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 8 00:41:01.036234 containerd[1558]: 2025-05-08 00:41:01.029 [WARNING][5619] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c6cc7d2016316c97ad165a455ea29274388000545cdc06a25b41efcf7b7b3f29" HandleID="k8s-pod-network.c6cc7d2016316c97ad165a455ea29274388000545cdc06a25b41efcf7b7b3f29" Workload="localhost-k8s-csi--node--driver--n25d4-eth0"
May 8 00:41:01.036234 containerd[1558]: 2025-05-08 00:41:01.029 [INFO][5619] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c6cc7d2016316c97ad165a455ea29274388000545cdc06a25b41efcf7b7b3f29" HandleID="k8s-pod-network.c6cc7d2016316c97ad165a455ea29274388000545cdc06a25b41efcf7b7b3f29" Workload="localhost-k8s-csi--node--driver--n25d4-eth0"
May 8 00:41:01.036234 containerd[1558]: 2025-05-08 00:41:01.030 [INFO][5619] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 8 00:41:01.036234 containerd[1558]: 2025-05-08 00:41:01.033 [INFO][5611] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c6cc7d2016316c97ad165a455ea29274388000545cdc06a25b41efcf7b7b3f29"
May 8 00:41:01.036755 containerd[1558]: time="2025-05-08T00:41:01.036275706Z" level=info msg="TearDown network for sandbox \"c6cc7d2016316c97ad165a455ea29274388000545cdc06a25b41efcf7b7b3f29\" successfully"
May 8 00:41:01.040930 containerd[1558]: time="2025-05-08T00:41:01.040872674Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c6cc7d2016316c97ad165a455ea29274388000545cdc06a25b41efcf7b7b3f29\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 8 00:41:01.040930 containerd[1558]: time="2025-05-08T00:41:01.040931505Z" level=info msg="RemovePodSandbox \"c6cc7d2016316c97ad165a455ea29274388000545cdc06a25b41efcf7b7b3f29\" returns successfully"
May 8 00:41:01.044686 containerd[1558]: time="2025-05-08T00:41:01.041761946Z" level=info msg="StopPodSandbox for \"ead98e6b6ab43c87076ce18b207121794da2ee47ef64efc48f045267c25d06dd\""
May 8 00:41:01.144369 containerd[1558]: 2025-05-08 00:41:01.096 [WARNING][5642] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ead98e6b6ab43c87076ce18b207121794da2ee47ef64efc48f045267c25d06dd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--vtqkm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"19690749-3374-43c7-ba3a-50053c69dd38", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 40, 13, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1f8f567df9df4a7b068dbed0a88d824636f1b68517d9e31c371cc961d0eb8de4", Pod:"coredns-7db6d8ff4d-vtqkm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0daa0386fcf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
May 8 00:41:01.144369 containerd[1558]: 2025-05-08 00:41:01.096 [INFO][5642] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ead98e6b6ab43c87076ce18b207121794da2ee47ef64efc48f045267c25d06dd"
May 8 00:41:01.144369 containerd[1558]: 2025-05-08 00:41:01.096 [INFO][5642] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ead98e6b6ab43c87076ce18b207121794da2ee47ef64efc48f045267c25d06dd" iface="eth0" netns=""
May 8 00:41:01.144369 containerd[1558]: 2025-05-08 00:41:01.096 [INFO][5642] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ead98e6b6ab43c87076ce18b207121794da2ee47ef64efc48f045267c25d06dd"
May 8 00:41:01.144369 containerd[1558]: 2025-05-08 00:41:01.096 [INFO][5642] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ead98e6b6ab43c87076ce18b207121794da2ee47ef64efc48f045267c25d06dd"
May 8 00:41:01.144369 containerd[1558]: 2025-05-08 00:41:01.132 [INFO][5650] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ead98e6b6ab43c87076ce18b207121794da2ee47ef64efc48f045267c25d06dd" HandleID="k8s-pod-network.ead98e6b6ab43c87076ce18b207121794da2ee47ef64efc48f045267c25d06dd" Workload="localhost-k8s-coredns--7db6d8ff4d--vtqkm-eth0"
May 8 00:41:01.144369 containerd[1558]: 2025-05-08 00:41:01.132 [INFO][5650] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 8 00:41:01.144369 containerd[1558]: 2025-05-08 00:41:01.132 [INFO][5650] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 8 00:41:01.144369 containerd[1558]: 2025-05-08 00:41:01.137 [WARNING][5650] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ead98e6b6ab43c87076ce18b207121794da2ee47ef64efc48f045267c25d06dd" HandleID="k8s-pod-network.ead98e6b6ab43c87076ce18b207121794da2ee47ef64efc48f045267c25d06dd" Workload="localhost-k8s-coredns--7db6d8ff4d--vtqkm-eth0"
May 8 00:41:01.144369 containerd[1558]: 2025-05-08 00:41:01.137 [INFO][5650] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ead98e6b6ab43c87076ce18b207121794da2ee47ef64efc48f045267c25d06dd" HandleID="k8s-pod-network.ead98e6b6ab43c87076ce18b207121794da2ee47ef64efc48f045267c25d06dd" Workload="localhost-k8s-coredns--7db6d8ff4d--vtqkm-eth0"
May 8 00:41:01.144369 containerd[1558]: 2025-05-08 00:41:01.138 [INFO][5650] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 8 00:41:01.144369 containerd[1558]: 2025-05-08 00:41:01.141 [INFO][5642] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ead98e6b6ab43c87076ce18b207121794da2ee47ef64efc48f045267c25d06dd"
May 8 00:41:01.144369 containerd[1558]: time="2025-05-08T00:41:01.144328229Z" level=info msg="TearDown network for sandbox \"ead98e6b6ab43c87076ce18b207121794da2ee47ef64efc48f045267c25d06dd\" successfully"
May 8 00:41:01.144369 containerd[1558]: time="2025-05-08T00:41:01.144355130Z" level=info msg="StopPodSandbox for \"ead98e6b6ab43c87076ce18b207121794da2ee47ef64efc48f045267c25d06dd\" returns successfully"
May 8 00:41:01.144848 containerd[1558]: time="2025-05-08T00:41:01.144803709Z" level=info msg="RemovePodSandbox for \"ead98e6b6ab43c87076ce18b207121794da2ee47ef64efc48f045267c25d06dd\""
May 8 00:41:01.144875 containerd[1558]: time="2025-05-08T00:41:01.144827333Z" level=info msg="Forcibly stopping sandbox \"ead98e6b6ab43c87076ce18b207121794da2ee47ef64efc48f045267c25d06dd\""
May 8 00:41:01.216009 containerd[1558]: 2025-05-08 00:41:01.183 [WARNING][5672] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ead98e6b6ab43c87076ce18b207121794da2ee47ef64efc48f045267c25d06dd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--vtqkm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"19690749-3374-43c7-ba3a-50053c69dd38", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 40, 13, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1f8f567df9df4a7b068dbed0a88d824636f1b68517d9e31c371cc961d0eb8de4", Pod:"coredns-7db6d8ff4d-vtqkm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0daa0386fcf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
May 8 00:41:01.216009 containerd[1558]: 2025-05-08 00:41:01.183 [INFO][5672] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ead98e6b6ab43c87076ce18b207121794da2ee47ef64efc48f045267c25d06dd"
May 8 00:41:01.216009 containerd[1558]: 2025-05-08 00:41:01.183 [INFO][5672] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ead98e6b6ab43c87076ce18b207121794da2ee47ef64efc48f045267c25d06dd" iface="eth0" netns=""
May 8 00:41:01.216009 containerd[1558]: 2025-05-08 00:41:01.183 [INFO][5672] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ead98e6b6ab43c87076ce18b207121794da2ee47ef64efc48f045267c25d06dd"
May 8 00:41:01.216009 containerd[1558]: 2025-05-08 00:41:01.183 [INFO][5672] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ead98e6b6ab43c87076ce18b207121794da2ee47ef64efc48f045267c25d06dd"
May 8 00:41:01.216009 containerd[1558]: 2025-05-08 00:41:01.203 [INFO][5680] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ead98e6b6ab43c87076ce18b207121794da2ee47ef64efc48f045267c25d06dd" HandleID="k8s-pod-network.ead98e6b6ab43c87076ce18b207121794da2ee47ef64efc48f045267c25d06dd" Workload="localhost-k8s-coredns--7db6d8ff4d--vtqkm-eth0"
May 8 00:41:01.216009 containerd[1558]: 2025-05-08 00:41:01.203 [INFO][5680] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 8 00:41:01.216009 containerd[1558]: 2025-05-08 00:41:01.203 [INFO][5680] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 8 00:41:01.216009 containerd[1558]: 2025-05-08 00:41:01.208 [WARNING][5680] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ead98e6b6ab43c87076ce18b207121794da2ee47ef64efc48f045267c25d06dd" HandleID="k8s-pod-network.ead98e6b6ab43c87076ce18b207121794da2ee47ef64efc48f045267c25d06dd" Workload="localhost-k8s-coredns--7db6d8ff4d--vtqkm-eth0"
May 8 00:41:01.216009 containerd[1558]: 2025-05-08 00:41:01.208 [INFO][5680] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ead98e6b6ab43c87076ce18b207121794da2ee47ef64efc48f045267c25d06dd" HandleID="k8s-pod-network.ead98e6b6ab43c87076ce18b207121794da2ee47ef64efc48f045267c25d06dd" Workload="localhost-k8s-coredns--7db6d8ff4d--vtqkm-eth0"
May 8 00:41:01.216009 containerd[1558]: 2025-05-08 00:41:01.210 [INFO][5680] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 8 00:41:01.216009 containerd[1558]: 2025-05-08 00:41:01.212 [INFO][5672] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ead98e6b6ab43c87076ce18b207121794da2ee47ef64efc48f045267c25d06dd"
May 8 00:41:01.216717 containerd[1558]: time="2025-05-08T00:41:01.216052059Z" level=info msg="TearDown network for sandbox \"ead98e6b6ab43c87076ce18b207121794da2ee47ef64efc48f045267c25d06dd\" successfully"
May 8 00:41:01.220649 containerd[1558]: time="2025-05-08T00:41:01.220605464Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ead98e6b6ab43c87076ce18b207121794da2ee47ef64efc48f045267c25d06dd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 8 00:41:01.220721 containerd[1558]: time="2025-05-08T00:41:01.220687739Z" level=info msg="RemovePodSandbox \"ead98e6b6ab43c87076ce18b207121794da2ee47ef64efc48f045267c25d06dd\" returns successfully"
May 8 00:41:01.221204 containerd[1558]: time="2025-05-08T00:41:01.221184650Z" level=info msg="StopPodSandbox for \"2b19781eef51ae638294fda2c6ac822d8cb804214ca6e4cde4ad06832d97450f\""
May 8 00:41:01.292058 containerd[1558]: 2025-05-08 00:41:01.260 [WARNING][5705] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2b19781eef51ae638294fda2c6ac822d8cb804214ca6e4cde4ad06832d97450f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--8dhxq-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"21aa72be-6d70-4094-91d7-d01c91cb809c", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 40, 13, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e4059db4aecbcbbea6d2d97e0c3c3c041d5e67063e09a5fc5d87bb3812e3e10f", Pod:"coredns-7db6d8ff4d-8dhxq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3211da4053c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
May 8 00:41:01.292058 containerd[1558]: 2025-05-08 00:41:01.260 [INFO][5705] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2b19781eef51ae638294fda2c6ac822d8cb804214ca6e4cde4ad06832d97450f"
May 8 00:41:01.292058 containerd[1558]: 2025-05-08 00:41:01.260 [INFO][5705] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2b19781eef51ae638294fda2c6ac822d8cb804214ca6e4cde4ad06832d97450f" iface="eth0" netns=""
May 8 00:41:01.292058 containerd[1558]: 2025-05-08 00:41:01.260 [INFO][5705] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2b19781eef51ae638294fda2c6ac822d8cb804214ca6e4cde4ad06832d97450f"
May 8 00:41:01.292058 containerd[1558]: 2025-05-08 00:41:01.260 [INFO][5705] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2b19781eef51ae638294fda2c6ac822d8cb804214ca6e4cde4ad06832d97450f"
May 8 00:41:01.292058 containerd[1558]: 2025-05-08 00:41:01.280 [INFO][5713] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2b19781eef51ae638294fda2c6ac822d8cb804214ca6e4cde4ad06832d97450f" HandleID="k8s-pod-network.2b19781eef51ae638294fda2c6ac822d8cb804214ca6e4cde4ad06832d97450f" Workload="localhost-k8s-coredns--7db6d8ff4d--8dhxq-eth0"
May 8 00:41:01.292058 containerd[1558]: 2025-05-08 00:41:01.280 [INFO][5713] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 8 00:41:01.292058 containerd[1558]: 2025-05-08 00:41:01.280 [INFO][5713] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 8 00:41:01.292058 containerd[1558]: 2025-05-08 00:41:01.285 [WARNING][5713] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2b19781eef51ae638294fda2c6ac822d8cb804214ca6e4cde4ad06832d97450f" HandleID="k8s-pod-network.2b19781eef51ae638294fda2c6ac822d8cb804214ca6e4cde4ad06832d97450f" Workload="localhost-k8s-coredns--7db6d8ff4d--8dhxq-eth0"
May 8 00:41:01.292058 containerd[1558]: 2025-05-08 00:41:01.285 [INFO][5713] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2b19781eef51ae638294fda2c6ac822d8cb804214ca6e4cde4ad06832d97450f" HandleID="k8s-pod-network.2b19781eef51ae638294fda2c6ac822d8cb804214ca6e4cde4ad06832d97450f" Workload="localhost-k8s-coredns--7db6d8ff4d--8dhxq-eth0"
May 8 00:41:01.292058 containerd[1558]: 2025-05-08 00:41:01.286 [INFO][5713] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 8 00:41:01.292058 containerd[1558]: 2025-05-08 00:41:01.289 [INFO][5705] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2b19781eef51ae638294fda2c6ac822d8cb804214ca6e4cde4ad06832d97450f"
May 8 00:41:01.292462 containerd[1558]: time="2025-05-08T00:41:01.292138463Z" level=info msg="TearDown network for sandbox \"2b19781eef51ae638294fda2c6ac822d8cb804214ca6e4cde4ad06832d97450f\" successfully"
May 8 00:41:01.292462 containerd[1558]: time="2025-05-08T00:41:01.292163480Z" level=info msg="StopPodSandbox for \"2b19781eef51ae638294fda2c6ac822d8cb804214ca6e4cde4ad06832d97450f\" returns successfully"
May 8 00:41:01.292689 containerd[1558]: time="2025-05-08T00:41:01.292657214Z" level=info msg="RemovePodSandbox for \"2b19781eef51ae638294fda2c6ac822d8cb804214ca6e4cde4ad06832d97450f\""
May 8 00:41:01.292689 containerd[1558]: time="2025-05-08T00:41:01.292683645Z" level=info msg="Forcibly stopping sandbox \"2b19781eef51ae638294fda2c6ac822d8cb804214ca6e4cde4ad06832d97450f\""
May 8 00:41:01.364535 containerd[1558]: 2025-05-08 00:41:01.328 [WARNING][5735] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2b19781eef51ae638294fda2c6ac822d8cb804214ca6e4cde4ad06832d97450f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--8dhxq-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"21aa72be-6d70-4094-91d7-d01c91cb809c", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 40, 13, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e4059db4aecbcbbea6d2d97e0c3c3c041d5e67063e09a5fc5d87bb3812e3e10f", Pod:"coredns-7db6d8ff4d-8dhxq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3211da4053c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
May 8 00:41:01.364535 containerd[1558]: 2025-05-08 00:41:01.328 [INFO][5735] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2b19781eef51ae638294fda2c6ac822d8cb804214ca6e4cde4ad06832d97450f"
May 8 00:41:01.364535 containerd[1558]: 2025-05-08 00:41:01.328 [INFO][5735] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2b19781eef51ae638294fda2c6ac822d8cb804214ca6e4cde4ad06832d97450f" iface="eth0" netns=""
May 8 00:41:01.364535 containerd[1558]: 2025-05-08 00:41:01.328 [INFO][5735] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2b19781eef51ae638294fda2c6ac822d8cb804214ca6e4cde4ad06832d97450f"
May 8 00:41:01.364535 containerd[1558]: 2025-05-08 00:41:01.328 [INFO][5735] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2b19781eef51ae638294fda2c6ac822d8cb804214ca6e4cde4ad06832d97450f"
May 8 00:41:01.364535 containerd[1558]: 2025-05-08 00:41:01.352 [INFO][5743] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2b19781eef51ae638294fda2c6ac822d8cb804214ca6e4cde4ad06832d97450f" HandleID="k8s-pod-network.2b19781eef51ae638294fda2c6ac822d8cb804214ca6e4cde4ad06832d97450f" Workload="localhost-k8s-coredns--7db6d8ff4d--8dhxq-eth0"
May 8 00:41:01.364535 containerd[1558]: 2025-05-08 00:41:01.352 [INFO][5743] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 8 00:41:01.364535 containerd[1558]: 2025-05-08 00:41:01.352 [INFO][5743] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 8 00:41:01.364535 containerd[1558]: 2025-05-08 00:41:01.358 [WARNING][5743] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist.
Ignoring ContainerID="2b19781eef51ae638294fda2c6ac822d8cb804214ca6e4cde4ad06832d97450f" HandleID="k8s-pod-network.2b19781eef51ae638294fda2c6ac822d8cb804214ca6e4cde4ad06832d97450f" Workload="localhost-k8s-coredns--7db6d8ff4d--8dhxq-eth0" May 8 00:41:01.364535 containerd[1558]: 2025-05-08 00:41:01.358 [INFO][5743] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2b19781eef51ae638294fda2c6ac822d8cb804214ca6e4cde4ad06832d97450f" HandleID="k8s-pod-network.2b19781eef51ae638294fda2c6ac822d8cb804214ca6e4cde4ad06832d97450f" Workload="localhost-k8s-coredns--7db6d8ff4d--8dhxq-eth0" May 8 00:41:01.364535 containerd[1558]: 2025-05-08 00:41:01.359 [INFO][5743] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:41:01.364535 containerd[1558]: 2025-05-08 00:41:01.362 [INFO][5735] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2b19781eef51ae638294fda2c6ac822d8cb804214ca6e4cde4ad06832d97450f" May 8 00:41:01.364993 containerd[1558]: time="2025-05-08T00:41:01.364567998Z" level=info msg="TearDown network for sandbox \"2b19781eef51ae638294fda2c6ac822d8cb804214ca6e4cde4ad06832d97450f\" successfully" May 8 00:41:01.368465 containerd[1558]: time="2025-05-08T00:41:01.368438320Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2b19781eef51ae638294fda2c6ac822d8cb804214ca6e4cde4ad06832d97450f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:41:01.368521 containerd[1558]: time="2025-05-08T00:41:01.368483957Z" level=info msg="RemovePodSandbox \"2b19781eef51ae638294fda2c6ac822d8cb804214ca6e4cde4ad06832d97450f\" returns successfully" May 8 00:41:01.369009 containerd[1558]: time="2025-05-08T00:41:01.368987830Z" level=info msg="StopPodSandbox for \"488830d2fa408d4ed9617969e45febb93e6ac64e7027f02910c5031ada81264f\"" May 8 00:41:01.433382 containerd[1558]: 2025-05-08 00:41:01.401 [WARNING][5765] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="488830d2fa408d4ed9617969e45febb93e6ac64e7027f02910c5031ada81264f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--56d545bd9b--pw449-eth0", GenerateName:"calico-apiserver-56d545bd9b-", Namespace:"calico-apiserver", SelfLink:"", UID:"0382afa2-c5cb-48de-9957-0becdf36fe1b", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 40, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56d545bd9b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bdb4250500869f93cde1a6d923d204d4df113d552edbbc60b25328432ee58ddf", Pod:"calico-apiserver-56d545bd9b-pw449", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie585983f6a9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:41:01.433382 containerd[1558]: 2025-05-08 00:41:01.402 [INFO][5765] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="488830d2fa408d4ed9617969e45febb93e6ac64e7027f02910c5031ada81264f" May 8 00:41:01.433382 containerd[1558]: 2025-05-08 00:41:01.402 [INFO][5765] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="488830d2fa408d4ed9617969e45febb93e6ac64e7027f02910c5031ada81264f" iface="eth0" netns="" May 8 00:41:01.433382 containerd[1558]: 2025-05-08 00:41:01.402 [INFO][5765] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="488830d2fa408d4ed9617969e45febb93e6ac64e7027f02910c5031ada81264f" May 8 00:41:01.433382 containerd[1558]: 2025-05-08 00:41:01.402 [INFO][5765] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="488830d2fa408d4ed9617969e45febb93e6ac64e7027f02910c5031ada81264f" May 8 00:41:01.433382 containerd[1558]: 2025-05-08 00:41:01.422 [INFO][5773] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="488830d2fa408d4ed9617969e45febb93e6ac64e7027f02910c5031ada81264f" HandleID="k8s-pod-network.488830d2fa408d4ed9617969e45febb93e6ac64e7027f02910c5031ada81264f" Workload="localhost-k8s-calico--apiserver--56d545bd9b--pw449-eth0" May 8 00:41:01.433382 containerd[1558]: 2025-05-08 00:41:01.422 [INFO][5773] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:41:01.433382 containerd[1558]: 2025-05-08 00:41:01.422 [INFO][5773] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:41:01.433382 containerd[1558]: 2025-05-08 00:41:01.427 [WARNING][5773] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="488830d2fa408d4ed9617969e45febb93e6ac64e7027f02910c5031ada81264f" HandleID="k8s-pod-network.488830d2fa408d4ed9617969e45febb93e6ac64e7027f02910c5031ada81264f" Workload="localhost-k8s-calico--apiserver--56d545bd9b--pw449-eth0" May 8 00:41:01.433382 containerd[1558]: 2025-05-08 00:41:01.427 [INFO][5773] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="488830d2fa408d4ed9617969e45febb93e6ac64e7027f02910c5031ada81264f" HandleID="k8s-pod-network.488830d2fa408d4ed9617969e45febb93e6ac64e7027f02910c5031ada81264f" Workload="localhost-k8s-calico--apiserver--56d545bd9b--pw449-eth0" May 8 00:41:01.433382 containerd[1558]: 2025-05-08 00:41:01.428 [INFO][5773] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:41:01.433382 containerd[1558]: 2025-05-08 00:41:01.430 [INFO][5765] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="488830d2fa408d4ed9617969e45febb93e6ac64e7027f02910c5031ada81264f" May 8 00:41:01.433382 containerd[1558]: time="2025-05-08T00:41:01.433333420Z" level=info msg="TearDown network for sandbox \"488830d2fa408d4ed9617969e45febb93e6ac64e7027f02910c5031ada81264f\" successfully" May 8 00:41:01.437541 containerd[1558]: time="2025-05-08T00:41:01.433358097Z" level=info msg="StopPodSandbox for \"488830d2fa408d4ed9617969e45febb93e6ac64e7027f02910c5031ada81264f\" returns successfully" May 8 00:41:01.437941 containerd[1558]: time="2025-05-08T00:41:01.437914297Z" level=info msg="RemovePodSandbox for \"488830d2fa408d4ed9617969e45febb93e6ac64e7027f02910c5031ada81264f\"" May 8 00:41:01.437999 containerd[1558]: time="2025-05-08T00:41:01.437946689Z" level=info msg="Forcibly stopping sandbox \"488830d2fa408d4ed9617969e45febb93e6ac64e7027f02910c5031ada81264f\"" May 8 00:41:01.501245 containerd[1558]: 2025-05-08 00:41:01.471 [WARNING][5796] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="488830d2fa408d4ed9617969e45febb93e6ac64e7027f02910c5031ada81264f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--56d545bd9b--pw449-eth0", GenerateName:"calico-apiserver-56d545bd9b-", Namespace:"calico-apiserver", SelfLink:"", UID:"0382afa2-c5cb-48de-9957-0becdf36fe1b", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 0, 40, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56d545bd9b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bdb4250500869f93cde1a6d923d204d4df113d552edbbc60b25328432ee58ddf", Pod:"calico-apiserver-56d545bd9b-pw449", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie585983f6a9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 00:41:01.501245 containerd[1558]: 2025-05-08 00:41:01.471 [INFO][5796] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="488830d2fa408d4ed9617969e45febb93e6ac64e7027f02910c5031ada81264f" May 8 00:41:01.501245 containerd[1558]: 2025-05-08 00:41:01.471 [INFO][5796] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="488830d2fa408d4ed9617969e45febb93e6ac64e7027f02910c5031ada81264f" iface="eth0" netns="" May 8 00:41:01.501245 containerd[1558]: 2025-05-08 00:41:01.471 [INFO][5796] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="488830d2fa408d4ed9617969e45febb93e6ac64e7027f02910c5031ada81264f" May 8 00:41:01.501245 containerd[1558]: 2025-05-08 00:41:01.471 [INFO][5796] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="488830d2fa408d4ed9617969e45febb93e6ac64e7027f02910c5031ada81264f" May 8 00:41:01.501245 containerd[1558]: 2025-05-08 00:41:01.489 [INFO][5804] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="488830d2fa408d4ed9617969e45febb93e6ac64e7027f02910c5031ada81264f" HandleID="k8s-pod-network.488830d2fa408d4ed9617969e45febb93e6ac64e7027f02910c5031ada81264f" Workload="localhost-k8s-calico--apiserver--56d545bd9b--pw449-eth0" May 8 00:41:01.501245 containerd[1558]: 2025-05-08 00:41:01.489 [INFO][5804] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 00:41:01.501245 containerd[1558]: 2025-05-08 00:41:01.489 [INFO][5804] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 00:41:01.501245 containerd[1558]: 2025-05-08 00:41:01.495 [WARNING][5804] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="488830d2fa408d4ed9617969e45febb93e6ac64e7027f02910c5031ada81264f" HandleID="k8s-pod-network.488830d2fa408d4ed9617969e45febb93e6ac64e7027f02910c5031ada81264f" Workload="localhost-k8s-calico--apiserver--56d545bd9b--pw449-eth0" May 8 00:41:01.501245 containerd[1558]: 2025-05-08 00:41:01.495 [INFO][5804] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="488830d2fa408d4ed9617969e45febb93e6ac64e7027f02910c5031ada81264f" HandleID="k8s-pod-network.488830d2fa408d4ed9617969e45febb93e6ac64e7027f02910c5031ada81264f" Workload="localhost-k8s-calico--apiserver--56d545bd9b--pw449-eth0" May 8 00:41:01.501245 containerd[1558]: 2025-05-08 00:41:01.496 [INFO][5804] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 00:41:01.501245 containerd[1558]: 2025-05-08 00:41:01.498 [INFO][5796] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="488830d2fa408d4ed9617969e45febb93e6ac64e7027f02910c5031ada81264f" May 8 00:41:01.501720 containerd[1558]: time="2025-05-08T00:41:01.501285435Z" level=info msg="TearDown network for sandbox \"488830d2fa408d4ed9617969e45febb93e6ac64e7027f02910c5031ada81264f\" successfully" May 8 00:41:01.505453 containerd[1558]: time="2025-05-08T00:41:01.505413635Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"488830d2fa408d4ed9617969e45febb93e6ac64e7027f02910c5031ada81264f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:41:01.505507 containerd[1558]: time="2025-05-08T00:41:01.505479429Z" level=info msg="RemovePodSandbox \"488830d2fa408d4ed9617969e45febb93e6ac64e7027f02910c5031ada81264f\" returns successfully" May 8 00:41:05.932231 systemd[1]: Started sshd@16-10.0.0.76:22-10.0.0.1:51338.service - OpenSSH per-connection server daemon (10.0.0.1:51338). May 8 00:41:05.964814 sshd[5838]: Accepted publickey for core from 10.0.0.1 port 51338 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:41:05.966478 sshd[5838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:41:05.970384 systemd-logind[1534]: New session 17 of user core. May 8 00:41:05.983270 systemd[1]: Started session-17.scope - Session 17 of User core. May 8 00:41:06.088556 sshd[5838]: pam_unix(sshd:session): session closed for user core May 8 00:41:06.092332 systemd[1]: sshd@16-10.0.0.76:22-10.0.0.1:51338.service: Deactivated successfully. May 8 00:41:06.094691 systemd-logind[1534]: Session 17 logged out. Waiting for processes to exit. May 8 00:41:06.094855 systemd[1]: session-17.scope: Deactivated successfully. May 8 00:41:06.095780 systemd-logind[1534]: Removed session 17. May 8 00:41:11.101361 systemd[1]: Started sshd@17-10.0.0.76:22-10.0.0.1:40200.service - OpenSSH per-connection server daemon (10.0.0.1:40200). May 8 00:41:11.136431 sshd[5853]: Accepted publickey for core from 10.0.0.1 port 40200 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:41:11.138237 sshd[5853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:41:11.142535 systemd-logind[1534]: New session 18 of user core. May 8 00:41:11.154225 systemd[1]: Started session-18.scope - Session 18 of User core. May 8 00:41:11.272127 sshd[5853]: pam_unix(sshd:session): session closed for user core May 8 00:41:11.280408 systemd[1]: Started sshd@18-10.0.0.76:22-10.0.0.1:40214.service - OpenSSH per-connection server daemon (10.0.0.1:40214). 
May 8 00:41:11.281115 systemd[1]: sshd@17-10.0.0.76:22-10.0.0.1:40200.service: Deactivated successfully. May 8 00:41:11.286237 systemd[1]: session-18.scope: Deactivated successfully. May 8 00:41:11.287880 systemd-logind[1534]: Session 18 logged out. Waiting for processes to exit. May 8 00:41:11.289124 systemd-logind[1534]: Removed session 18. May 8 00:41:11.317917 sshd[5867]: Accepted publickey for core from 10.0.0.1 port 40214 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:41:11.319892 sshd[5867]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:41:11.324597 systemd-logind[1534]: New session 19 of user core. May 8 00:41:11.334428 systemd[1]: Started session-19.scope - Session 19 of User core. May 8 00:41:11.614820 sshd[5867]: pam_unix(sshd:session): session closed for user core May 8 00:41:11.624314 systemd[1]: Started sshd@19-10.0.0.76:22-10.0.0.1:40230.service - OpenSSH per-connection server daemon (10.0.0.1:40230). May 8 00:41:11.625016 systemd[1]: sshd@18-10.0.0.76:22-10.0.0.1:40214.service: Deactivated successfully. May 8 00:41:11.629666 systemd[1]: session-19.scope: Deactivated successfully. May 8 00:41:11.630841 systemd-logind[1534]: Session 19 logged out. Waiting for processes to exit. May 8 00:41:11.632055 systemd-logind[1534]: Removed session 19. May 8 00:41:11.659589 sshd[5880]: Accepted publickey for core from 10.0.0.1 port 40230 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:41:11.661585 sshd[5880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:41:11.666049 systemd-logind[1534]: New session 20 of user core. May 8 00:41:11.676377 systemd[1]: Started session-20.scope - Session 20 of User core. May 8 00:41:12.214015 kubelet[2735]: E0508 00:41:12.210473 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:41:13.174878 sshd[5880]: pam_unix(sshd:session): session closed for user core May 8 00:41:13.184254 systemd[1]: Started sshd@20-10.0.0.76:22-10.0.0.1:40232.service - OpenSSH per-connection server daemon (10.0.0.1:40232). May 8 00:41:13.184760 systemd[1]: sshd@19-10.0.0.76:22-10.0.0.1:40230.service: Deactivated successfully. May 8 00:41:13.192634 systemd-logind[1534]: Session 20 logged out. Waiting for processes to exit. May 8 00:41:13.194898 systemd[1]: session-20.scope: Deactivated successfully. May 8 00:41:13.202640 systemd-logind[1534]: Removed session 20. May 8 00:41:13.228131 sshd[5904]: Accepted publickey for core from 10.0.0.1 port 40232 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:41:13.231594 sshd[5904]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:41:13.236989 systemd-logind[1534]: New session 21 of user core. May 8 00:41:13.244488 systemd[1]: Started session-21.scope - Session 21 of User core. May 8 00:41:13.335421 kubelet[2735]: E0508 00:41:13.335381 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:41:13.464740 sshd[5904]: pam_unix(sshd:session): session closed for user core May 8 00:41:13.476300 systemd[1]: Started sshd@21-10.0.0.76:22-10.0.0.1:40234.service - OpenSSH per-connection server daemon (10.0.0.1:40234). 
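
The recurring kubelet dns.go:153 error in this stretch reflects the glibc resolver's hard limit of three nameserver entries: when a pod's resolv.conf would carry more, kubelet keeps the first three and logs the applied line (here 1.1.1.1 1.0.0.1 8.8.8.8). A rough sketch of that trimming, assuming a simplified parsed representation rather than kubelet's real types (its actual logic lives under pkg/kubelet/network/dns):

```go
// Rough sketch of how a resolv.conf nameserver list gets capped at three
// entries, which is what the kubelet "Nameserver limits exceeded" error
// above reports. Simplified; not kubelet's actual code.
package main

import (
	"fmt"
	"strings"
)

const maxNameservers = 3 // glibc's MAXNS: resolvers past the third are ignored

// capNameservers keeps the first three servers and reports the rest.
func capNameservers(servers []string) (applied, omitted []string) {
	if len(servers) <= maxNameservers {
		return servers, nil
	}
	return servers[:maxNameservers], servers[maxNameservers:]
}

func main() {
	servers := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"}
	applied, omitted := capNameservers(servers)
	if len(omitted) > 0 {
		fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %s\n",
			strings.Join(applied, " "))
	}
}
```

The error is cosmetic for resolution as long as the first three servers answer; it is worth fixing only if the omitted servers were load-bearing (for example, a cluster-internal resolver listed fourth).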
May 8 00:41:13.476912 systemd[1]: sshd@20-10.0.0.76:22-10.0.0.1:40232.service: Deactivated successfully. May 8 00:41:13.479803 systemd[1]: session-21.scope: Deactivated successfully. May 8 00:41:13.481801 systemd-logind[1534]: Session 21 logged out. Waiting for processes to exit. May 8 00:41:13.483662 systemd-logind[1534]: Removed session 21. May 8 00:41:13.510968 sshd[5942]: Accepted publickey for core from 10.0.0.1 port 40234 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:41:13.512671 sshd[5942]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:41:13.517513 systemd-logind[1534]: New session 22 of user core. May 8 00:41:13.523091 systemd[1]: Started session-22.scope - Session 22 of User core. May 8 00:41:13.641072 sshd[5942]: pam_unix(sshd:session): session closed for user core May 8 00:41:13.645226 systemd[1]: sshd@21-10.0.0.76:22-10.0.0.1:40234.service: Deactivated successfully. May 8 00:41:13.647721 systemd-logind[1534]: Session 22 logged out. Waiting for processes to exit. May 8 00:41:13.647788 systemd[1]: session-22.scope: Deactivated successfully. May 8 00:41:13.649047 systemd-logind[1534]: Removed session 22. May 8 00:41:18.657316 systemd[1]: Started sshd@22-10.0.0.76:22-10.0.0.1:50786.service - OpenSSH per-connection server daemon (10.0.0.1:50786). May 8 00:41:18.690404 sshd[5963]: Accepted publickey for core from 10.0.0.1 port 50786 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:41:18.692633 sshd[5963]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:41:18.697735 systemd-logind[1534]: New session 23 of user core. May 8 00:41:18.703419 systemd[1]: Started session-23.scope - Session 23 of User core. May 8 00:41:18.825815 sshd[5963]: pam_unix(sshd:session): session closed for user core May 8 00:41:18.830908 systemd[1]: sshd@22-10.0.0.76:22-10.0.0.1:50786.service: Deactivated successfully. May 8 00:41:18.835855 systemd[1]: session-23.scope: Deactivated successfully. May 8 00:41:18.837867 systemd-logind[1534]: Session 23 logged out. Waiting for processes to exit. May 8 00:41:18.839308 systemd-logind[1534]: Removed session 23. May 8 00:41:23.840501 systemd[1]: Started sshd@23-10.0.0.76:22-10.0.0.1:50796.service - OpenSSH per-connection server daemon (10.0.0.1:50796). May 8 00:41:23.876879 sshd[5981]: Accepted publickey for core from 10.0.0.1 port 50796 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:41:23.878690 sshd[5981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:41:23.884334 systemd-logind[1534]: New session 24 of user core. May 8 00:41:23.895340 systemd[1]: Started session-24.scope - Session 24 of User core. May 8 00:41:24.011885 sshd[5981]: pam_unix(sshd:session): session closed for user core May 8 00:41:24.015965 systemd[1]: sshd@23-10.0.0.76:22-10.0.0.1:50796.service: Deactivated successfully. May 8 00:41:24.018690 systemd-logind[1534]: Session 24 logged out. Waiting for processes to exit. May 8 00:41:24.018755 systemd[1]: session-24.scope: Deactivated successfully. May 8 00:41:24.020250 systemd-logind[1534]: Removed session 24. 
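
Sessions 17 through 24 above all follow the same shape: a per-connection sshd service starts, a publickey login for user core is accepted, pam_unix opens the session, systemd runs it as session-N.scope, and the whole thing closes again, often within a second or two — the signature of scripted health checks or automation rather than interactive use. One way to audit such churn is to pair the pam_unix open/close lines and compute durations; a small, hypothetical Go helper for journal text in this exact format:

```go
// Hypothetical helper for auditing the short-lived SSH sessions above: it
// pairs pam_unix "session opened"/"session closed" lines from journal text
// and prints each session's duration. It assumes this log's
// "May 8 00:41:05.966478"-style timestamps and is illustrative only.
package main

import (
	"bufio"
	"fmt"
	"strings"
	"time"
)

const stampLayout = "Jan 2 15:04:05.000000" // matches "May 8 00:41:05.966478"

func main() {
	journal := `May 8 00:41:05.966478 sshd[5838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:41:06.088556 sshd[5838]: pam_unix(sshd:session): session closed for user core`

	opened := map[string]time.Time{} // key: process tag such as "sshd[5838]"
	sc := bufio.NewScanner(strings.NewReader(journal))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) < 5 {
			continue
		}
		stamp, err := time.Parse(stampLayout, strings.Join(fields[:3], " "))
		if err != nil {
			continue
		}
		tag := strings.TrimSuffix(fields[3], ":")
		switch {
		case strings.Contains(sc.Text(), "session opened"):
			opened[tag] = stamp
		case strings.Contains(sc.Text(), "session closed"):
			if start, ok := opened[tag]; ok {
				fmt.Printf("%s: session lasted %v\n", tag, stamp.Sub(start))
				delete(opened, tag)
			}
		}
	}
}
```

On the sample above this prints a duration of roughly 122ms, consistent with the sub-second sessions in the log.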
May 8 00:41:28.209876 kubelet[2735]: E0508 00:41:28.209793 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:41:28.954104 kubelet[2735]: I0508 00:41:28.954050 2735 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:41:29.030430 systemd[1]: Started sshd@24-10.0.0.76:22-10.0.0.1:53712.service - OpenSSH per-connection server daemon (10.0.0.1:53712). May 8 00:41:29.067440 sshd[6002]: Accepted publickey for core from 10.0.0.1 port 53712 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:41:29.069244 sshd[6002]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:41:29.074209 systemd-logind[1534]: New session 25 of user core. May 8 00:41:29.080260 systemd[1]: Started session-25.scope - Session 25 of User core. May 8 00:41:29.261115 sshd[6002]: pam_unix(sshd:session): session closed for user core May 8 00:41:29.265757 systemd[1]: sshd@24-10.0.0.76:22-10.0.0.1:53712.service: Deactivated successfully. May 8 00:41:29.268141 systemd-logind[1534]: Session 25 logged out. Waiting for processes to exit. May 8 00:41:29.268452 systemd[1]: session-25.scope: Deactivated successfully. May 8 00:41:29.269703 systemd-logind[1534]: Removed session 25. May 8 00:41:34.271184 systemd[1]: Started sshd@25-10.0.0.76:22-10.0.0.1:53722.service - OpenSSH per-connection server daemon (10.0.0.1:53722). May 8 00:41:34.304777 sshd[6021]: Accepted publickey for core from 10.0.0.1 port 53722 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:41:34.306363 sshd[6021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:41:34.310523 systemd-logind[1534]: New session 26 of user core. May 8 00:41:34.317259 systemd[1]: Started session-26.scope - Session 26 of User core. May 8 00:41:34.442347 sshd[6021]: pam_unix(sshd:session): session closed for user core May 8 00:41:34.447125 systemd[1]: sshd@25-10.0.0.76:22-10.0.0.1:53722.service: Deactivated successfully. May 8 00:41:34.449841 systemd-logind[1534]: Session 26 logged out. Waiting for processes to exit. May 8 00:41:34.449971 systemd[1]: session-26.scope: Deactivated successfully. May 8 00:41:34.451267 systemd-logind[1534]: Removed session 26.
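
The kubelet prober_manager.go:312 line ("Failed to trigger a manual run", probe="Readiness") comes from the prober manager nudging a probe worker outside its normal interval. If my reading of kubelet is right, that nudge is a non-blocking send on a buffered channel, so the message means only that the worker's trigger channel was already full and the extra nudge was dropped — not that the probe itself failed. The underlying pattern is the standard Go non-blocking send; names below are illustrative, not kubelet's actual API:

```go
// The non-blocking "manual trigger" pattern that (to the best of my reading)
// underlies the kubelet prober message above: nudge a worker over a buffered
// channel, and if the channel is already full, drop the nudge and log
// instead of blocking the caller.
package main

import (
	"fmt"
	"time"
)

type worker struct {
	manualTriggerCh chan struct{}
}

// run drains trigger signals and simulates running a slow probe for each.
func (w *worker) run() {
	for range w.manualTriggerCh {
		fmt.Println("running probe now")
		time.Sleep(50 * time.Millisecond)
	}
}

// triggerManualRun asks the worker to probe immediately, without blocking
// if the worker is still busy with a previous request.
func triggerManualRun(w *worker) {
	select {
	case w.manualTriggerCh <- struct{}{}:
	default:
		fmt.Println(`"Failed to trigger a manual run" probe="Readiness"`)
	}
}

func main() {
	w := &worker{manualTriggerCh: make(chan struct{}, 1)}
	go w.run()

	for i := 0; i < 3; i++ {
		triggerManualRun(w) // later calls may find the channel still full
	}
	time.Sleep(200 * time.Millisecond)
}
```

Seen that way, an occasional instance of this message during pod churn (as here, right after the sandbox teardowns) is expected noise; only a sustained stream of them would suggest probe workers falling behind.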