May 9 00:28:28.893284 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu May 8 22:52:37 -00 2025
May 9 00:28:28.893305 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=56b660b06ded103a15fe25ebfbdecb898a20f374e429fec465c69b1a75d59c4b
May 9 00:28:28.893317 kernel: BIOS-provided physical RAM map:
May 9 00:28:28.893324 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
May 9 00:28:28.893330 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
May 9 00:28:28.893336 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
May 9 00:28:28.893343 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
May 9 00:28:28.893350 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
May 9 00:28:28.893356 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
May 9 00:28:28.893363 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
May 9 00:28:28.893372 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
May 9 00:28:28.893385 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
May 9 00:28:28.893395 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
May 9 00:28:28.893401 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
May 9 00:28:28.893411 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
May 9 00:28:28.893418 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
May 9 00:28:28.893428 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
May 9 00:28:28.893435 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
May 9 00:28:28.893442 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
May 9 00:28:28.893448 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
May 9 00:28:28.893455 kernel: NX (Execute Disable) protection: active
May 9 00:28:28.893462 kernel: APIC: Static calls initialized
May 9 00:28:28.893469 kernel: efi: EFI v2.7 by EDK II
May 9 00:28:28.893476 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198
May 9 00:28:28.893482 kernel: SMBIOS 2.8 present.
May 9 00:28:28.893489 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
May 9 00:28:28.893496 kernel: Hypervisor detected: KVM
May 9 00:28:28.893505 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 9 00:28:28.893512 kernel: kvm-clock: using sched offset of 5684329697 cycles
May 9 00:28:28.893519 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 9 00:28:28.893527 kernel: tsc: Detected 2794.748 MHz processor
May 9 00:28:28.893534 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 9 00:28:28.893541 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 9 00:28:28.893549 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
May 9 00:28:28.893556 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
May 9 00:28:28.893563 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 9 00:28:28.893597 kernel: Using GB pages for direct mapping
May 9 00:28:28.893605 kernel: Secure boot disabled
May 9 00:28:28.893612 kernel: ACPI: Early table checksum verification disabled
May 9 00:28:28.893619 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
May 9 00:28:28.893630 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
May 9 00:28:28.893638 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 9 00:28:28.893645 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 9 00:28:28.893655 kernel: ACPI: FACS 0x000000009CBDD000 000040
May 9 00:28:28.893662 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 9 00:28:28.893672 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 9 00:28:28.893679 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 9 00:28:28.893687 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 9 00:28:28.893694 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
May 9 00:28:28.893701 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
May 9 00:28:28.893711 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
May 9 00:28:28.893718 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
May 9 00:28:28.893726 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
May 9 00:28:28.893733 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
May 9 00:28:28.893740 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
May 9 00:28:28.893747 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
May 9 00:28:28.893755 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
May 9 00:28:28.893762 kernel: No NUMA configuration found
May 9 00:28:28.893771 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
May 9 00:28:28.893781 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
May 9 00:28:28.893788 kernel: Zone ranges:
May 9 00:28:28.893796 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 9 00:28:28.893803 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
May 9 00:28:28.893811 kernel: Normal empty
May 9 00:28:28.893818 kernel: Movable zone start for each node
May 9 00:28:28.893825 kernel: Early memory node ranges
May 9 00:28:28.893832 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
May 9 00:28:28.893840 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
May 9 00:28:28.893847 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
May 9 00:28:28.893857 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
May 9 00:28:28.893864 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
May 9 00:28:28.893871 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
May 9 00:28:28.893880 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
May 9 00:28:28.893888 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 9 00:28:28.893895 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
May 9 00:28:28.893903 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
May 9 00:28:28.893910 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 9 00:28:28.893917 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
May 9 00:28:28.893927 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
May 9 00:28:28.893934 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
May 9 00:28:28.893942 kernel: ACPI: PM-Timer IO Port: 0x608
May 9 00:28:28.893950 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 9 00:28:28.893957 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 9 00:28:28.893964 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 9 00:28:28.893972 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 9 00:28:28.893979 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 9 00:28:28.893986 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 9 00:28:28.893996 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 9 00:28:28.894003 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 9 00:28:28.894011 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 9 00:28:28.894018 kernel: TSC deadline timer available
May 9 00:28:28.894025 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
May 9 00:28:28.894033 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 9 00:28:28.894040 kernel: kvm-guest: KVM setup pv remote TLB flush
May 9 00:28:28.894047 kernel: kvm-guest: setup PV sched yield
May 9 00:28:28.894054 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
May 9 00:28:28.894064 kernel: Booting paravirtualized kernel on KVM
May 9 00:28:28.894072 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 9 00:28:28.894079 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
May 9 00:28:28.894087 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
May 9 00:28:28.894094 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
May 9 00:28:28.894101 kernel: pcpu-alloc: [0] 0 1 2 3
May 9 00:28:28.894108 kernel: kvm-guest: PV spinlocks enabled
May 9 00:28:28.894116 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 9 00:28:28.894124 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=56b660b06ded103a15fe25ebfbdecb898a20f374e429fec465c69b1a75d59c4b
May 9 00:28:28.894137 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 9 00:28:28.894145 kernel: random: crng init done
May 9 00:28:28.894152 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 9 00:28:28.894159 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 9 00:28:28.894167 kernel: Fallback order for Node 0: 0
May 9 00:28:28.894174 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
May 9 00:28:28.894182 kernel: Policy zone: DMA32
May 9 00:28:28.894189 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 9 00:28:28.894199 kernel: Memory: 2400600K/2567000K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42864K init, 2328K bss, 166140K reserved, 0K cma-reserved)
May 9 00:28:28.894207 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 9 00:28:28.894214 kernel: ftrace: allocating 37944 entries in 149 pages
May 9 00:28:28.894222 kernel: ftrace: allocated 149 pages with 4 groups
May 9 00:28:28.894229 kernel: Dynamic Preempt: voluntary
May 9 00:28:28.894245 kernel: rcu: Preemptible hierarchical RCU implementation.
May 9 00:28:28.894255 kernel: rcu: RCU event tracing is enabled.
May 9 00:28:28.894263 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 9 00:28:28.894271 kernel: Trampoline variant of Tasks RCU enabled.
May 9 00:28:28.894279 kernel: Rude variant of Tasks RCU enabled.
May 9 00:28:28.894286 kernel: Tracing variant of Tasks RCU enabled.
May 9 00:28:28.894294 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 9 00:28:28.894305 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 9 00:28:28.894312 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 9 00:28:28.894322 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 9 00:28:28.894330 kernel: Console: colour dummy device 80x25
May 9 00:28:28.894337 kernel: printk: console [ttyS0] enabled
May 9 00:28:28.894348 kernel: ACPI: Core revision 20230628
May 9 00:28:28.894355 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 9 00:28:28.894363 kernel: APIC: Switch to symmetric I/O mode setup
May 9 00:28:28.894371 kernel: x2apic enabled
May 9 00:28:28.894385 kernel: APIC: Switched APIC routing to: physical x2apic
May 9 00:28:28.894393 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 9 00:28:28.894401 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 9 00:28:28.894409 kernel: kvm-guest: setup PV IPIs
May 9 00:28:28.894418 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 9 00:28:28.894433 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 9 00:28:28.894443 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
May 9 00:28:28.894451 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 9 00:28:28.894459 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 9 00:28:28.894466 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 9 00:28:28.894474 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 9 00:28:28.894482 kernel: Spectre V2 : Mitigation: Retpolines
May 9 00:28:28.894490 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 9 00:28:28.894497 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 9 00:28:28.894508 kernel: RETBleed: Mitigation: untrained return thunk
May 9 00:28:28.894516 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 9 00:28:28.894523 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 9 00:28:28.894531 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 9 00:28:28.894542 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 9 00:28:28.894550 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 9 00:28:28.894558 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 9 00:28:28.894566 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 9 00:28:28.894674 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 9 00:28:28.894682 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 9 00:28:28.894690 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 9 00:28:28.894698 kernel: Freeing SMP alternatives memory: 32K
May 9 00:28:28.894705 kernel: pid_max: default: 32768 minimum: 301
May 9 00:28:28.894713 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 9 00:28:28.894721 kernel: landlock: Up and running.
May 9 00:28:28.894728 kernel: SELinux: Initializing.
May 9 00:28:28.894736 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 9 00:28:28.894747 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 9 00:28:28.894755 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 9 00:28:28.894763 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 9 00:28:28.894771 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 9 00:28:28.894779 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 9 00:28:28.894786 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 9 00:28:28.894794 kernel: ... version: 0
May 9 00:28:28.894801 kernel: ... bit width: 48
May 9 00:28:28.894809 kernel: ... generic registers: 6
May 9 00:28:28.894819 kernel: ... value mask: 0000ffffffffffff
May 9 00:28:28.894827 kernel: ... max period: 00007fffffffffff
May 9 00:28:28.894834 kernel: ... fixed-purpose events: 0
May 9 00:28:28.894842 kernel: ... event mask: 000000000000003f
May 9 00:28:28.894849 kernel: signal: max sigframe size: 1776
May 9 00:28:28.894857 kernel: rcu: Hierarchical SRCU implementation.
May 9 00:28:28.894865 kernel: rcu: Max phase no-delay instances is 400.
May 9 00:28:28.894872 kernel: smp: Bringing up secondary CPUs ...
May 9 00:28:28.894880 kernel: smpboot: x86: Booting SMP configuration:
May 9 00:28:28.894890 kernel: .... node #0, CPUs: #1 #2 #3
May 9 00:28:28.894898 kernel: smp: Brought up 1 node, 4 CPUs
May 9 00:28:28.894906 kernel: smpboot: Max logical packages: 1
May 9 00:28:28.894916 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
May 9 00:28:28.894924 kernel: devtmpfs: initialized
May 9 00:28:28.894931 kernel: x86/mm: Memory block size: 128MB
May 9 00:28:28.894939 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
May 9 00:28:28.894947 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
May 9 00:28:28.894955 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
May 9 00:28:28.894965 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
May 9 00:28:28.894973 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
May 9 00:28:28.894980 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 9 00:28:28.894988 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 9 00:28:28.894996 kernel: pinctrl core: initialized pinctrl subsystem
May 9 00:28:28.895003 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 9 00:28:28.895011 kernel: audit: initializing netlink subsys (disabled)
May 9 00:28:28.895018 kernel: audit: type=2000 audit(1746750508.471:1): state=initialized audit_enabled=0 res=1
May 9 00:28:28.895026 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 9 00:28:28.895036 kernel: thermal_sys: Registered thermal governor 'user_space'
May 9 00:28:28.895043 kernel: cpuidle: using governor menu
May 9 00:28:28.895051 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 9 00:28:28.895059 kernel: dca service started, version 1.12.1
May 9 00:28:28.895067 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
May 9 00:28:28.895074 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
May 9 00:28:28.895082 kernel: PCI: Using configuration type 1 for base access
May 9 00:28:28.895090 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 9 00:28:28.895097 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 9 00:28:28.895107 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 9 00:28:28.895115 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 9 00:28:28.895123 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 9 00:28:28.895130 kernel: ACPI: Added _OSI(Module Device)
May 9 00:28:28.895138 kernel: ACPI: Added _OSI(Processor Device)
May 9 00:28:28.895146 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 9 00:28:28.895153 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 9 00:28:28.895161 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 9 00:28:28.895168 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 9 00:28:28.895178 kernel: ACPI: Interpreter enabled
May 9 00:28:28.895186 kernel: ACPI: PM: (supports S0 S3 S5)
May 9 00:28:28.895194 kernel: ACPI: Using IOAPIC for interrupt routing
May 9 00:28:28.895202 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 9 00:28:28.895209 kernel: PCI: Using E820 reservations for host bridge windows
May 9 00:28:28.895217 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 9 00:28:28.895224 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 9 00:28:28.895441 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 9 00:28:28.895605 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 9 00:28:28.895738 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 9 00:28:28.895749 kernel: PCI host bridge to bus 0000:00
May 9 00:28:28.895897 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 9 00:28:28.896012 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 9 00:28:28.896128 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 9 00:28:28.896242 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
May 9 00:28:28.896362 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 9 00:28:28.896483 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
May 9 00:28:28.896623 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 9 00:28:28.896777 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
May 9 00:28:28.896930 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
May 9 00:28:28.897059 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
May 9 00:28:28.897189 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
May 9 00:28:28.897313 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
May 9 00:28:28.897446 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
May 9 00:28:28.897588 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 9 00:28:28.897733 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
May 9 00:28:28.897864 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
May 9 00:28:28.897989 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
May 9 00:28:28.898122 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
May 9 00:28:28.898261 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
May 9 00:28:28.898397 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
May 9 00:28:28.898523 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
May 9 00:28:28.898676 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
May 9 00:28:28.898830 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
May 9 00:28:28.898958 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
May 9 00:28:28.899089 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
May 9 00:28:28.899214 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
May 9 00:28:28.899341 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
May 9 00:28:28.899491 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
May 9 00:28:28.899642 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 9 00:28:28.899791 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
May 9 00:28:28.899915 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
May 9 00:28:28.900045 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
May 9 00:28:28.900184 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
May 9 00:28:28.900317 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
May 9 00:28:28.900329 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 9 00:28:28.900337 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 9 00:28:28.900345 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 9 00:28:28.900353 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 9 00:28:28.900361 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 9 00:28:28.900372 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 9 00:28:28.900387 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 9 00:28:28.900395 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 9 00:28:28.900403 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 9 00:28:28.900410 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 9 00:28:28.900418 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 9 00:28:28.900425 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 9 00:28:28.900433 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 9 00:28:28.900441 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 9 00:28:28.900451 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 9 00:28:28.900459 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 9 00:28:28.900467 kernel: iommu: Default domain type: Translated
May 9 00:28:28.900475 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 9 00:28:28.900482 kernel: efivars: Registered efivars operations
May 9 00:28:28.900490 kernel: PCI: Using ACPI for IRQ routing
May 9 00:28:28.900498 kernel: PCI: pci_cache_line_size set to 64 bytes
May 9 00:28:28.900505 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
May 9 00:28:28.900513 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
May 9 00:28:28.900523 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
May 9 00:28:28.900531 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
May 9 00:28:28.900676 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 9 00:28:28.900805 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 9 00:28:28.900929 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 9 00:28:28.900939 kernel: vgaarb: loaded
May 9 00:28:28.900947 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 9 00:28:28.900955 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 9 00:28:28.900966 kernel: clocksource: Switched to clocksource kvm-clock
May 9 00:28:28.900974 kernel: VFS: Disk quotas dquot_6.6.0
May 9 00:28:28.900982 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 9 00:28:28.900990 kernel: pnp: PnP ACPI init
May 9 00:28:28.901137 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
May 9 00:28:28.901148 kernel: pnp: PnP ACPI: found 6 devices
May 9 00:28:28.901157 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 9 00:28:28.901165 kernel: NET: Registered PF_INET protocol family
May 9 00:28:28.901177 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 9 00:28:28.901185 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 9 00:28:28.901193 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 9 00:28:28.901201 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 9 00:28:28.901209 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 9 00:28:28.901217 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 9 00:28:28.901225 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 9 00:28:28.901233 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 9 00:28:28.901241 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 9 00:28:28.901252 kernel: NET: Registered PF_XDP protocol family
May 9 00:28:28.901387 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
May 9 00:28:28.901515 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
May 9 00:28:28.901662 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 9 00:28:28.901781 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 9 00:28:28.901897 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 9 00:28:28.902011 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
May 9 00:28:28.902127 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
May 9 00:28:28.902254 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
May 9 00:28:28.902264 kernel: PCI: CLS 0 bytes, default 64
May 9 00:28:28.902272 kernel: Initialise system trusted keyrings
May 9 00:28:28.902280 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 9 00:28:28.902288 kernel: Key type asymmetric registered
May 9 00:28:28.902296 kernel: Asymmetric key parser 'x509' registered
May 9 00:28:28.902303 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 9 00:28:28.902311 kernel: io scheduler mq-deadline registered
May 9 00:28:28.902319 kernel: io scheduler kyber registered
May 9 00:28:28.902330 kernel: io scheduler bfq registered
May 9 00:28:28.902338 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 9 00:28:28.902346 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 9 00:28:28.902354 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 9 00:28:28.902362 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 9 00:28:28.902369 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 9 00:28:28.902386 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 9 00:28:28.902395 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 9 00:28:28.902403 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 9 00:28:28.902413 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 9 00:28:28.902566 kernel: rtc_cmos 00:04: RTC can wake from S4
May 9 00:28:28.902645 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 9 00:28:28.902767 kernel: rtc_cmos 00:04: registered as rtc0
May 9 00:28:28.902902 kernel: rtc_cmos 00:04: setting system clock to 2025-05-09T00:28:28 UTC (1746750508)
May 9 00:28:28.903020 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
May 9 00:28:28.903031 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 9 00:28:28.903038 kernel: efifb: probing for efifb
May 9 00:28:28.903052 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
May 9 00:28:28.903060 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
May 9 00:28:28.903068 kernel: efifb: scrolling: redraw
May 9 00:28:28.903076 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
May 9 00:28:28.903083 kernel: Console: switching to colour frame buffer device 100x37
May 9 00:28:28.903091 kernel: fb0: EFI VGA frame buffer device
May 9 00:28:28.903137 kernel: pstore: Using crash dump compression: deflate
May 9 00:28:28.903159 kernel: pstore: Registered efi_pstore as persistent store backend
May 9 00:28:28.903184 kernel: NET: Registered PF_INET6 protocol family
May 9 00:28:28.903205 kernel: Segment Routing with IPv6
May 9 00:28:28.903213 kernel: In-situ OAM (IOAM) with IPv6
May 9 00:28:28.903221 kernel: NET: Registered PF_PACKET protocol family
May 9 00:28:28.903229 kernel: Key type dns_resolver registered
May 9 00:28:28.903237 kernel: IPI shorthand broadcast: enabled
May 9 00:28:28.903245 kernel: sched_clock: Marking stable (1205002000, 116674202)->(1347382394, -25706192)
May 9 00:28:28.903253 kernel: registered taskstats version 1
May 9 00:28:28.903262 kernel: Loading compiled-in X.509 certificates
May 9 00:28:28.903270 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: fe5c896a3ca06bb89ebdfb7ed85f611806e4c1cc'
May 9 00:28:28.903281 kernel: Key type .fscrypt registered
May 9 00:28:28.903289 kernel: Key type fscrypt-provisioning registered
May 9 00:28:28.903298 kernel: ima: No TPM chip found, activating TPM-bypass!
May 9 00:28:28.903306 kernel: ima: Allocated hash algorithm: sha1
May 9 00:28:28.903314 kernel: ima: No architecture policies found
May 9 00:28:28.903322 kernel: clk: Disabling unused clocks
May 9 00:28:28.903330 kernel: Freeing unused kernel image (initmem) memory: 42864K
May 9 00:28:28.903339 kernel: Write protecting the kernel read-only data: 36864k
May 9 00:28:28.903347 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K
May 9 00:28:28.903357 kernel: Run /init as init process
May 9 00:28:28.903366 kernel: with arguments:
May 9 00:28:28.903380 kernel: /init
May 9 00:28:28.903388 kernel: with environment:
May 9 00:28:28.903396 kernel: HOME=/
May 9 00:28:28.903404 kernel: TERM=linux
May 9 00:28:28.903412 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 9 00:28:28.903422 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 9 00:28:28.903435 systemd[1]: Detected virtualization kvm.
May 9 00:28:28.903444 systemd[1]: Detected architecture x86-64.
May 9 00:28:28.903453 systemd[1]: Running in initrd.
May 9 00:28:28.903461 systemd[1]: No hostname configured, using default hostname.
May 9 00:28:28.903474 systemd[1]: Hostname set to <localhost>.
May 9 00:28:28.903483 systemd[1]: Initializing machine ID from VM UUID.
May 9 00:28:28.903492 systemd[1]: Queued start job for default target initrd.target.
May 9 00:28:28.903500 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 9 00:28:28.903509 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 9 00:28:28.903518 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 9 00:28:28.903527 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 9 00:28:28.903536 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 9 00:28:28.903547 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 9 00:28:28.903558 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 9 00:28:28.903567 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 9 00:28:28.903589 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 9 00:28:28.903598 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 9 00:28:28.903606 systemd[1]: Reached target paths.target - Path Units.
May 9 00:28:28.903615 systemd[1]: Reached target slices.target - Slice Units.
May 9 00:28:28.903626 systemd[1]: Reached target swap.target - Swaps.
May 9 00:28:28.903635 systemd[1]: Reached target timers.target - Timer Units.
May 9 00:28:28.903643 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 9 00:28:28.903652 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 9 00:28:28.903660 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 9 00:28:28.903669 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 9 00:28:28.903678 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 9 00:28:28.903686 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 9 00:28:28.903695 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 9 00:28:28.903706 systemd[1]: Reached target sockets.target - Socket Units.
May 9 00:28:28.903715 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 9 00:28:28.903724 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 9 00:28:28.903732 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 9 00:28:28.903741 systemd[1]: Starting systemd-fsck-usr.service...
May 9 00:28:28.903749 systemd[1]: Starting systemd-journald.service - Journal Service...
May 9 00:28:28.903758 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 9 00:28:28.903766 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 00:28:28.903778 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 9 00:28:28.903787 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 9 00:28:28.903795 systemd[1]: Finished systemd-fsck-usr.service.
May 9 00:28:28.903826 systemd-journald[194]: Collecting audit messages is disabled.
May 9 00:28:28.903848 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 9 00:28:28.903857 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 9 00:28:28.903866 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 9 00:28:28.903874 systemd-journald[194]: Journal started
May 9 00:28:28.903895 systemd-journald[194]: Runtime Journal (/run/log/journal/1b51474a927c485295ca47bd73893d92) is 6.0M, max 48.3M, 42.2M free.
May 9 00:28:28.905594 systemd[1]: Started systemd-journald.service - Journal Service.
May 9 00:28:28.907197 systemd-modules-load[195]: Inserted module 'overlay'
May 9 00:28:28.916774 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 9 00:28:28.917503 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 9 00:28:28.919079 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 9 00:28:28.931692 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 9 00:28:28.932011 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 9 00:28:28.939997 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 00:28:28.949603 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 9 00:28:28.951549 systemd-modules-load[195]: Inserted module 'br_netfilter'
May 9 00:28:28.952498 kernel: Bridge firewalling registered
May 9 00:28:28.955798 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 9 00:28:28.957012 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 9 00:28:28.960439 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 9 00:28:28.969668 dracut-cmdline[223]: dracut-dracut-053
May 9 00:28:28.972806 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 9 00:28:28.975402 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=56b660b06ded103a15fe25ebfbdecb898a20f374e429fec465c69b1a75d59c4b
May 9 00:28:28.983692 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 9 00:28:29.014496 systemd-resolved[242]: Positive Trust Anchors:
May 9 00:28:29.014527 systemd-resolved[242]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 9 00:28:29.014560 systemd-resolved[242]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 9 00:28:29.017152 systemd-resolved[242]: Defaulting to hostname 'linux'.
May 9 00:28:29.018345 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 9 00:28:29.024419 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 9 00:28:29.084609 kernel: SCSI subsystem initialized
May 9 00:28:29.093602 kernel: Loading iSCSI transport class v2.0-870.
May 9 00:28:29.104609 kernel: iscsi: registered transport (tcp)
May 9 00:28:29.125603 kernel: iscsi: registered transport (qla4xxx)
May 9 00:28:29.125623 kernel: QLogic iSCSI HBA Driver
May 9 00:28:29.180857 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 9 00:28:29.190716 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 9 00:28:29.216770 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 9 00:28:29.216797 kernel: device-mapper: uevent: version 1.0.3
May 9 00:28:29.217794 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 9 00:28:29.259599 kernel: raid6: avx2x4 gen() 30155 MB/s
May 9 00:28:29.276596 kernel: raid6: avx2x2 gen() 30829 MB/s
May 9 00:28:29.293739 kernel: raid6: avx2x1 gen() 25848 MB/s
May 9 00:28:29.293757 kernel: raid6: using algorithm avx2x2 gen() 30829 MB/s
May 9 00:28:29.311705 kernel: raid6: .... xor() 19978 MB/s, rmw enabled
May 9 00:28:29.311720 kernel: raid6: using avx2x2 recovery algorithm
May 9 00:28:29.332603 kernel: xor: automatically using best checksumming function avx
May 9 00:28:29.492610 kernel: Btrfs loaded, zoned=no, fsverity=no
May 9 00:28:29.507711 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 9 00:28:29.526843 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 9 00:28:29.543108 systemd-udevd[415]: Using default interface naming scheme 'v255'.
May 9 00:28:29.549128 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 9 00:28:29.558728 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 9 00:28:29.573069 dracut-pre-trigger[418]: rd.md=0: removing MD RAID activation
May 9 00:28:29.608817 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 9 00:28:29.620756 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 9 00:28:29.689214 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 9 00:28:29.702516 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 9 00:28:29.715550 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 9 00:28:29.718849 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 9 00:28:29.721485 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 9 00:28:29.722790 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 9 00:28:29.731646 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
May 9 00:28:29.734937 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 9 00:28:29.742736 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 9 00:28:29.747600 kernel: cryptd: max_cpu_qlen set to 1000
May 9 00:28:29.753657 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 9 00:28:29.753696 kernel: GPT:9289727 != 19775487
May 9 00:28:29.753707 kernel: libata version 3.00 loaded.
May 9 00:28:29.753718 kernel: GPT:Alternate GPT header not at the end of the disk.
May 9 00:28:29.754593 kernel: GPT:9289727 != 19775487
May 9 00:28:29.755268 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 9 00:28:29.758267 kernel: GPT: Use GNU Parted to correct GPT errors.
May 9 00:28:29.758295 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 9 00:28:29.762614 kernel: ahci 0000:00:1f.2: version 3.0
May 9 00:28:29.762860 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 9 00:28:29.765608 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
May 9 00:28:29.765789 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 9 00:28:29.769263 kernel: scsi host0: ahci
May 9 00:28:29.769470 kernel: scsi host1: ahci
May 9 00:28:29.769704 kernel: scsi host2: ahci
May 9 00:28:29.767110 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 9 00:28:29.767277 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 00:28:29.772561 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 9 00:28:29.775754 kernel: scsi host3: ahci
May 9 00:28:29.773823 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 9 00:28:29.774005 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 9 00:28:29.782339 kernel: scsi host4: ahci
May 9 00:28:29.782587 kernel: AVX2 version of gcm_enc/dec engaged.
May 9 00:28:29.782601 kernel: scsi host5: ahci
May 9 00:28:29.782758 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
May 9 00:28:29.782770 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
May 9 00:28:29.782780 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
May 9 00:28:29.784138 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
May 9 00:28:29.784164 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
May 9 00:28:29.785040 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
May 9 00:28:29.785134 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 9 00:28:29.789326 kernel: AES CTR mode by8 optimization enabled
May 9 00:28:29.796913 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 00:28:29.804191 kernel: BTRFS: device fsid 8d57db23-a0fc-4362-9769-38fbda5747c1 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (466)
May 9 00:28:29.804221 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (463)
May 9 00:28:29.822506 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 9 00:28:29.831089 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 9 00:28:29.841932 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 9 00:28:29.845347 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 9 00:28:29.852500 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 9 00:28:29.866805 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 9 00:28:29.869458 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 9 00:28:29.869519 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 9 00:28:29.873372 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 9 00:28:29.876017 disk-uuid[556]: Primary Header is updated.
May 9 00:28:29.876017 disk-uuid[556]: Secondary Entries is updated.
May 9 00:28:29.876017 disk-uuid[556]: Secondary Header is updated.
May 9 00:28:29.880138 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 00:28:29.882759 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 9 00:28:29.886610 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 9 00:28:29.917751 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 9 00:28:29.939789 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 9 00:28:29.961334 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 00:28:30.097740 kernel: ata1: SATA link down (SStatus 0 SControl 300)
May 9 00:28:30.097834 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 9 00:28:30.097864 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
May 9 00:28:30.099357 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 9 00:28:30.099595 kernel: ata2: SATA link down (SStatus 0 SControl 300)
May 9 00:28:30.100601 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 9 00:28:30.101604 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
May 9 00:28:30.101621 kernel: ata3.00: applying bridge limits
May 9 00:28:30.102598 kernel: ata3.00: configured for UDMA/100
May 9 00:28:30.104607 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
May 9 00:28:30.153609 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
May 9 00:28:30.153877 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 9 00:28:30.167604 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
May 9 00:28:30.886992 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 9 00:28:30.887477 disk-uuid[557]: The operation has completed successfully.
May 9 00:28:30.920635 systemd[1]: disk-uuid.service: Deactivated successfully.
May 9 00:28:30.920770 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 9 00:28:30.944755 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 9 00:28:30.950952 sh[598]: Success
May 9 00:28:30.965604 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
May 9 00:28:31.001349 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 9 00:28:31.018240 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 9 00:28:31.021932 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 9 00:28:31.047142 kernel: BTRFS info (device dm-0): first mount of filesystem 8d57db23-a0fc-4362-9769-38fbda5747c1
May 9 00:28:31.047183 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 9 00:28:31.047195 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 9 00:28:31.048190 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 9 00:28:31.048942 kernel: BTRFS info (device dm-0): using free space tree
May 9 00:28:31.054093 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 9 00:28:31.055727 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 9 00:28:31.066701 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 9 00:28:31.068296 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 9 00:28:31.077985 kernel: BTRFS info (device vda6): first mount of filesystem f16ac009-18be-48d6-89c7-f7afe3ecb605
May 9 00:28:31.078030 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 9 00:28:31.078041 kernel: BTRFS info (device vda6): using free space tree
May 9 00:28:31.081652 kernel: BTRFS info (device vda6): auto enabling async discard
May 9 00:28:31.091042 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 9 00:28:31.092819 kernel: BTRFS info (device vda6): last unmount of filesystem f16ac009-18be-48d6-89c7-f7afe3ecb605
May 9 00:28:31.104298 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 9 00:28:31.110724 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 9 00:28:31.181488 ignition[693]: Ignition 2.19.0
May 9 00:28:31.181502 ignition[693]: Stage: fetch-offline
May 9 00:28:31.181539 ignition[693]: no configs at "/usr/lib/ignition/base.d"
May 9 00:28:31.181550 ignition[693]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 9 00:28:31.181658 ignition[693]: parsed url from cmdline: ""
May 9 00:28:31.181663 ignition[693]: no config URL provided
May 9 00:28:31.181669 ignition[693]: reading system config file "/usr/lib/ignition/user.ign"
May 9 00:28:31.181678 ignition[693]: no config at "/usr/lib/ignition/user.ign"
May 9 00:28:31.181708 ignition[693]: op(1): [started] loading QEMU firmware config module
May 9 00:28:31.181714 ignition[693]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 9 00:28:31.190103 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 9 00:28:31.198073 ignition[693]: op(1): [finished] loading QEMU firmware config module
May 9 00:28:31.199719 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 9 00:28:31.221505 systemd-networkd[788]: lo: Link UP
May 9 00:28:31.221516 systemd-networkd[788]: lo: Gained carrier
May 9 00:28:31.223217 systemd-networkd[788]: Enumeration completed
May 9 00:28:31.223556 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 9 00:28:31.223665 systemd-networkd[788]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 00:28:31.223669 systemd-networkd[788]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 9 00:28:31.226235 systemd-networkd[788]: eth0: Link UP
May 9 00:28:31.226239 systemd-networkd[788]: eth0: Gained carrier
May 9 00:28:31.226247 systemd-networkd[788]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 00:28:31.227387 systemd[1]: Reached target network.target - Network.
May 9 00:28:31.242615 systemd-networkd[788]: eth0: DHCPv4 address 10.0.0.48/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 9 00:28:31.254240 ignition[693]: parsing config with SHA512: 5f3dfb6c155e814018a0489eedd31d2b4b53f2d11d26d6a62a1f173dc0904334cf9fc14b774176bfff46dd00e18c986f91183bd7bcbb683f8538c673f380bd19
May 9 00:28:31.257725 unknown[693]: fetched base config from "system"
May 9 00:28:31.257739 unknown[693]: fetched user config from "qemu"
May 9 00:28:31.258081 ignition[693]: fetch-offline: fetch-offline passed
May 9 00:28:31.258142 ignition[693]: Ignition finished successfully
May 9 00:28:31.260554 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 9 00:28:31.262692 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 9 00:28:31.272963 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 9 00:28:31.285823 ignition[792]: Ignition 2.19.0
May 9 00:28:31.285836 ignition[792]: Stage: kargs
May 9 00:28:31.286007 ignition[792]: no configs at "/usr/lib/ignition/base.d"
May 9 00:28:31.286019 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 9 00:28:31.289821 ignition[792]: kargs: kargs passed
May 9 00:28:31.289876 ignition[792]: Ignition finished successfully
May 9 00:28:31.294348 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 9 00:28:31.303805 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 9 00:28:31.317266 ignition[800]: Ignition 2.19.0
May 9 00:28:31.317277 ignition[800]: Stage: disks
May 9 00:28:31.317440 ignition[800]: no configs at "/usr/lib/ignition/base.d"
May 9 00:28:31.317451 ignition[800]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 9 00:28:31.321265 ignition[800]: disks: disks passed
May 9 00:28:31.321324 ignition[800]: Ignition finished successfully
May 9 00:28:31.324923 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 9 00:28:31.327109 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 9 00:28:31.327197 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 9 00:28:31.329337 systemd[1]: Reached target local-fs.target - Local File Systems.
May 9 00:28:31.332712 systemd[1]: Reached target sysinit.target - System Initialization.
May 9 00:28:31.334631 systemd[1]: Reached target basic.target - Basic System.
May 9 00:28:31.345693 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 9 00:28:31.357540 systemd-fsck[810]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 9 00:28:31.364221 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 9 00:28:31.378693 systemd[1]: Mounting sysroot.mount - /sysroot...
May 9 00:28:31.464598 kernel: EXT4-fs (vda9): mounted filesystem 4cb03022-f5a4-4664-b5b4-bc39fcc2f946 r/w with ordered data mode. Quota mode: none.
May 9 00:28:31.465398 systemd[1]: Mounted sysroot.mount - /sysroot.
May 9 00:28:31.467624 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 9 00:28:31.476660 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 9 00:28:31.478537 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 9 00:28:31.479829 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 9 00:28:31.479872 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 9 00:28:31.488360 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (818)
May 9 00:28:31.488388 kernel: BTRFS info (device vda6): first mount of filesystem f16ac009-18be-48d6-89c7-f7afe3ecb605
May 9 00:28:31.488401 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 9 00:28:31.479895 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 9 00:28:31.493920 kernel: BTRFS info (device vda6): using free space tree
May 9 00:28:31.493936 kernel: BTRFS info (device vda6): auto enabling async discard
May 9 00:28:31.487134 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 9 00:28:31.491723 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 9 00:28:31.494983 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 9 00:28:31.529590 initrd-setup-root[844]: cut: /sysroot/etc/passwd: No such file or directory May 9 00:28:31.533706 initrd-setup-root[851]: cut: /sysroot/etc/group: No such file or directory May 9 00:28:31.539011 initrd-setup-root[858]: cut: /sysroot/etc/shadow: No such file or directory May 9 00:28:31.542533 initrd-setup-root[865]: cut: /sysroot/etc/gshadow: No such file or directory May 9 00:28:31.627662 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 9 00:28:31.633747 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 9 00:28:31.637016 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 9 00:28:31.643604 kernel: BTRFS info (device vda6): last unmount of filesystem f16ac009-18be-48d6-89c7-f7afe3ecb605 May 9 00:28:31.664030 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 9 00:28:31.678343 ignition[934]: INFO : Ignition 2.19.0 May 9 00:28:31.678343 ignition[934]: INFO : Stage: mount May 9 00:28:31.680039 ignition[934]: INFO : no configs at "/usr/lib/ignition/base.d" May 9 00:28:31.680039 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 9 00:28:31.680039 ignition[934]: INFO : mount: mount passed May 9 00:28:31.680039 ignition[934]: INFO : Ignition finished successfully May 9 00:28:31.681997 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 9 00:28:31.688708 systemd[1]: Starting ignition-files.service - Ignition (files)... May 9 00:28:32.046338 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 9 00:28:32.055803 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 9 00:28:32.063607 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (947) May 9 00:28:32.065675 kernel: BTRFS info (device vda6): first mount of filesystem f16ac009-18be-48d6-89c7-f7afe3ecb605 May 9 00:28:32.065693 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 9 00:28:32.065704 kernel: BTRFS info (device vda6): using free space tree May 9 00:28:32.068603 kernel: BTRFS info (device vda6): auto enabling async discard May 9 00:28:32.069946 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 9 00:28:32.099901 ignition[964]: INFO : Ignition 2.19.0 May 9 00:28:32.099901 ignition[964]: INFO : Stage: files May 9 00:28:32.101646 ignition[964]: INFO : no configs at "/usr/lib/ignition/base.d" May 9 00:28:32.101646 ignition[964]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 9 00:28:32.101646 ignition[964]: DEBUG : files: compiled without relabeling support, skipping May 9 00:28:32.101646 ignition[964]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 9 00:28:32.101646 ignition[964]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 9 00:28:32.107857 ignition[964]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 9 00:28:32.109198 ignition[964]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 9 00:28:32.110951 unknown[964]: wrote ssh authorized keys file for user: core May 9 00:28:32.112064 ignition[964]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 9 00:28:32.114439 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 9 00:28:32.116361 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 9 00:28:32.156920 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 9 00:28:32.226114 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 9 00:28:32.228533 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 9 00:28:32.228533 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 9 00:28:32.228533 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 9 00:28:32.228533 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 9 00:28:32.228533 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 9 00:28:32.228533 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 9 00:28:32.228533 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 9 00:28:32.228533 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 9 00:28:32.228533 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 9 00:28:32.228533 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 9 00:28:32.228533 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 9 00:28:32.228533 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 9 00:28:32.228533 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 9 00:28:32.228533 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 May 9 00:28:32.520270 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 9 00:28:33.081779 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 9 00:28:33.081779 ignition[964]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 9 00:28:33.085824 ignition[964]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 9 00:28:33.085824 ignition[964]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 9 00:28:33.085824 ignition[964]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 9 00:28:33.085824 ignition[964]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" May 9 00:28:33.085824 ignition[964]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 9 00:28:33.085824 ignition[964]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 9 00:28:33.085824 ignition[964]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" May 9 00:28:33.085824 ignition[964]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" May 9 00:28:33.107568 ignition[964]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" May 9 00:28:33.112410 ignition[964]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 9 00:28:33.114310 ignition[964]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" May 9 00:28:33.114310 ignition[964]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" May 9 00:28:33.114310 ignition[964]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" May 9 00:28:33.114310 ignition[964]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" May 9 00:28:33.114310 ignition[964]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" May 9 00:28:33.114310 ignition[964]: INFO : files: files passed May 9 00:28:33.114310 ignition[964]: INFO : Ignition finished successfully May 9 00:28:33.115258 systemd[1]: Finished ignition-files.service - Ignition (files). May 9 00:28:33.127709 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 9 00:28:33.129467 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 9 00:28:33.131810 systemd[1]: ignition-quench.service: Deactivated successfully. 
May 9 00:28:33.131923 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 9 00:28:33.139590 initrd-setup-root-after-ignition[992]: grep: /sysroot/oem/oem-release: No such file or directory May 9 00:28:33.142180 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 9 00:28:33.143987 initrd-setup-root-after-ignition[994]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 9 00:28:33.145609 initrd-setup-root-after-ignition[998]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 9 00:28:33.144746 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 9 00:28:33.147403 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 9 00:28:33.160704 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 9 00:28:33.170700 systemd-networkd[788]: eth0: Gained IPv6LL May 9 00:28:33.190362 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 9 00:28:33.190492 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 9 00:28:33.192845 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 9 00:28:33.195004 systemd[1]: Reached target initrd.target - Initrd Default Target. May 9 00:28:33.197042 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 9 00:28:33.197845 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 9 00:28:33.215397 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 9 00:28:33.225696 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 9 00:28:33.234756 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 9 00:28:33.237103 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 9 00:28:33.238390 systemd[1]: Stopped target timers.target - Timer Units. May 9 00:28:33.240348 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 9 00:28:33.240470 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 9 00:28:33.242846 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 9 00:28:33.244408 systemd[1]: Stopped target basic.target - Basic System. May 9 00:28:33.246463 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 9 00:28:33.248787 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 9 00:28:33.250830 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 9 00:28:33.253119 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 9 00:28:33.255210 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 9 00:28:33.257554 systemd[1]: Stopped target sysinit.target - System Initialization. May 9 00:28:33.259602 systemd[1]: Stopped target local-fs.target - Local File Systems. May 9 00:28:33.261818 systemd[1]: Stopped target swap.target - Swaps. May 9 00:28:33.263661 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 9 00:28:33.263796 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 9 00:28:33.266139 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
May 9 00:28:33.267618 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 9 00:28:33.269784 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 9 00:28:33.269882 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 9 00:28:33.272170 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 9 00:28:33.272292 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 9 00:28:33.274710 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 9 00:28:33.274840 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 9 00:28:33.276723 systemd[1]: Stopped target paths.target - Path Units. May 9 00:28:33.278497 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 9 00:28:33.281661 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 9 00:28:33.283224 systemd[1]: Stopped target slices.target - Slice Units. May 9 00:28:33.285171 systemd[1]: Stopped target sockets.target - Socket Units. May 9 00:28:33.287264 systemd[1]: iscsid.socket: Deactivated successfully. May 9 00:28:33.287374 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 9 00:28:33.289128 systemd[1]: iscsiuio.socket: Deactivated successfully. May 9 00:28:33.289218 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 9 00:28:33.291234 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 9 00:28:33.291363 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 9 00:28:33.293961 systemd[1]: ignition-files.service: Deactivated successfully. May 9 00:28:33.294068 systemd[1]: Stopped ignition-files.service - Ignition (files). May 9 00:28:33.306757 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 9 00:28:33.309422 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 9 00:28:33.310337 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 9 00:28:33.310456 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 9 00:28:33.312643 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 9 00:28:33.312823 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 9 00:28:33.319355 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 9 00:28:33.319479 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 9 00:28:33.325408 ignition[1018]: INFO : Ignition 2.19.0 May 9 00:28:33.325408 ignition[1018]: INFO : Stage: umount May 9 00:28:33.325408 ignition[1018]: INFO : no configs at "/usr/lib/ignition/base.d" May 9 00:28:33.325408 ignition[1018]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 9 00:28:33.325408 ignition[1018]: INFO : umount: umount passed May 9 00:28:33.325408 ignition[1018]: INFO : Ignition finished successfully May 9 00:28:33.323968 systemd[1]: ignition-mount.service: Deactivated successfully. May 9 00:28:33.324107 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 9 00:28:33.326036 systemd[1]: Stopped target network.target - Network. May 9 00:28:33.326317 systemd[1]: ignition-disks.service: Deactivated successfully. May 9 00:28:33.326384 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 9 00:28:33.326895 systemd[1]: ignition-kargs.service: Deactivated successfully. 
May 9 00:28:33.326942 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 9 00:28:33.327234 systemd[1]: ignition-setup.service: Deactivated successfully. May 9 00:28:33.327289 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 9 00:28:33.327590 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 9 00:28:33.327636 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 9 00:28:33.328390 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 9 00:28:33.329021 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 9 00:28:33.335680 systemd-networkd[788]: eth0: DHCPv6 lease lost May 9 00:28:33.337930 systemd[1]: systemd-networkd.service: Deactivated successfully. May 9 00:28:33.338074 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 9 00:28:33.340959 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 9 00:28:33.342288 systemd[1]: systemd-resolved.service: Deactivated successfully. May 9 00:28:33.342453 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 9 00:28:33.346906 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 9 00:28:33.346966 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 9 00:28:33.356719 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 9 00:28:33.358110 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 9 00:28:33.358181 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 9 00:28:33.360628 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 9 00:28:33.360685 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 9 00:28:33.362746 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 9 00:28:33.362801 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 9 00:28:33.364923 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 9 00:28:33.364976 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 9 00:28:33.367440 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 9 00:28:33.380784 systemd[1]: network-cleanup.service: Deactivated successfully. May 9 00:28:33.380916 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 9 00:28:33.386562 systemd[1]: systemd-udevd.service: Deactivated successfully. May 9 00:28:33.386755 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 9 00:28:33.388702 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 9 00:28:33.388754 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 9 00:28:33.390418 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 9 00:28:33.390461 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 9 00:28:33.392723 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 9 00:28:33.392775 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 9 00:28:33.394899 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 9 00:28:33.394949 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 9 00:28:33.396978 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
May 9 00:28:33.397030 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 9 00:28:33.406744 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 9 00:28:33.408998 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 9 00:28:33.409066 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 9 00:28:33.411424 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 9 00:28:33.412746 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 9 00:28:33.416624 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 9 00:28:33.417612 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 9 00:28:33.420064 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 9 00:28:33.421089 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 9 00:28:33.423734 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 9 00:28:33.424880 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 9 00:28:33.493101 systemd[1]: sysroot-boot.service: Deactivated successfully. May 9 00:28:33.493232 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 9 00:28:33.495220 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 9 00:28:33.495955 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 9 00:28:33.496009 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 9 00:28:33.519700 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 9 00:28:33.529782 systemd[1]: Switching root. May 9 00:28:33.564471 systemd-journald[194]: Journal stopped May 9 00:28:34.613344 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). May 9 00:28:34.613416 kernel: SELinux: policy capability network_peer_controls=1 May 9 00:28:34.613433 kernel: SELinux: policy capability open_perms=1 May 9 00:28:34.613444 kernel: SELinux: policy capability extended_socket_class=1 May 9 00:28:34.613455 kernel: SELinux: policy capability always_check_network=0 May 9 00:28:34.613470 kernel: SELinux: policy capability cgroup_seclabel=1 May 9 00:28:34.613486 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 9 00:28:34.613496 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 9 00:28:34.613507 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 9 00:28:34.613518 kernel: audit: type=1403 audit(1746750513.890:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 9 00:28:34.613547 systemd[1]: Successfully loaded SELinux policy in 40.050ms. May 9 00:28:34.613568 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.696ms. May 9 00:28:34.613597 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 9 00:28:34.613615 systemd[1]: Detected virtualization kvm. May 9 00:28:34.613627 systemd[1]: Detected architecture x86-64. May 9 00:28:34.613638 systemd[1]: Detected first boot. May 9 00:28:34.613650 systemd[1]: Initializing machine ID from VM UUID. 
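In the post-switch-root lines above, systemd reports the hypervisor and architecture it detected ("Detected virtualization kvm", "Detected architecture x86-64"). The same detections can be confirmed from a shell on the running system with standard systemd tooling:

    systemd-detect-virt    # prints "kvm" on this guest
    uname -m               # prints "x86_64"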
May 9 00:28:34.613662 zram_generator::config[1064]: No configuration found. May 9 00:28:34.613680 systemd[1]: Populated /etc with preset unit settings. May 9 00:28:34.613692 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 9 00:28:34.613704 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 9 00:28:34.613716 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 9 00:28:34.613731 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 9 00:28:34.613742 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 9 00:28:34.613754 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 9 00:28:34.613766 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 9 00:28:34.613778 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 9 00:28:34.613793 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 9 00:28:34.613805 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 9 00:28:34.613817 systemd[1]: Created slice user.slice - User and Session Slice. May 9 00:28:34.613831 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 9 00:28:34.613843 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 9 00:28:34.613856 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 9 00:28:34.613867 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 9 00:28:34.613885 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 9 00:28:34.613897 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 9 00:28:34.613908 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 9 00:28:34.613921 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 9 00:28:34.613932 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 9 00:28:34.613947 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 9 00:28:34.613959 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 9 00:28:34.613971 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 9 00:28:34.613982 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 9 00:28:34.613994 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 9 00:28:34.614005 systemd[1]: Reached target slices.target - Slice Units. May 9 00:28:34.614017 systemd[1]: Reached target swap.target - Swaps. May 9 00:28:34.614029 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 9 00:28:34.614043 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 9 00:28:34.614058 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 9 00:28:34.614070 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 9 00:28:34.614082 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 9 00:28:34.614099 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. 
May 9 00:28:34.614121 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 9 00:28:34.614135 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 9 00:28:34.614147 systemd[1]: Mounting media.mount - External Media Directory... May 9 00:28:34.614159 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 9 00:28:34.614184 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 9 00:28:34.614197 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 9 00:28:34.614209 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 9 00:28:34.614221 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 9 00:28:34.614233 systemd[1]: Reached target machines.target - Containers. May 9 00:28:34.614251 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 9 00:28:34.614264 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 9 00:28:34.614275 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 9 00:28:34.614291 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 9 00:28:34.614303 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 9 00:28:34.614315 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 9 00:28:34.614326 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 9 00:28:34.614338 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 9 00:28:34.614354 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 9 00:28:34.614366 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 9 00:28:34.614378 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 9 00:28:34.614390 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 9 00:28:34.614405 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 9 00:28:34.614416 systemd[1]: Stopped systemd-fsck-usr.service. May 9 00:28:34.614428 kernel: loop: module loaded May 9 00:28:34.614441 systemd[1]: Starting systemd-journald.service - Journal Service... May 9 00:28:34.614454 kernel: fuse: init (API version 7.39) May 9 00:28:34.614467 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 9 00:28:34.614479 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 9 00:28:34.614491 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 9 00:28:34.614523 systemd-journald[1138]: Collecting audit messages is disabled. May 9 00:28:34.614547 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 9 00:28:34.614559 kernel: ACPI: bus type drm_connector registered May 9 00:28:34.614571 systemd-journald[1138]: Journal started May 9 00:28:34.615962 systemd-journald[1138]: Runtime Journal (/run/log/journal/1b51474a927c485295ca47bd73893d92) is 6.0M, max 48.3M, 42.2M free. May 9 00:28:34.616005 systemd[1]: verity-setup.service: Deactivated successfully. 
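The modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop services being started here are instances of systemd's modprobe@.service template unit, which simply runs modprobe on the instance name. Equivalent manual invocations, for reference:

    systemctl start modprobe@fuse.service   # same effect as: modprobe fuse
    lsmod | grep -w fuse                    # confirm the module is loaded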
May 9 00:28:34.616029 systemd[1]: Stopped verity-setup.service. May 9 00:28:34.396026 systemd[1]: Queued start job for default target multi-user.target. May 9 00:28:34.415614 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 9 00:28:34.416055 systemd[1]: systemd-journald.service: Deactivated successfully. May 9 00:28:34.619645 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 9 00:28:34.628626 systemd[1]: Started systemd-journald.service - Journal Service. May 9 00:28:34.630209 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 9 00:28:34.631395 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 9 00:28:34.632907 systemd[1]: Mounted media.mount - External Media Directory. May 9 00:28:34.634010 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 9 00:28:34.635262 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 9 00:28:34.636484 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 9 00:28:34.637780 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 9 00:28:34.639228 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 9 00:28:34.640802 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 9 00:28:34.640977 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 9 00:28:34.642463 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 9 00:28:34.642647 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 9 00:28:34.644091 systemd[1]: modprobe@drm.service: Deactivated successfully. May 9 00:28:34.644271 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 9 00:28:34.645703 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 9 00:28:34.645870 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 9 00:28:34.647412 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 9 00:28:34.647634 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 9 00:28:34.649041 systemd[1]: modprobe@loop.service: Deactivated successfully. May 9 00:28:34.649207 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 9 00:28:34.650598 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 9 00:28:34.652003 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 9 00:28:34.653543 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 9 00:28:34.669439 systemd[1]: Reached target network-pre.target - Preparation for Network. May 9 00:28:34.678677 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 9 00:28:34.680891 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 9 00:28:34.682035 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 9 00:28:34.682067 systemd[1]: Reached target local-fs.target - Local File Systems. May 9 00:28:34.684075 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). May 9 00:28:34.686372 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
May 9 00:28:34.691500 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 9 00:28:34.692712 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 9 00:28:34.694884 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 9 00:28:34.701485 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 9 00:28:34.702820 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 9 00:28:34.707012 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 9 00:28:34.708190 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 9 00:28:34.710589 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 9 00:28:34.710817 systemd-journald[1138]: Time spent on flushing to /var/log/journal/1b51474a927c485295ca47bd73893d92 is 14.871ms for 993 entries. May 9 00:28:34.710817 systemd-journald[1138]: System Journal (/var/log/journal/1b51474a927c485295ca47bd73893d92) is 8.0M, max 195.6M, 187.6M free. May 9 00:28:34.735321 systemd-journald[1138]: Received client request to flush runtime journal. May 9 00:28:34.714067 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 9 00:28:34.717220 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 9 00:28:34.718927 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 9 00:28:34.720405 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 9 00:28:34.722045 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 9 00:28:34.738073 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 9 00:28:34.742728 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 9 00:28:34.748918 kernel: loop0: detected capacity change from 0 to 205544 May 9 00:28:34.754956 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 9 00:28:34.756980 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 9 00:28:34.759261 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 9 00:28:34.763790 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... May 9 00:28:34.765361 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 9 00:28:34.770934 udevadm[1188]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 9 00:28:34.774882 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 9 00:28:34.776993 systemd-tmpfiles[1179]: ACLs are not supported, ignoring. May 9 00:28:34.777012 systemd-tmpfiles[1179]: ACLs are not supported, ignoring. May 9 00:28:34.783683 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 9 00:28:34.791011 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 9 00:28:34.795033 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
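systemd-journal-flush.service above is what moves the runtime journal out of /run/log/journal and into the persistent /var/log/journal location whose size limits journald prints here. The same flush can be requested, and the result inspected, by hand:

    journalctl --flush        # ask journald to flush /run → /var
    journalctl --disk-usage   # show current persistent + runtime journal usage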
May 9 00:28:34.795736 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. May 9 00:28:34.808608 kernel: loop1: detected capacity change from 0 to 142488 May 9 00:28:34.822836 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 9 00:28:34.828758 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 9 00:28:34.847615 kernel: loop2: detected capacity change from 0 to 140768 May 9 00:28:34.852564 systemd-tmpfiles[1201]: ACLs are not supported, ignoring. May 9 00:28:34.852641 systemd-tmpfiles[1201]: ACLs are not supported, ignoring. May 9 00:28:34.858805 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 9 00:28:34.884602 kernel: loop3: detected capacity change from 0 to 205544 May 9 00:28:34.893601 kernel: loop4: detected capacity change from 0 to 142488 May 9 00:28:34.906838 kernel: loop5: detected capacity change from 0 to 140768 May 9 00:28:34.915170 (sd-merge)[1205]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 9 00:28:34.915802 (sd-merge)[1205]: Merged extensions into '/usr'. May 9 00:28:34.919663 systemd[1]: Reloading requested from client PID 1178 ('systemd-sysext') (unit systemd-sysext.service)... May 9 00:28:34.919679 systemd[1]: Reloading... May 9 00:28:34.984615 zram_generator::config[1231]: No configuration found. May 9 00:28:35.042605 ldconfig[1173]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 9 00:28:35.115739 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 00:28:35.171125 systemd[1]: Reloading finished in 250 ms. May 9 00:28:35.204741 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 9 00:28:35.206782 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 9 00:28:35.220770 systemd[1]: Starting ensure-sysext.service... May 9 00:28:35.223169 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 9 00:28:35.232206 systemd[1]: Reloading requested from client PID 1268 ('systemctl') (unit ensure-sysext.service)... May 9 00:28:35.232235 systemd[1]: Reloading... May 9 00:28:35.248633 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 9 00:28:35.249025 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 9 00:28:35.250052 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 9 00:28:35.250384 systemd-tmpfiles[1270]: ACLs are not supported, ignoring. May 9 00:28:35.250471 systemd-tmpfiles[1270]: ACLs are not supported, ignoring. May 9 00:28:35.275882 systemd-tmpfiles[1270]: Detected autofs mount point /boot during canonicalization of boot. May 9 00:28:35.275897 systemd-tmpfiles[1270]: Skipping /boot May 9 00:28:35.288615 zram_generator::config[1298]: No configuration found. May 9 00:28:35.291748 systemd-tmpfiles[1270]: Detected autofs mount point /boot during canonicalization of boot. 
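The (sd-merge) lines show systemd-sysext picking up the three extension images linked into /etc/extensions (the kubernetes.raw link was written by Ignition earlier in this log) and overlaying them onto /usr, which is why a daemon reload follows. Useful inspection commands on such a system:

    systemd-sysext status    # list merged extensions and the hierarchies they cover
    systemd-sysext refresh   # re-merge after extension images are added or removed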
May 9 00:28:35.291763 systemd-tmpfiles[1270]: Skipping /boot May 9 00:28:35.401449 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 00:28:35.451072 systemd[1]: Reloading finished in 218 ms. May 9 00:28:35.468292 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 9 00:28:35.482004 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 9 00:28:35.490870 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 9 00:28:35.493478 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 9 00:28:35.496088 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 9 00:28:35.500957 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 9 00:28:35.506020 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 9 00:28:35.509110 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 9 00:28:35.513971 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 9 00:28:35.514153 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 9 00:28:35.518041 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 9 00:28:35.523799 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 9 00:28:35.529434 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 9 00:28:35.530702 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 9 00:28:35.533395 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 9 00:28:35.534638 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 9 00:28:35.536480 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 9 00:28:35.536792 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 9 00:28:35.538656 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 9 00:28:35.540763 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 9 00:28:35.540944 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 9 00:28:35.541679 systemd-udevd[1342]: Using default interface naming scheme 'v255'. May 9 00:28:35.543140 systemd[1]: modprobe@loop.service: Deactivated successfully. May 9 00:28:35.543322 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 9 00:28:35.554510 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 9 00:28:35.555852 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 9 00:28:35.557353 augenrules[1364]: No rules May 9 00:28:35.565949 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 9 00:28:35.570535 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
May 9 00:28:35.576630 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 9 00:28:35.580256 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 9 00:28:35.584024 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 9 00:28:35.585122 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 9 00:28:35.586340 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 9 00:28:35.589330 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 9 00:28:35.591357 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 9 00:28:35.593534 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 9 00:28:35.593736 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 9 00:28:35.595910 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 9 00:28:35.596091 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 9 00:28:35.598071 systemd[1]: modprobe@loop.service: Deactivated successfully. May 9 00:28:35.598264 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 9 00:28:35.603954 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 9 00:28:35.605966 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 9 00:28:35.607942 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 9 00:28:35.637756 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 9 00:28:35.638765 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1387) May 9 00:28:35.638449 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 9 00:28:35.649513 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 9 00:28:35.655240 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 9 00:28:35.659084 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 9 00:28:35.669115 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 9 00:28:35.670407 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 9 00:28:35.674424 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 9 00:28:35.675547 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 9 00:28:35.675651 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 9 00:28:35.677717 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 9 00:28:35.677965 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 9 00:28:35.680264 systemd[1]: modprobe@drm.service: Deactivated successfully. May 9 00:28:35.680451 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
May 9 00:28:35.682507 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 9 00:28:35.682891 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 9 00:28:35.684941 systemd[1]: modprobe@loop.service: Deactivated successfully. May 9 00:28:35.685120 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 9 00:28:35.690270 systemd[1]: Finished ensure-sysext.service. May 9 00:28:35.709610 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 May 9 00:28:35.715918 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 9 00:28:35.717762 systemd-resolved[1340]: Positive Trust Anchors: May 9 00:28:35.718097 systemd-resolved[1340]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 9 00:28:35.718131 systemd-resolved[1340]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 9 00:28:35.719835 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 9 00:28:35.725467 systemd-resolved[1340]: Defaulting to hostname 'linux'. May 9 00:28:35.729603 kernel: ACPI: button: Power Button [PWRF] May 9 00:28:35.733861 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 9 00:28:35.735292 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 9 00:28:35.735364 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 9 00:28:35.740534 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 9 00:28:35.740684 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 May 9 00:28:35.741912 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 9 00:28:35.743435 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 9 00:28:35.765841 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 9 00:28:35.767792 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 9 00:28:35.792456 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device May 9 00:28:35.792768 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 9 00:28:35.792949 kernel: mousedev: PS/2 mouse device common for all mice May 9 00:28:35.796198 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) May 9 00:28:35.796446 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 9 00:28:35.798837 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 9 00:28:35.799106 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
May 9 00:28:35.804525 systemd-networkd[1412]: lo: Link UP May 9 00:28:35.804539 systemd-networkd[1412]: lo: Gained carrier May 9 00:28:35.806276 systemd-networkd[1412]: Enumeration completed May 9 00:28:35.807743 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 9 00:28:35.808975 systemd[1]: Started systemd-networkd.service - Network Configuration. May 9 00:28:35.810073 systemd-networkd[1412]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 9 00:28:35.810086 systemd-networkd[1412]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 9 00:28:35.810347 systemd[1]: Reached target network.target - Network. May 9 00:28:35.813799 systemd-networkd[1412]: eth0: Link UP May 9 00:28:35.813809 systemd-networkd[1412]: eth0: Gained carrier May 9 00:28:35.813822 systemd-networkd[1412]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 9 00:28:35.813988 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 9 00:28:35.840106 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 9 00:28:35.840669 systemd-networkd[1412]: eth0: DHCPv4 address 10.0.0.48/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 9 00:28:35.841546 systemd[1]: Reached target time-set.target - System Time Set. May 9 00:28:35.842144 systemd-timesyncd[1424]: Network configuration changed, trying to establish connection. May 9 00:28:36.997453 systemd-resolved[1340]: Clock change detected. Flushing caches. May 9 00:28:36.997551 systemd-timesyncd[1424]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 9 00:28:36.997600 systemd-timesyncd[1424]: Initial clock synchronization to Fri 2025-05-09 00:28:36.997345 UTC. May 9 00:28:37.033091 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 9 00:28:37.056026 kernel: kvm_amd: TSC scaling supported May 9 00:28:37.056065 kernel: kvm_amd: Nested Virtualization enabled May 9 00:28:37.056079 kernel: kvm_amd: Nested Paging enabled May 9 00:28:37.057017 kernel: kvm_amd: LBR virtualization supported May 9 00:28:37.057032 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported May 9 00:28:37.058042 kernel: kvm_amd: Virtual GIF supported May 9 00:28:37.076909 kernel: EDAC MC: Ver: 3.0.0 May 9 00:28:37.107163 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 9 00:28:37.121047 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 9 00:28:37.130528 lvm[1444]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 9 00:28:37.157350 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 9 00:28:37.158959 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 9 00:28:37.160152 systemd[1]: Reached target sysinit.target - System Initialization. May 9 00:28:37.161386 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 9 00:28:37.162719 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 9 00:28:37.164236 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 9 00:28:37.165487 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
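The clock-change lines above record systemd-timesyncd's first successful NTP exchange with 10.0.0.1 (supplied via DHCP) and systemd-resolved flushing its caches in response to the resulting step of the system clock. The synchronization state can be inspected afterwards with:

    timedatectl timesync-status   # server, poll interval, measured offset
    timedatectl status            # "System clock synchronized: yes"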
May 9 00:28:37.166813 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 9 00:28:37.168109 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 9 00:28:37.168139 systemd[1]: Reached target paths.target - Path Units. May 9 00:28:37.169104 systemd[1]: Reached target timers.target - Timer Units. May 9 00:28:37.170944 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 9 00:28:37.173743 systemd[1]: Starting docker.socket - Docker Socket for the API... May 9 00:28:37.183349 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 9 00:28:37.185759 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 9 00:28:37.187370 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 9 00:28:37.188624 systemd[1]: Reached target sockets.target - Socket Units. May 9 00:28:37.189639 systemd[1]: Reached target basic.target - Basic System. May 9 00:28:37.190678 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 9 00:28:37.190704 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 9 00:28:37.191712 systemd[1]: Starting containerd.service - containerd container runtime... May 9 00:28:37.193897 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 9 00:28:37.195975 lvm[1448]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 9 00:28:37.198057 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 9 00:28:37.200275 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 9 00:28:37.201647 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 9 00:28:37.204041 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 9 00:28:37.207056 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 9 00:28:37.209170 jq[1451]: false May 9 00:28:37.213527 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 9 00:28:37.218676 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 9 00:28:37.224000 systemd[1]: Starting systemd-logind.service - User Login Management... May 9 00:28:37.225621 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 9 00:28:37.226127 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 9 00:28:37.228451 systemd[1]: Starting update-engine.service - Update Engine... May 9 00:28:37.232069 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 9 00:28:37.234363 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 9 00:28:37.234788 dbus-daemon[1450]: [system] SELinux support is enabled May 9 00:28:37.235828 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
May 9 00:28:37.236668 extend-filesystems[1452]: Found loop3
May 9 00:28:37.239411 extend-filesystems[1452]: Found loop4
May 9 00:28:37.239411 extend-filesystems[1452]: Found loop5
May 9 00:28:37.239411 extend-filesystems[1452]: Found sr0
May 9 00:28:37.239411 extend-filesystems[1452]: Found vda
May 9 00:28:37.239411 extend-filesystems[1452]: Found vda1
May 9 00:28:37.239411 extend-filesystems[1452]: Found vda2
May 9 00:28:37.239411 extend-filesystems[1452]: Found vda3
May 9 00:28:37.239411 extend-filesystems[1452]: Found usr
May 9 00:28:37.239411 extend-filesystems[1452]: Found vda4
May 9 00:28:37.239411 extend-filesystems[1452]: Found vda6
May 9 00:28:37.239411 extend-filesystems[1452]: Found vda7
May 9 00:28:37.239411 extend-filesystems[1452]: Found vda9
May 9 00:28:37.239411 extend-filesystems[1452]: Checking size of /dev/vda9
May 9 00:28:37.255180 update_engine[1464]: I20250509 00:28:37.250244 1464 main.cc:92] Flatcar Update Engine starting
May 9 00:28:37.255180 update_engine[1464]: I20250509 00:28:37.251528 1464 update_check_scheduler.cc:74] Next update check in 5m6s
May 9 00:28:37.255334 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 9 00:28:37.255574 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 9 00:28:37.255966 systemd[1]: motdgen.service: Deactivated successfully.
May 9 00:28:37.256172 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 9 00:28:37.258792 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 9 00:28:37.259061 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 9 00:28:37.269465 jq[1466]: true
May 9 00:28:37.275106 extend-filesystems[1452]: Resized partition /dev/vda9
May 9 00:28:37.276144 extend-filesystems[1482]: resize2fs 1.47.1 (20-May-2024)
May 9 00:28:37.282588 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 9 00:28:37.282626 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1387)
May 9 00:28:37.275487 (ntainerd)[1475]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 9 00:28:37.282706 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 9 00:28:37.282735 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 9 00:28:37.284315 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 9 00:28:37.284331 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 9 00:28:37.286148 systemd[1]: Started update-engine.service - Update Engine.
May 9 00:28:37.290048 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 9 00:28:37.303465 systemd-logind[1463]: Watching system buttons on /dev/input/event1 (Power Button)
May 9 00:28:37.308040 systemd-logind[1463]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 9 00:28:37.309159 systemd-logind[1463]: New seat seat0.
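Note: the extend-filesystems run above grows the root partition and then resizes the mounted ext4 filesystem online (the resize completes just below). Done by hand under the device layout in this log, the equivalent would be roughly the following sketch (growpart comes from cloud-utils and is an assumed install, not part of the base image):

    # Manual sketch of what extend-filesystems.service automates; /dev/vda9 is
    # the root partition in this log, adjust for other layouts.
    sudo growpart /dev/vda 9    # grow the partition table entry to fill the disk
    sudo resize2fs /dev/vda9    # online ext4 grow: 553472 -> 1864699 4k blocks here
    df -h /                     # confirm the larger filesystem is visible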
May 9 00:28:37.314248 tar[1474]: linux-amd64/helm May 9 00:28:37.314477 jq[1483]: true May 9 00:28:37.329617 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 9 00:28:37.326216 systemd[1]: Started systemd-logind.service - User Login Management. May 9 00:28:37.344116 locksmithd[1486]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 9 00:28:37.357286 extend-filesystems[1482]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 9 00:28:37.357286 extend-filesystems[1482]: old_desc_blocks = 1, new_desc_blocks = 1 May 9 00:28:37.357286 extend-filesystems[1482]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 9 00:28:37.361728 extend-filesystems[1452]: Resized filesystem in /dev/vda9 May 9 00:28:37.361377 systemd[1]: extend-filesystems.service: Deactivated successfully. May 9 00:28:37.361656 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 9 00:28:37.365697 bash[1508]: Updated "/home/core/.ssh/authorized_keys" May 9 00:28:37.368194 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 9 00:28:37.371352 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 9 00:28:37.486727 containerd[1475]: time="2025-05-09T00:28:37.486616474Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 May 9 00:28:37.513126 containerd[1475]: time="2025-05-09T00:28:37.513056779Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 9 00:28:37.515228 containerd[1475]: time="2025-05-09T00:28:37.515186543Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 9 00:28:37.515371 containerd[1475]: time="2025-05-09T00:28:37.515356743Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 9 00:28:37.515424 containerd[1475]: time="2025-05-09T00:28:37.515410574Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 9 00:28:37.515689 containerd[1475]: time="2025-05-09T00:28:37.515671213Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 9 00:28:37.515759 containerd[1475]: time="2025-05-09T00:28:37.515745021Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 9 00:28:37.515879 containerd[1475]: time="2025-05-09T00:28:37.515861680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 9 00:28:37.515948 containerd[1475]: time="2025-05-09T00:28:37.515934346Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 9 00:28:37.516224 containerd[1475]: time="2025-05-09T00:28:37.516202950Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 9 00:28:37.516290 containerd[1475]: time="2025-05-09T00:28:37.516277079Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 9 00:28:37.516352 containerd[1475]: time="2025-05-09T00:28:37.516336340Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 9 00:28:37.516396 containerd[1475]: time="2025-05-09T00:28:37.516384180Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 9 00:28:37.516555 containerd[1475]: time="2025-05-09T00:28:37.516537598Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 9 00:28:37.516909 containerd[1475]: time="2025-05-09T00:28:37.516870482Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 9 00:28:37.517140 containerd[1475]: time="2025-05-09T00:28:37.517119128Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 9 00:28:37.517195 containerd[1475]: time="2025-05-09T00:28:37.517182407Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 9 00:28:37.517337 containerd[1475]: time="2025-05-09T00:28:37.517321197Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 9 00:28:37.517476 containerd[1475]: time="2025-05-09T00:28:37.517438247Z" level=info msg="metadata content store policy set" policy=shared May 9 00:28:37.523309 containerd[1475]: time="2025-05-09T00:28:37.523288520Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 9 00:28:37.525379 containerd[1475]: time="2025-05-09T00:28:37.523377277Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 9 00:28:37.525379 containerd[1475]: time="2025-05-09T00:28:37.523396954Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 9 00:28:37.525379 containerd[1475]: time="2025-05-09T00:28:37.523428643Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 9 00:28:37.525379 containerd[1475]: time="2025-05-09T00:28:37.523447028Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 9 00:28:37.525379 containerd[1475]: time="2025-05-09T00:28:37.523586750Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 9 00:28:37.525379 containerd[1475]: time="2025-05-09T00:28:37.523794309Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 9 00:28:37.525379 containerd[1475]: time="2025-05-09T00:28:37.523922189Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 May 9 00:28:37.525379 containerd[1475]: time="2025-05-09T00:28:37.523937728Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 9 00:28:37.525379 containerd[1475]: time="2025-05-09T00:28:37.523961904Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 9 00:28:37.525379 containerd[1475]: time="2025-05-09T00:28:37.523980228Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 9 00:28:37.525379 containerd[1475]: time="2025-05-09T00:28:37.523994825Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 9 00:28:37.525379 containerd[1475]: time="2025-05-09T00:28:37.524007800Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 9 00:28:37.525379 containerd[1475]: time="2025-05-09T00:28:37.524020634Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 9 00:28:37.525379 containerd[1475]: time="2025-05-09T00:28:37.524032997Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 9 00:28:37.525680 containerd[1475]: time="2025-05-09T00:28:37.524045320Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 9 00:28:37.525680 containerd[1475]: time="2025-05-09T00:28:37.524057443Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 9 00:28:37.525680 containerd[1475]: time="2025-05-09T00:28:37.524068784Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 9 00:28:37.525680 containerd[1475]: time="2025-05-09T00:28:37.524087619Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 9 00:28:37.525680 containerd[1475]: time="2025-05-09T00:28:37.524100233Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 9 00:28:37.525680 containerd[1475]: time="2025-05-09T00:28:37.524111915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 9 00:28:37.525680 containerd[1475]: time="2025-05-09T00:28:37.524134267Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 9 00:28:37.525680 containerd[1475]: time="2025-05-09T00:28:37.524147361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 9 00:28:37.525680 containerd[1475]: time="2025-05-09T00:28:37.524160336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 9 00:28:37.525680 containerd[1475]: time="2025-05-09T00:28:37.524171587Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 9 00:28:37.525680 containerd[1475]: time="2025-05-09T00:28:37.524188959Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 9 00:28:37.525680 containerd[1475]: time="2025-05-09T00:28:37.524204228Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 May 9 00:28:37.525680 containerd[1475]: time="2025-05-09T00:28:37.524217393Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 9 00:28:37.525680 containerd[1475]: time="2025-05-09T00:28:37.524229395Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 9 00:28:37.525960 containerd[1475]: time="2025-05-09T00:28:37.524245205Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 9 00:28:37.525960 containerd[1475]: time="2025-05-09T00:28:37.524256616Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 9 00:28:37.525960 containerd[1475]: time="2025-05-09T00:28:37.524270052Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 9 00:28:37.525960 containerd[1475]: time="2025-05-09T00:28:37.524288496Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 9 00:28:37.525960 containerd[1475]: time="2025-05-09T00:28:37.524299667Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 9 00:28:37.525960 containerd[1475]: time="2025-05-09T00:28:37.524309536Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 9 00:28:37.525960 containerd[1475]: time="2025-05-09T00:28:37.524351394Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 9 00:28:37.525960 containerd[1475]: time="2025-05-09T00:28:37.524365531Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 9 00:28:37.525960 containerd[1475]: time="2025-05-09T00:28:37.524375219Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 9 00:28:37.525960 containerd[1475]: time="2025-05-09T00:28:37.524386750Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 9 00:28:37.525960 containerd[1475]: time="2025-05-09T00:28:37.524396328Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 9 00:28:37.525960 containerd[1475]: time="2025-05-09T00:28:37.524407559Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 9 00:28:37.525960 containerd[1475]: time="2025-05-09T00:28:37.524417879Z" level=info msg="NRI interface is disabled by configuration." May 9 00:28:37.525960 containerd[1475]: time="2025-05-09T00:28:37.524427567Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 9 00:28:37.526216 containerd[1475]: time="2025-05-09T00:28:37.524683788Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 9 00:28:37.526216 containerd[1475]: time="2025-05-09T00:28:37.524737999Z" level=info msg="Connect containerd service" May 9 00:28:37.526216 containerd[1475]: time="2025-05-09T00:28:37.524773897Z" level=info msg="using legacy CRI server" May 9 00:28:37.526216 containerd[1475]: time="2025-05-09T00:28:37.524780339Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 9 00:28:37.526216 containerd[1475]: time="2025-05-09T00:28:37.524880346Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 9 00:28:37.526216 containerd[1475]: time="2025-05-09T00:28:37.525495310Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 9 00:28:37.526216 
containerd[1475]: time="2025-05-09T00:28:37.525811393Z" level=info msg="Start subscribing containerd event" May 9 00:28:37.526216 containerd[1475]: time="2025-05-09T00:28:37.525872668Z" level=info msg="Start recovering state" May 9 00:28:37.526216 containerd[1475]: time="2025-05-09T00:28:37.525961765Z" level=info msg="Start event monitor" May 9 00:28:37.526216 containerd[1475]: time="2025-05-09T00:28:37.525984457Z" level=info msg="Start snapshots syncer" May 9 00:28:37.526216 containerd[1475]: time="2025-05-09T00:28:37.525994175Z" level=info msg="Start cni network conf syncer for default" May 9 00:28:37.526216 containerd[1475]: time="2025-05-09T00:28:37.526003152Z" level=info msg="Start streaming server" May 9 00:28:37.526540 containerd[1475]: time="2025-05-09T00:28:37.526383035Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 9 00:28:37.526540 containerd[1475]: time="2025-05-09T00:28:37.526446193Z" level=info msg=serving... address=/run/containerd/containerd.sock May 9 00:28:37.527232 systemd[1]: Started containerd.service - containerd container runtime. May 9 00:28:37.528931 containerd[1475]: time="2025-05-09T00:28:37.528727563Z" level=info msg="containerd successfully booted in 0.043297s" May 9 00:28:37.654577 sshd_keygen[1470]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 9 00:28:37.679143 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 9 00:28:37.686094 systemd[1]: Starting issuegen.service - Generate /run/issue... May 9 00:28:37.696009 systemd[1]: issuegen.service: Deactivated successfully. May 9 00:28:37.696285 systemd[1]: Finished issuegen.service - Generate /run/issue. May 9 00:28:37.704061 tar[1474]: linux-amd64/LICENSE May 9 00:28:37.704166 tar[1474]: linux-amd64/README.md May 9 00:28:37.705207 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 9 00:28:37.718408 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 9 00:28:37.720152 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 9 00:28:37.736290 systemd[1]: Started getty@tty1.service - Getty on tty1. May 9 00:28:37.738855 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 9 00:28:37.740166 systemd[1]: Reached target getty.target - Login Prompts. May 9 00:28:38.612075 systemd-networkd[1412]: eth0: Gained IPv6LL May 9 00:28:38.615615 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 9 00:28:38.617468 systemd[1]: Reached target network-online.target - Network is Online. May 9 00:28:38.628109 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 9 00:28:38.630597 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:28:38.632814 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 9 00:28:38.652538 systemd[1]: coreos-metadata.service: Deactivated successfully. May 9 00:28:38.652804 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 9 00:28:38.654464 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 9 00:28:38.658448 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 9 00:28:39.639765 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:28:39.641625 systemd[1]: Reached target multi-user.target - Multi-User System. 
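Note: the long CRI configuration dump above maps onto a containerd config.toml; the "failed to load cni during init" error is expected on a node where no CNI plugin has written a config into /etc/cni/net.d yet. A fragment reproducing the settings visible in this dump (illustrative, not Flatcar's generated file), plus a liveness check against the socket containerd just announced:

    # Fragment matching the dumped CRI config: overlayfs snapshotter, runc v2
    # with SystemdCgroup=true, CNI under /opt/cni/bin and /etc/cni/net.d,
    # sandbox image pause:3.8.
    cat <<'EOF' >/tmp/containerd-cri-fragment.toml
    version = 2
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"
      [plugins."io.containerd.grpc.v1.cri".containerd]
        snapshotter = "overlayfs"
        default_runtime_name = "runc"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true
      [plugins."io.containerd.grpc.v1.cri".cni]
        bin_dir = "/opt/cni/bin"
        conf_dir = "/etc/cni/net.d"
    EOF

    sudo ctr --address /run/containerd/containerd.sock version   # daemon responds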
May 9 00:28:39.643111 systemd[1]: Startup finished in 1.338s (kernel) + 5.184s (initrd) + 4.637s (userspace) = 11.160s.
May 9 00:28:39.648315 (kubelet)[1562]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 9 00:28:40.143298 kubelet[1562]: E0509 00:28:40.143237 1562 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 9 00:28:40.147608 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 9 00:28:40.147847 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 9 00:28:40.148267 systemd[1]: kubelet.service: Consumed 1.395s CPU time.
May 9 00:28:42.871599 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 9 00:28:42.873081 systemd[1]: Started sshd@0-10.0.0.48:22-10.0.0.1:50546.service - OpenSSH per-connection server daemon (10.0.0.1:50546).
May 9 00:28:43.050180 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 50546 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU
May 9 00:28:43.052164 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:28:43.061351 systemd-logind[1463]: New session 1 of user core.
May 9 00:28:43.062801 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 9 00:28:43.072122 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 9 00:28:43.086231 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 9 00:28:43.095397 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 9 00:28:43.101204 (systemd)[1579]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 9 00:28:43.207303 systemd[1579]: Queued start job for default target default.target.
May 9 00:28:43.216268 systemd[1579]: Created slice app.slice - User Application Slice.
May 9 00:28:43.216295 systemd[1579]: Reached target paths.target - Paths.
May 9 00:28:43.216309 systemd[1579]: Reached target timers.target - Timers.
May 9 00:28:43.217986 systemd[1579]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 9 00:28:43.229401 systemd[1579]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 9 00:28:43.229538 systemd[1579]: Reached target sockets.target - Sockets.
May 9 00:28:43.229558 systemd[1579]: Reached target basic.target - Basic System.
May 9 00:28:43.229598 systemd[1579]: Reached target default.target - Main User Target.
May 9 00:28:43.229633 systemd[1579]: Startup finished in 120ms.
May 9 00:28:43.230532 systemd[1]: Started user@500.service - User Manager for UID 500.
May 9 00:28:43.232954 systemd[1]: Started session-1.scope - Session 1 of User core.
May 9 00:28:43.297101 systemd[1]: Started sshd@1-10.0.0.48:22-10.0.0.1:50562.service - OpenSSH per-connection server daemon (10.0.0.1:50562).
May 9 00:28:43.327426 sshd[1590]: Accepted publickey for core from 10.0.0.1 port 50562 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU
May 9 00:28:43.328986 sshd[1590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:28:43.333256 systemd-logind[1463]: New session 2 of user core.
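Note: the kubelet exit earlier in this block is the standard pre-bootstrap failure: /var/lib/kubelet/config.yaml is written by kubeadm init/join and simply does not exist yet, so systemd keeps restarting the unit until it appears. For illustration only, a minimal hand-written KubeletConfiguration of the kind that file contains (values assumed, not what kubeadm would actually generate):

    # Minimal stand-in for the missing file; kubeadm normally generates it and
    # a node also needs a kubeconfig before the kubelet is fully functional.
    sudo mkdir -p /var/lib/kubelet
    cat <<'EOF' | sudo tee /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd    # matches SystemdCgroup=true in the containerd config above
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    EOF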
May 9 00:28:43.348091 systemd[1]: Started session-2.scope - Session 2 of User core. May 9 00:28:43.401394 sshd[1590]: pam_unix(sshd:session): session closed for user core May 9 00:28:43.418490 systemd[1]: sshd@1-10.0.0.48:22-10.0.0.1:50562.service: Deactivated successfully. May 9 00:28:43.420098 systemd[1]: session-2.scope: Deactivated successfully. May 9 00:28:43.421702 systemd-logind[1463]: Session 2 logged out. Waiting for processes to exit. May 9 00:28:43.423018 systemd[1]: Started sshd@2-10.0.0.48:22-10.0.0.1:50564.service - OpenSSH per-connection server daemon (10.0.0.1:50564). May 9 00:28:43.423730 systemd-logind[1463]: Removed session 2. May 9 00:28:43.453479 sshd[1597]: Accepted publickey for core from 10.0.0.1 port 50564 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU May 9 00:28:43.455002 sshd[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:28:43.458635 systemd-logind[1463]: New session 3 of user core. May 9 00:28:43.466998 systemd[1]: Started session-3.scope - Session 3 of User core. May 9 00:28:43.517568 sshd[1597]: pam_unix(sshd:session): session closed for user core May 9 00:28:43.529631 systemd[1]: sshd@2-10.0.0.48:22-10.0.0.1:50564.service: Deactivated successfully. May 9 00:28:43.531500 systemd[1]: session-3.scope: Deactivated successfully. May 9 00:28:43.533195 systemd-logind[1463]: Session 3 logged out. Waiting for processes to exit. May 9 00:28:43.542138 systemd[1]: Started sshd@3-10.0.0.48:22-10.0.0.1:50566.service - OpenSSH per-connection server daemon (10.0.0.1:50566). May 9 00:28:43.543007 systemd-logind[1463]: Removed session 3. May 9 00:28:43.568449 sshd[1604]: Accepted publickey for core from 10.0.0.1 port 50566 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU May 9 00:28:43.570119 sshd[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:28:43.573875 systemd-logind[1463]: New session 4 of user core. May 9 00:28:43.584029 systemd[1]: Started session-4.scope - Session 4 of User core. May 9 00:28:43.637631 sshd[1604]: pam_unix(sshd:session): session closed for user core May 9 00:28:43.652592 systemd[1]: sshd@3-10.0.0.48:22-10.0.0.1:50566.service: Deactivated successfully. May 9 00:28:43.654489 systemd[1]: session-4.scope: Deactivated successfully. May 9 00:28:43.655827 systemd-logind[1463]: Session 4 logged out. Waiting for processes to exit. May 9 00:28:43.657084 systemd[1]: Started sshd@4-10.0.0.48:22-10.0.0.1:50578.service - OpenSSH per-connection server daemon (10.0.0.1:50578). May 9 00:28:43.657750 systemd-logind[1463]: Removed session 4. May 9 00:28:43.698657 sshd[1611]: Accepted publickey for core from 10.0.0.1 port 50578 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU May 9 00:28:43.700261 sshd[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:28:43.704215 systemd-logind[1463]: New session 5 of user core. May 9 00:28:43.714014 systemd[1]: Started session-5.scope - Session 5 of User core. May 9 00:28:43.772230 sudo[1614]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 9 00:28:43.772606 sudo[1614]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 00:28:43.792136 sudo[1614]: pam_unix(sudo:session): session closed for user root May 9 00:28:43.794244 sshd[1611]: pam_unix(sshd:session): session closed for user core May 9 00:28:43.808486 systemd[1]: sshd@4-10.0.0.48:22-10.0.0.1:50578.service: Deactivated successfully. 
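Note: the per-connection sshd@N-10.0.0.48:22-… units above are the signature of socket-activated OpenSSH, where sshd.socket (typically with Accept=yes) spawns one ephemeral service per incoming connection instead of running a long-lived daemon. To see the wiring on a machine like this:

    systemctl cat sshd.socket        # ListenStream=22 and the Accept= setting
    systemctl list-units 'sshd@*'    # one transient unit per open connection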
May 9 00:28:43.810094 systemd[1]: session-5.scope: Deactivated successfully. May 9 00:28:43.811694 systemd-logind[1463]: Session 5 logged out. Waiting for processes to exit. May 9 00:28:43.822137 systemd[1]: Started sshd@5-10.0.0.48:22-10.0.0.1:50590.service - OpenSSH per-connection server daemon (10.0.0.1:50590). May 9 00:28:43.823139 systemd-logind[1463]: Removed session 5. May 9 00:28:43.848646 sshd[1619]: Accepted publickey for core from 10.0.0.1 port 50590 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU May 9 00:28:43.850062 sshd[1619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:28:43.854473 systemd-logind[1463]: New session 6 of user core. May 9 00:28:43.867013 systemd[1]: Started session-6.scope - Session 6 of User core. May 9 00:28:43.922848 sudo[1623]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 9 00:28:43.923211 sudo[1623]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 00:28:43.927058 sudo[1623]: pam_unix(sudo:session): session closed for user root May 9 00:28:43.935475 sudo[1622]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 9 00:28:43.935953 sudo[1622]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 00:28:43.958222 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 9 00:28:43.959937 auditctl[1626]: No rules May 9 00:28:43.961339 systemd[1]: audit-rules.service: Deactivated successfully. May 9 00:28:43.961619 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 9 00:28:43.963595 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 9 00:28:43.999578 augenrules[1644]: No rules May 9 00:28:44.001471 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 9 00:28:44.002769 sudo[1622]: pam_unix(sudo:session): session closed for user root May 9 00:28:44.005019 sshd[1619]: pam_unix(sshd:session): session closed for user core May 9 00:28:44.019195 systemd[1]: sshd@5-10.0.0.48:22-10.0.0.1:50590.service: Deactivated successfully. May 9 00:28:44.021076 systemd[1]: session-6.scope: Deactivated successfully. May 9 00:28:44.022540 systemd-logind[1463]: Session 6 logged out. Waiting for processes to exit. May 9 00:28:44.023838 systemd[1]: Started sshd@6-10.0.0.48:22-10.0.0.1:50596.service - OpenSSH per-connection server daemon (10.0.0.1:50596). May 9 00:28:44.024869 systemd-logind[1463]: Removed session 6. May 9 00:28:44.054228 sshd[1652]: Accepted publickey for core from 10.0.0.1 port 50596 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU May 9 00:28:44.055671 sshd[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:28:44.059772 systemd-logind[1463]: New session 7 of user core. May 9 00:28:44.066001 systemd[1]: Started session-7.scope - Session 7 of User core. May 9 00:28:44.121144 sudo[1655]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 9 00:28:44.121511 sudo[1655]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 00:28:45.148117 systemd[1]: Starting docker.service - Docker Application Container Engine... 
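Note: the audit-rules stop/start sequence above (auditctl reporting "No rules", then augenrules) is the standard flush-and-reload cycle after rule files are removed from /etc/audit/rules.d. The same steps by hand:

    sudo auditctl -D          # flush loaded rules; reports "No rules" when empty
    sudo augenrules --load    # recompile /etc/audit/rules.d/*.rules and load them
    sudo auditctl -l          # list the rule set now in effect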
May 9 00:28:45.148368 (dockerd)[1673]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 9 00:28:45.918839 dockerd[1673]: time="2025-05-09T00:28:45.918736802Z" level=info msg="Starting up" May 9 00:28:46.368770 dockerd[1673]: time="2025-05-09T00:28:46.368642927Z" level=info msg="Loading containers: start." May 9 00:28:46.551918 kernel: Initializing XFRM netlink socket May 9 00:28:46.726622 systemd-networkd[1412]: docker0: Link UP May 9 00:28:46.751566 dockerd[1673]: time="2025-05-09T00:28:46.751515096Z" level=info msg="Loading containers: done." May 9 00:28:46.773800 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck518160335-merged.mount: Deactivated successfully. May 9 00:28:46.776121 dockerd[1673]: time="2025-05-09T00:28:46.776058993Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 9 00:28:46.776230 dockerd[1673]: time="2025-05-09T00:28:46.776210527Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 May 9 00:28:46.776416 dockerd[1673]: time="2025-05-09T00:28:46.776387569Z" level=info msg="Daemon has completed initialization" May 9 00:28:46.824998 dockerd[1673]: time="2025-05-09T00:28:46.823980380Z" level=info msg="API listen on /run/docker.sock" May 9 00:28:46.824697 systemd[1]: Started docker.service - Docker Application Container Engine. May 9 00:28:47.846967 containerd[1475]: time="2025-05-09T00:28:47.846918239Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" May 9 00:28:48.550014 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1706111159.mount: Deactivated successfully. 
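Note: the "Not using native diff for overlay2" warning above is informational, not a misconfiguration: with CONFIG_OVERLAY_FS_REDIRECT_DIR enabled in the kernel, dockerd deliberately falls back to the slower but safe diff path, trading image-build speed for correctness. To confirm the driver state on such a host:

    docker info --format '{{.Driver}}'         # overlay2
    docker info | grep -A5 'Storage Driver'    # includes "Native Overlay Diff: false"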
May 9 00:28:49.996029 containerd[1475]: time="2025-05-09T00:28:49.995949187Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:28:49.996602 containerd[1475]: time="2025-05-09T00:28:49.996531609Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=27960987" May 9 00:28:49.999969 containerd[1475]: time="2025-05-09T00:28:49.999922159Z" level=info msg="ImageCreate event name:\"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:28:50.002804 containerd[1475]: time="2025-05-09T00:28:50.002766043Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:28:50.003969 containerd[1475]: time="2025-05-09T00:28:50.003915980Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"27957787\" in 2.156949181s" May 9 00:28:50.004021 containerd[1475]: time="2025-05-09T00:28:50.003975131Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\"" May 9 00:28:50.005862 containerd[1475]: time="2025-05-09T00:28:50.005800725Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" May 9 00:28:50.388074 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 9 00:28:50.393039 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:28:50.559529 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:28:50.563962 (kubelet)[1884]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 9 00:28:50.850342 kubelet[1884]: E0509 00:28:50.850106 1884 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 9 00:28:50.856974 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 9 00:28:50.857206 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
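Note: the image pulls in this log are issued through the CRI. Reproducing one by hand with crictl (not shipped in the base image; an assumed install) against the same containerd endpoint:

    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
        pull registry.k8s.io/kube-apiserver:v1.31.8
    sudo crictl images | grep kube-apiserver    # repo tag plus the sha256 image id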
May 9 00:28:52.062380 containerd[1475]: time="2025-05-09T00:28:52.062319497Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:28:52.063691 containerd[1475]: time="2025-05-09T00:28:52.063609787Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=24713776" May 9 00:28:52.065268 containerd[1475]: time="2025-05-09T00:28:52.065226710Z" level=info msg="ImageCreate event name:\"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:28:52.068249 containerd[1475]: time="2025-05-09T00:28:52.068191942Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:28:52.069236 containerd[1475]: time="2025-05-09T00:28:52.069205974Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"26202149\" in 2.063372918s" May 9 00:28:52.069287 containerd[1475]: time="2025-05-09T00:28:52.069238054Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\"" May 9 00:28:52.069839 containerd[1475]: time="2025-05-09T00:28:52.069800148Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" May 9 00:28:54.412145 containerd[1475]: time="2025-05-09T00:28:54.412052344Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:28:54.412904 containerd[1475]: time="2025-05-09T00:28:54.412815505Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=18780386" May 9 00:28:54.414468 containerd[1475]: time="2025-05-09T00:28:54.414436075Z" level=info msg="ImageCreate event name:\"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:28:54.417933 containerd[1475]: time="2025-05-09T00:28:54.417840271Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:28:54.419286 containerd[1475]: time="2025-05-09T00:28:54.419247640Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"20268777\" in 2.349416915s" May 9 00:28:54.419336 containerd[1475]: time="2025-05-09T00:28:54.419293126Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\"" May 9 00:28:54.420610 containerd[1475]: 
time="2025-05-09T00:28:54.420584157Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" May 9 00:28:55.577745 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1178646390.mount: Deactivated successfully. May 9 00:28:55.997020 containerd[1475]: time="2025-05-09T00:28:55.996959326Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:28:56.011436 containerd[1475]: time="2025-05-09T00:28:56.011371820Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=30354625" May 9 00:28:56.022392 containerd[1475]: time="2025-05-09T00:28:56.022340053Z" level=info msg="ImageCreate event name:\"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:28:56.030069 containerd[1475]: time="2025-05-09T00:28:56.030001765Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:28:56.030566 containerd[1475]: time="2025-05-09T00:28:56.030521930Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"30353644\" in 1.609905783s" May 9 00:28:56.030566 containerd[1475]: time="2025-05-09T00:28:56.030557667Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\"" May 9 00:28:56.031198 containerd[1475]: time="2025-05-09T00:28:56.031162201Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 9 00:28:57.179201 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1679828168.mount: Deactivated successfully. 
May 9 00:28:58.857557 containerd[1475]: time="2025-05-09T00:28:58.857484113Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:28:58.858268 containerd[1475]: time="2025-05-09T00:28:58.858205246Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" May 9 00:28:58.859624 containerd[1475]: time="2025-05-09T00:28:58.859502760Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:28:58.864523 containerd[1475]: time="2025-05-09T00:28:58.864463284Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:28:58.865543 containerd[1475]: time="2025-05-09T00:28:58.865480582Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.834271964s" May 9 00:28:58.865543 containerd[1475]: time="2025-05-09T00:28:58.865532370Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 9 00:28:58.866157 containerd[1475]: time="2025-05-09T00:28:58.866132756Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 9 00:28:59.386913 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4025771552.mount: Deactivated successfully. 
May 9 00:28:59.393039 containerd[1475]: time="2025-05-09T00:28:59.392987194Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:28:59.393683 containerd[1475]: time="2025-05-09T00:28:59.393616474Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 9 00:28:59.394793 containerd[1475]: time="2025-05-09T00:28:59.394761111Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:28:59.396922 containerd[1475]: time="2025-05-09T00:28:59.396877141Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:28:59.397617 containerd[1475]: time="2025-05-09T00:28:59.397575120Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 531.41354ms" May 9 00:28:59.397664 containerd[1475]: time="2025-05-09T00:28:59.397616417Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 9 00:28:59.398169 containerd[1475]: time="2025-05-09T00:28:59.398146331Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 9 00:28:59.914780 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount464567401.mount: Deactivated successfully. May 9 00:29:00.888329 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 9 00:29:00.899067 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:29:01.050634 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:29:01.056663 (kubelet)[2007]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 9 00:29:01.400705 kubelet[2007]: E0509 00:29:01.400652 2007 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 9 00:29:01.405066 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 9 00:29:01.405284 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
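Note: there is a small version skew here: containerd's CRI dump earlier advertises sandbox image pause:3.8, while the pull above fetches pause:3.10 (what Kubernetes 1.31 tooling preloads). The node works either way, but the skew leaves two pause images on disk; to check which one CRI will actually use for sandboxes:

    containerd config dump | grep sandbox_image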
May 9 00:29:02.466379 containerd[1475]: time="2025-05-09T00:29:02.466266548Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:29:02.468862 containerd[1475]: time="2025-05-09T00:29:02.468822462Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" May 9 00:29:02.475231 containerd[1475]: time="2025-05-09T00:29:02.475185447Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:29:02.480610 containerd[1475]: time="2025-05-09T00:29:02.480579885Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:29:02.481954 containerd[1475]: time="2025-05-09T00:29:02.481846501Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.083672098s" May 9 00:29:02.481954 containerd[1475]: time="2025-05-09T00:29:02.481952520Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" May 9 00:29:05.354392 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:29:05.365912 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:29:05.402295 systemd[1]: Reloading requested from client PID 2057 ('systemctl') (unit session-7.scope)... May 9 00:29:05.402502 systemd[1]: Reloading... May 9 00:29:05.520046 zram_generator::config[2099]: No configuration found. May 9 00:29:05.762108 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 00:29:05.850482 systemd[1]: Reloading finished in 446 ms. May 9 00:29:05.904070 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 9 00:29:05.904188 systemd[1]: kubelet.service: Failed with result 'signal'. May 9 00:29:05.904492 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:29:05.906395 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:29:06.163657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:29:06.169613 (kubelet)[2143]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 9 00:29:06.220808 kubelet[2143]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 9 00:29:06.220808 kubelet[2143]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
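Note: the deprecation warnings just below say the remaining kubelet flags should move into the config file. A sketch of that migration (the drop-in path follows kubeadm convention and is an assumption here): locate where the flag is injected, then express it as a KubeletConfiguration field instead. The dial tcp 10.0.0.48:6443 "connection refused" errors that follow are the expected bootstrap pattern: the kubelet is up before the API server it is trying to register with.

    grep -r -- --container-runtime-endpoint /etc/systemd/system/kubelet.service.d/ 2>/dev/null
    # then in /var/lib/kubelet/config.yaml add:
    #   containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    # and drop the flag before the next: systemctl daemon-reload && systemctl restart kubelet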
May 9 00:29:06.220808 kubelet[2143]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 9 00:29:06.221326 kubelet[2143]: I0509 00:29:06.220873 2143 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 9 00:29:06.473736 kubelet[2143]: I0509 00:29:06.473578 2143 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 9 00:29:06.473736 kubelet[2143]: I0509 00:29:06.473624 2143 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 9 00:29:06.473999 kubelet[2143]: I0509 00:29:06.473969 2143 server.go:929] "Client rotation is on, will bootstrap in background" May 9 00:29:06.499274 kubelet[2143]: I0509 00:29:06.499205 2143 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 9 00:29:06.500228 kubelet[2143]: E0509 00:29:06.500139 2143 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.48:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" May 9 00:29:06.507153 kubelet[2143]: E0509 00:29:06.507113 2143 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 9 00:29:06.507153 kubelet[2143]: I0509 00:29:06.507145 2143 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 9 00:29:06.513645 kubelet[2143]: I0509 00:29:06.513613 2143 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 9 00:29:06.514997 kubelet[2143]: I0509 00:29:06.514966 2143 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 9 00:29:06.515174 kubelet[2143]: I0509 00:29:06.515126 2143 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 9 00:29:06.515361 kubelet[2143]: I0509 00:29:06.515165 2143 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 9 00:29:06.515361 kubelet[2143]: I0509 00:29:06.515361 2143 topology_manager.go:138] "Creating topology manager with none policy" May 9 00:29:06.515473 kubelet[2143]: I0509 00:29:06.515370 2143 container_manager_linux.go:300] "Creating device plugin manager" May 9 00:29:06.515527 kubelet[2143]: I0509 00:29:06.515509 2143 state_mem.go:36] "Initialized new in-memory state store" May 9 00:29:06.582152 kubelet[2143]: I0509 00:29:06.582079 2143 kubelet.go:408] "Attempting to sync node with API server" May 9 00:29:06.582152 kubelet[2143]: I0509 00:29:06.582142 2143 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 9 00:29:06.582363 kubelet[2143]: I0509 00:29:06.582207 2143 kubelet.go:314] "Adding apiserver pod source" May 9 00:29:06.582363 kubelet[2143]: I0509 00:29:06.582229 2143 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 9 00:29:06.583496 kubelet[2143]: W0509 00:29:06.583263 2143 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.48:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused May 9 00:29:06.583496 kubelet[2143]: E0509 00:29:06.583350 2143 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.48:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" May 9 00:29:06.583496 kubelet[2143]: W0509 00:29:06.583437 2143 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused May 9 00:29:06.583496 kubelet[2143]: E0509 00:29:06.583472 2143 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" May 9 00:29:06.589697 kubelet[2143]: I0509 00:29:06.589628 2143 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 9 00:29:06.592148 kubelet[2143]: I0509 00:29:06.592096 2143 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 9 00:29:06.592313 kubelet[2143]: W0509 00:29:06.592219 2143 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 9 00:29:06.593319 kubelet[2143]: I0509 00:29:06.593213 2143 server.go:1269] "Started kubelet" May 9 00:29:06.594258 kubelet[2143]: I0509 00:29:06.593627 2143 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 9 00:29:06.594507 kubelet[2143]: I0509 00:29:06.594485 2143 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 9 00:29:06.594602 kubelet[2143]: I0509 00:29:06.594574 2143 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 9 00:29:06.595054 kubelet[2143]: I0509 00:29:06.595030 2143 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 9 00:29:06.596069 kubelet[2143]: I0509 00:29:06.595913 2143 server.go:460] "Adding debug handlers to kubelet server" May 9 00:29:06.597423 kubelet[2143]: I0509 00:29:06.597059 2143 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 9 00:29:06.598958 kubelet[2143]: I0509 00:29:06.598787 2143 volume_manager.go:289] "Starting Kubelet Volume Manager" May 9 00:29:06.598958 kubelet[2143]: I0509 00:29:06.598944 2143 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 9 00:29:06.599056 kubelet[2143]: I0509 00:29:06.599006 2143 reconciler.go:26] "Reconciler: start to sync state" May 9 00:29:06.599520 kubelet[2143]: W0509 00:29:06.599434 2143 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused May 9 00:29:06.599520 kubelet[2143]: E0509 00:29:06.599502 2143 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" May 9 00:29:06.599673 
kubelet[2143]: E0509 00:29:06.599605 2143 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:29:06.599778 kubelet[2143]: E0509 00:29:06.599724 2143 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.48:6443: connect: connection refused" interval="200ms" May 9 00:29:06.600851 kubelet[2143]: I0509 00:29:06.599990 2143 factory.go:221] Registration of the systemd container factory successfully May 9 00:29:06.600851 kubelet[2143]: E0509 00:29:06.600014 2143 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 9 00:29:06.600851 kubelet[2143]: I0509 00:29:06.600090 2143 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 9 00:29:06.601270 kubelet[2143]: I0509 00:29:06.601241 2143 factory.go:221] Registration of the containerd container factory successfully May 9 00:29:06.603241 kubelet[2143]: E0509 00:29:06.600610 2143 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.48:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.48:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183db45e10e7de7c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-09 00:29:06.593177212 +0000 UTC m=+0.409419475,LastTimestamp:2025-05-09 00:29:06.593177212 +0000 UTC m=+0.409419475,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 9 00:29:06.620078 kubelet[2143]: I0509 00:29:06.620046 2143 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 9 00:29:06.621546 kubelet[2143]: I0509 00:29:06.621462 2143 cpu_manager.go:214] "Starting CPU manager" policy="none" May 9 00:29:06.621546 kubelet[2143]: I0509 00:29:06.621486 2143 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 9 00:29:06.621546 kubelet[2143]: I0509 00:29:06.621508 2143 state_mem.go:36] "Initialized new in-memory state store" May 9 00:29:06.623049 kubelet[2143]: I0509 00:29:06.623009 2143 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 9 00:29:06.623049 kubelet[2143]: I0509 00:29:06.623040 2143 status_manager.go:217] "Starting to sync pod status with apiserver" May 9 00:29:06.623570 kubelet[2143]: I0509 00:29:06.623062 2143 kubelet.go:2321] "Starting kubelet main sync loop" May 9 00:29:06.623570 kubelet[2143]: E0509 00:29:06.623105 2143 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 9 00:29:06.624041 kubelet[2143]: W0509 00:29:06.624004 2143 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused May 9 00:29:06.624041 kubelet[2143]: E0509 00:29:06.624042 2143 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" May 9 00:29:06.699797 kubelet[2143]: E0509 00:29:06.699749 2143 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:29:06.724163 kubelet[2143]: E0509 00:29:06.724041 2143 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 9 00:29:06.800417 kubelet[2143]: E0509 00:29:06.800382 2143 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:29:06.800716 kubelet[2143]: E0509 00:29:06.800689 2143 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.48:6443: connect: connection refused" interval="400ms" May 9 00:29:06.901052 kubelet[2143]: E0509 00:29:06.900985 2143 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:29:06.925161 kubelet[2143]: E0509 00:29:06.925125 2143 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 9 00:29:07.001814 kubelet[2143]: E0509 00:29:07.001643 2143 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:29:07.102449 kubelet[2143]: E0509 00:29:07.102415 2143 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:29:07.202209 kubelet[2143]: E0509 00:29:07.202163 2143 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.48:6443: connect: connection refused" interval="800ms" May 9 00:29:07.203218 kubelet[2143]: E0509 00:29:07.203193 2143 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:29:07.303737 kubelet[2143]: E0509 00:29:07.303609 2143 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:29:07.325876 kubelet[2143]: E0509 00:29:07.325818 2143 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" 
May 9 00:29:07.404278 kubelet[2143]: E0509 00:29:07.404225 2143 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:29:07.504863 kubelet[2143]: E0509 00:29:07.504786 2143 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:29:07.507668 kubelet[2143]: I0509 00:29:07.507645 2143 policy_none.go:49] "None policy: Start" May 9 00:29:07.508644 kubelet[2143]: I0509 00:29:07.508602 2143 memory_manager.go:170] "Starting memorymanager" policy="None" May 9 00:29:07.508707 kubelet[2143]: I0509 00:29:07.508662 2143 state_mem.go:35] "Initializing new in-memory state store" May 9 00:29:07.523063 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 9 00:29:07.524063 kubelet[2143]: W0509 00:29:07.524005 2143 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused May 9 00:29:07.524116 kubelet[2143]: E0509 00:29:07.524071 2143 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" May 9 00:29:07.535957 kubelet[2143]: W0509 00:29:07.535900 2143 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused May 9 00:29:07.536041 kubelet[2143]: E0509 00:29:07.535960 2143 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" May 9 00:29:07.536323 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 9 00:29:07.539731 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
May 9 00:29:07.552007 kubelet[2143]: I0509 00:29:07.551972 2143 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 9 00:29:07.552276 kubelet[2143]: I0509 00:29:07.552261 2143 eviction_manager.go:189] "Eviction manager: starting control loop" May 9 00:29:07.552344 kubelet[2143]: I0509 00:29:07.552280 2143 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 9 00:29:07.552822 kubelet[2143]: I0509 00:29:07.552620 2143 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 9 00:29:07.554248 kubelet[2143]: E0509 00:29:07.554057 2143 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 9 00:29:07.619999 kubelet[2143]: W0509 00:29:07.619919 2143 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.48:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused May 9 00:29:07.620135 kubelet[2143]: E0509 00:29:07.620003 2143 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.48:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" May 9 00:29:07.653643 kubelet[2143]: I0509 00:29:07.653603 2143 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 9 00:29:07.654032 kubelet[2143]: E0509 00:29:07.653994 2143 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.48:6443/api/v1/nodes\": dial tcp 10.0.0.48:6443: connect: connection refused" node="localhost" May 9 00:29:07.711170 kubelet[2143]: W0509 00:29:07.711090 2143 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused May 9 00:29:07.711170 kubelet[2143]: E0509 00:29:07.711167 2143 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" May 9 00:29:07.856417 kubelet[2143]: I0509 00:29:07.856272 2143 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 9 00:29:07.856689 kubelet[2143]: E0509 00:29:07.856658 2143 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.48:6443/api/v1/nodes\": dial tcp 10.0.0.48:6443: connect: connection refused" node="localhost" May 9 00:29:08.003459 kubelet[2143]: E0509 00:29:08.003391 2143 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.48:6443: connect: connection refused" interval="1.6s" May 9 00:29:08.136538 systemd[1]: Created slice kubepods-burstable-podad534243804a41c8b3c9d277b9839cb8.slice - libcontainer container kubepods-burstable-podad534243804a41c8b3c9d277b9839cb8.slice. 
May 9 00:29:08.157359 systemd[1]: Created slice kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice - libcontainer container kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice. May 9 00:29:08.170823 systemd[1]: Created slice kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice - libcontainer container kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice. May 9 00:29:08.209137 kubelet[2143]: I0509 00:29:08.209074 2143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ad534243804a41c8b3c9d277b9839cb8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ad534243804a41c8b3c9d277b9839cb8\") " pod="kube-system/kube-apiserver-localhost" May 9 00:29:08.209137 kubelet[2143]: I0509 00:29:08.209113 2143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ad534243804a41c8b3c9d277b9839cb8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ad534243804a41c8b3c9d277b9839cb8\") " pod="kube-system/kube-apiserver-localhost" May 9 00:29:08.209137 kubelet[2143]: I0509 00:29:08.209137 2143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:29:08.209137 kubelet[2143]: I0509 00:29:08.209153 2143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:29:08.209425 kubelet[2143]: I0509 00:29:08.209185 2143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ad534243804a41c8b3c9d277b9839cb8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ad534243804a41c8b3c9d277b9839cb8\") " pod="kube-system/kube-apiserver-localhost" May 9 00:29:08.209425 kubelet[2143]: I0509 00:29:08.209238 2143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:29:08.209425 kubelet[2143]: I0509 00:29:08.209310 2143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:29:08.209425 kubelet[2143]: I0509 00:29:08.209368 2143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " 
pod="kube-system/kube-controller-manager-localhost" May 9 00:29:08.209425 kubelet[2143]: I0509 00:29:08.209398 2143 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 9 00:29:08.258583 kubelet[2143]: I0509 00:29:08.258506 2143 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 9 00:29:08.258874 kubelet[2143]: E0509 00:29:08.258835 2143 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.48:6443/api/v1/nodes\": dial tcp 10.0.0.48:6443: connect: connection refused" node="localhost" May 9 00:29:08.455229 kubelet[2143]: E0509 00:29:08.455174 2143 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:29:08.456031 containerd[1475]: time="2025-05-09T00:29:08.455989045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ad534243804a41c8b3c9d277b9839cb8,Namespace:kube-system,Attempt:0,}" May 9 00:29:08.469284 kubelet[2143]: E0509 00:29:08.469243 2143 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:29:08.469718 containerd[1475]: time="2025-05-09T00:29:08.469689954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,}" May 9 00:29:08.474053 kubelet[2143]: E0509 00:29:08.474006 2143 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:29:08.474485 containerd[1475]: time="2025-05-09T00:29:08.474449191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,}" May 9 00:29:08.696793 kubelet[2143]: E0509 00:29:08.696733 2143 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.48:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" May 9 00:29:08.990002 kubelet[2143]: E0509 00:29:08.989849 2143 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.48:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.48:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183db45e10e7de7c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-09 00:29:06.593177212 +0000 UTC m=+0.409419475,LastTimestamp:2025-05-09 00:29:06.593177212 +0000 UTC m=+0.409419475,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 9 00:29:09.030950 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4034247034.mount: Deactivated successfully. May 9 00:29:09.035900 containerd[1475]: time="2025-05-09T00:29:09.035799654Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 00:29:09.037736 containerd[1475]: time="2025-05-09T00:29:09.037666997Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 9 00:29:09.038948 containerd[1475]: time="2025-05-09T00:29:09.038869833Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 00:29:09.039906 containerd[1475]: time="2025-05-09T00:29:09.039848378Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 00:29:09.041012 containerd[1475]: time="2025-05-09T00:29:09.040954563Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 00:29:09.041975 containerd[1475]: time="2025-05-09T00:29:09.041927518Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 9 00:29:09.043031 containerd[1475]: time="2025-05-09T00:29:09.042994620Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" May 9 00:29:09.044584 containerd[1475]: time="2025-05-09T00:29:09.044535079Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 00:29:09.046500 containerd[1475]: time="2025-05-09T00:29:09.046466362Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 590.390324ms" May 9 00:29:09.047150 containerd[1475]: time="2025-05-09T00:29:09.047122382Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 577.380251ms" May 9 00:29:09.049860 containerd[1475]: time="2025-05-09T00:29:09.049826695Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 575.297644ms" May 9 00:29:09.060321 kubelet[2143]: I0509 00:29:09.060286 2143 kubelet_node_status.go:72] "Attempting to register node" 
node="localhost" May 9 00:29:09.060777 kubelet[2143]: E0509 00:29:09.060724 2143 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.48:6443/api/v1/nodes\": dial tcp 10.0.0.48:6443: connect: connection refused" node="localhost" May 9 00:29:09.287651 containerd[1475]: time="2025-05-09T00:29:09.287206852Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:29:09.287651 containerd[1475]: time="2025-05-09T00:29:09.287266544Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:29:09.287651 containerd[1475]: time="2025-05-09T00:29:09.287287333Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:29:09.287651 containerd[1475]: time="2025-05-09T00:29:09.287398982Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:29:09.310156 containerd[1475]: time="2025-05-09T00:29:09.309788609Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:29:09.310156 containerd[1475]: time="2025-05-09T00:29:09.310043888Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:29:09.310156 containerd[1475]: time="2025-05-09T00:29:09.310103149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:29:09.310587 containerd[1475]: time="2025-05-09T00:29:09.310253070Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:29:09.314776 containerd[1475]: time="2025-05-09T00:29:09.314444382Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:29:09.314776 containerd[1475]: time="2025-05-09T00:29:09.314514834Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:29:09.314776 containerd[1475]: time="2025-05-09T00:29:09.314533679Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:29:09.314776 containerd[1475]: time="2025-05-09T00:29:09.314623738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:29:09.333088 systemd[1]: Started cri-containerd-17cd3c933e3f3961877d827252de626737a2c7cd4fbf3063f0cbb175b659ca45.scope - libcontainer container 17cd3c933e3f3961877d827252de626737a2c7cd4fbf3063f0cbb175b659ca45. May 9 00:29:09.339538 systemd[1]: Started cri-containerd-d9b16eb0d678a3746117304e1efd120a1a102994e4b98b6432b0d71479f7bd29.scope - libcontainer container d9b16eb0d678a3746117304e1efd120a1a102994e4b98b6432b0d71479f7bd29. May 9 00:29:09.348445 systemd[1]: Started cri-containerd-72ae5f6ef91382a6bee6d660363a53e116de05d4895e8b8f54fd1572803a9379.scope - libcontainer container 72ae5f6ef91382a6bee6d660363a53e116de05d4895e8b8f54fd1572803a9379. 
May 9 00:29:09.407490 containerd[1475]: time="2025-05-09T00:29:09.407330971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ad534243804a41c8b3c9d277b9839cb8,Namespace:kube-system,Attempt:0,} returns sandbox id \"d9b16eb0d678a3746117304e1efd120a1a102994e4b98b6432b0d71479f7bd29\"" May 9 00:29:09.409092 kubelet[2143]: E0509 00:29:09.409045 2143 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:29:09.410445 containerd[1475]: time="2025-05-09T00:29:09.410228616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"17cd3c933e3f3961877d827252de626737a2c7cd4fbf3063f0cbb175b659ca45\"" May 9 00:29:09.411199 kubelet[2143]: E0509 00:29:09.411172 2143 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:29:09.413181 containerd[1475]: time="2025-05-09T00:29:09.413144184Z" level=info msg="CreateContainer within sandbox \"d9b16eb0d678a3746117304e1efd120a1a102994e4b98b6432b0d71479f7bd29\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 9 00:29:09.414903 containerd[1475]: time="2025-05-09T00:29:09.414838212Z" level=info msg="CreateContainer within sandbox \"17cd3c933e3f3961877d827252de626737a2c7cd4fbf3063f0cbb175b659ca45\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 9 00:29:09.422237 containerd[1475]: time="2025-05-09T00:29:09.422194671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,} returns sandbox id \"72ae5f6ef91382a6bee6d660363a53e116de05d4895e8b8f54fd1572803a9379\"" May 9 00:29:09.422868 kubelet[2143]: E0509 00:29:09.422843 2143 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:29:09.424257 containerd[1475]: time="2025-05-09T00:29:09.424223376Z" level=info msg="CreateContainer within sandbox \"72ae5f6ef91382a6bee6d660363a53e116de05d4895e8b8f54fd1572803a9379\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 9 00:29:09.440965 containerd[1475]: time="2025-05-09T00:29:09.440919513Z" level=info msg="CreateContainer within sandbox \"d9b16eb0d678a3746117304e1efd120a1a102994e4b98b6432b0d71479f7bd29\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"40486b79bfa1b4be17ce7aa7e5585a1126374606ebb9d580d4b354f33675ace4\"" May 9 00:29:09.441499 containerd[1475]: time="2025-05-09T00:29:09.441473482Z" level=info msg="StartContainer for \"40486b79bfa1b4be17ce7aa7e5585a1126374606ebb9d580d4b354f33675ace4\"" May 9 00:29:09.447312 containerd[1475]: time="2025-05-09T00:29:09.447283751Z" level=info msg="CreateContainer within sandbox \"17cd3c933e3f3961877d827252de626737a2c7cd4fbf3063f0cbb175b659ca45\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"27ead6ee14c1c2a7bc1ccece9813f3dae6357b134313e2c260aea7b6135d4772\"" May 9 00:29:09.447649 containerd[1475]: time="2025-05-09T00:29:09.447626784Z" level=info msg="StartContainer for \"27ead6ee14c1c2a7bc1ccece9813f3dae6357b134313e2c260aea7b6135d4772\"" May 9 00:29:09.451466 
containerd[1475]: time="2025-05-09T00:29:09.451433094Z" level=info msg="CreateContainer within sandbox \"72ae5f6ef91382a6bee6d660363a53e116de05d4895e8b8f54fd1572803a9379\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"cc1609cc393794fb5a9f593d875fe20f97dec6831fd69b310238c26577b0c8a9\"" May 9 00:29:09.452945 containerd[1475]: time="2025-05-09T00:29:09.451944353Z" level=info msg="StartContainer for \"cc1609cc393794fb5a9f593d875fe20f97dec6831fd69b310238c26577b0c8a9\"" May 9 00:29:09.471072 systemd[1]: Started cri-containerd-40486b79bfa1b4be17ce7aa7e5585a1126374606ebb9d580d4b354f33675ace4.scope - libcontainer container 40486b79bfa1b4be17ce7aa7e5585a1126374606ebb9d580d4b354f33675ace4. May 9 00:29:09.475089 systemd[1]: Started cri-containerd-27ead6ee14c1c2a7bc1ccece9813f3dae6357b134313e2c260aea7b6135d4772.scope - libcontainer container 27ead6ee14c1c2a7bc1ccece9813f3dae6357b134313e2c260aea7b6135d4772. May 9 00:29:09.479183 systemd[1]: Started cri-containerd-cc1609cc393794fb5a9f593d875fe20f97dec6831fd69b310238c26577b0c8a9.scope - libcontainer container cc1609cc393794fb5a9f593d875fe20f97dec6831fd69b310238c26577b0c8a9. May 9 00:29:09.507146 kubelet[2143]: W0509 00:29:09.507074 2143 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused May 9 00:29:09.507146 kubelet[2143]: E0509 00:29:09.507116 2143 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" May 9 00:29:09.524934 containerd[1475]: time="2025-05-09T00:29:09.524616314Z" level=info msg="StartContainer for \"27ead6ee14c1c2a7bc1ccece9813f3dae6357b134313e2c260aea7b6135d4772\" returns successfully" May 9 00:29:09.525512 containerd[1475]: time="2025-05-09T00:29:09.524866103Z" level=info msg="StartContainer for \"40486b79bfa1b4be17ce7aa7e5585a1126374606ebb9d580d4b354f33675ace4\" returns successfully" May 9 00:29:09.535357 containerd[1475]: time="2025-05-09T00:29:09.535315212Z" level=info msg="StartContainer for \"cc1609cc393794fb5a9f593d875fe20f97dec6831fd69b310238c26577b0c8a9\" returns successfully" May 9 00:29:09.632142 kubelet[2143]: E0509 00:29:09.630772 2143 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:29:09.634562 kubelet[2143]: E0509 00:29:09.634232 2143 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:29:09.636389 kubelet[2143]: E0509 00:29:09.636337 2143 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:29:10.642220 kubelet[2143]: E0509 00:29:10.642176 2143 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:29:10.662073 kubelet[2143]: I0509 00:29:10.661975 2143 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 9 00:29:11.387639 
kubelet[2143]: E0509 00:29:11.387583 2143 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 9 00:29:11.554212 kubelet[2143]: I0509 00:29:11.554132 2143 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 9 00:29:11.554212 kubelet[2143]: E0509 00:29:11.554210 2143 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 9 00:29:11.584365 kubelet[2143]: I0509 00:29:11.584294 2143 apiserver.go:52] "Watching apiserver" May 9 00:29:11.600079 kubelet[2143]: I0509 00:29:11.600027 2143 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 9 00:29:13.613192 systemd[1]: Reloading requested from client PID 2422 ('systemctl') (unit session-7.scope)... May 9 00:29:13.613208 systemd[1]: Reloading... May 9 00:29:13.698929 zram_generator::config[2464]: No configuration found. May 9 00:29:13.912784 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 00:29:13.957686 kubelet[2143]: E0509 00:29:13.957652 2143 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:29:14.007056 systemd[1]: Reloading finished in 393 ms. May 9 00:29:14.052469 kubelet[2143]: I0509 00:29:14.052416 2143 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 9 00:29:14.052534 kubelet[2143]: E0509 00:29:14.052395 2143 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{localhost.183db45e10e7de7c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-09 00:29:06.593177212 +0000 UTC m=+0.409419475,LastTimestamp:2025-05-09 00:29:06.593177212 +0000 UTC m=+0.409419475,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 9 00:29:14.052684 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:29:14.082359 systemd[1]: kubelet.service: Deactivated successfully. May 9 00:29:14.082688 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:29:14.090265 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:29:14.231248 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:29:14.237236 (kubelet)[2506]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 9 00:29:14.280016 kubelet[2506]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 9 00:29:14.280016 kubelet[2506]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. May 9 00:29:14.280016 kubelet[2506]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 9 00:29:14.280518 kubelet[2506]: I0509 00:29:14.280066 2506 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 9 00:29:14.286955 kubelet[2506]: I0509 00:29:14.286907 2506 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 9 00:29:14.286955 kubelet[2506]: I0509 00:29:14.286939 2506 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 9 00:29:14.287166 kubelet[2506]: I0509 00:29:14.287140 2506 server.go:929] "Client rotation is on, will bootstrap in background" May 9 00:29:14.291178 kubelet[2506]: I0509 00:29:14.290597 2506 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 9 00:29:14.294281 kubelet[2506]: I0509 00:29:14.294245 2506 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 9 00:29:14.297660 kubelet[2506]: E0509 00:29:14.297621 2506 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 9 00:29:14.297660 kubelet[2506]: I0509 00:29:14.297652 2506 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 9 00:29:14.302553 kubelet[2506]: I0509 00:29:14.302523 2506 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 9 00:29:14.302709 kubelet[2506]: I0509 00:29:14.302686 2506 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 9 00:29:14.302901 kubelet[2506]: I0509 00:29:14.302843 2506 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 9 00:29:14.303084 kubelet[2506]: I0509 00:29:14.302900 2506 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 9 00:29:14.303084 kubelet[2506]: I0509 00:29:14.303081 2506 topology_manager.go:138] "Creating topology manager with none policy" May 9 00:29:14.303198 kubelet[2506]: I0509 00:29:14.303090 2506 container_manager_linux.go:300] "Creating device plugin manager" May 9 00:29:14.303198 kubelet[2506]: I0509 00:29:14.303126 2506 state_mem.go:36] "Initialized new in-memory state store" May 9 00:29:14.303272 kubelet[2506]: I0509 00:29:14.303257 2506 kubelet.go:408] "Attempting to sync node with API server" May 9 00:29:14.303306 kubelet[2506]: I0509 00:29:14.303272 2506 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 9 00:29:14.303306 kubelet[2506]: I0509 00:29:14.303305 2506 kubelet.go:314] "Adding apiserver pod source" May 9 00:29:14.303348 kubelet[2506]: I0509 00:29:14.303322 2506 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 9 00:29:14.304146 kubelet[2506]: I0509 00:29:14.303854 2506 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 9 00:29:14.304463 kubelet[2506]: I0509 00:29:14.304423 2506 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 9 00:29:14.305646 kubelet[2506]: I0509 00:29:14.305602 2506 server.go:1269] "Started kubelet" May 9 00:29:14.310077 kubelet[2506]: I0509 00:29:14.309951 2506 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 9 00:29:14.310347 
kubelet[2506]: I0509 00:29:14.310323 2506 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 9 00:29:14.310418 kubelet[2506]: I0509 00:29:14.310383 2506 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 9 00:29:14.312080 kubelet[2506]: I0509 00:29:14.312053 2506 server.go:460] "Adding debug handlers to kubelet server" May 9 00:29:14.312432 kubelet[2506]: I0509 00:29:14.312398 2506 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 9 00:29:14.314158 kubelet[2506]: I0509 00:29:14.314007 2506 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 9 00:29:14.315167 kubelet[2506]: I0509 00:29:14.315141 2506 volume_manager.go:289] "Starting Kubelet Volume Manager" May 9 00:29:14.315354 kubelet[2506]: E0509 00:29:14.315331 2506 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:29:14.315541 kubelet[2506]: I0509 00:29:14.315516 2506 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 9 00:29:14.315698 kubelet[2506]: I0509 00:29:14.315678 2506 reconciler.go:26] "Reconciler: start to sync state" May 9 00:29:14.321544 kubelet[2506]: E0509 00:29:14.321499 2506 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 9 00:29:14.321705 kubelet[2506]: I0509 00:29:14.321673 2506 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 9 00:29:14.325576 kubelet[2506]: I0509 00:29:14.324778 2506 factory.go:221] Registration of the containerd container factory successfully May 9 00:29:14.325576 kubelet[2506]: I0509 00:29:14.324803 2506 factory.go:221] Registration of the systemd container factory successfully May 9 00:29:14.328989 kubelet[2506]: I0509 00:29:14.328943 2506 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 9 00:29:14.330513 kubelet[2506]: I0509 00:29:14.330479 2506 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 9 00:29:14.330513 kubelet[2506]: I0509 00:29:14.330514 2506 status_manager.go:217] "Starting to sync pod status with apiserver" May 9 00:29:14.330583 kubelet[2506]: I0509 00:29:14.330535 2506 kubelet.go:2321] "Starting kubelet main sync loop" May 9 00:29:14.330621 kubelet[2506]: E0509 00:29:14.330582 2506 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 9 00:29:14.365186 kubelet[2506]: I0509 00:29:14.365151 2506 cpu_manager.go:214] "Starting CPU manager" policy="none" May 9 00:29:14.365186 kubelet[2506]: I0509 00:29:14.365172 2506 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 9 00:29:14.365186 kubelet[2506]: I0509 00:29:14.365192 2506 state_mem.go:36] "Initialized new in-memory state store" May 9 00:29:14.365348 kubelet[2506]: I0509 00:29:14.365332 2506 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 9 00:29:14.365395 kubelet[2506]: I0509 00:29:14.365347 2506 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 9 00:29:14.365395 kubelet[2506]: I0509 00:29:14.365366 2506 policy_none.go:49] "None policy: Start" May 9 00:29:14.366077 kubelet[2506]: I0509 00:29:14.366046 2506 memory_manager.go:170] "Starting memorymanager" policy="None" May 9 00:29:14.366077 kubelet[2506]: I0509 00:29:14.366076 2506 state_mem.go:35] "Initializing new in-memory state store" May 9 00:29:14.366297 kubelet[2506]: I0509 00:29:14.366282 2506 state_mem.go:75] "Updated machine memory state" May 9 00:29:14.370694 kubelet[2506]: I0509 00:29:14.370631 2506 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 9 00:29:14.370923 kubelet[2506]: I0509 00:29:14.370879 2506 eviction_manager.go:189] "Eviction manager: starting control loop" May 9 00:29:14.370966 kubelet[2506]: I0509 00:29:14.370918 2506 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 9 00:29:14.371134 kubelet[2506]: I0509 00:29:14.371109 2506 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 9 00:29:14.437123 kubelet[2506]: E0509 00:29:14.437076 2506 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 9 00:29:14.476072 kubelet[2506]: I0509 00:29:14.476034 2506 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 9 00:29:14.482069 kubelet[2506]: I0509 00:29:14.481962 2506 kubelet_node_status.go:111] "Node was previously registered" node="localhost" May 9 00:29:14.482069 kubelet[2506]: I0509 00:29:14.482031 2506 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 9 00:29:14.616448 kubelet[2506]: I0509 00:29:14.616394 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ad534243804a41c8b3c9d277b9839cb8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ad534243804a41c8b3c9d277b9839cb8\") " pod="kube-system/kube-apiserver-localhost" May 9 00:29:14.616448 kubelet[2506]: I0509 00:29:14.616435 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: 
\"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:29:14.616448 kubelet[2506]: I0509 00:29:14.616455 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:29:14.616660 kubelet[2506]: I0509 00:29:14.616473 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 9 00:29:14.616660 kubelet[2506]: I0509 00:29:14.616551 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ad534243804a41c8b3c9d277b9839cb8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ad534243804a41c8b3c9d277b9839cb8\") " pod="kube-system/kube-apiserver-localhost" May 9 00:29:14.616660 kubelet[2506]: I0509 00:29:14.616590 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ad534243804a41c8b3c9d277b9839cb8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ad534243804a41c8b3c9d277b9839cb8\") " pod="kube-system/kube-apiserver-localhost" May 9 00:29:14.616660 kubelet[2506]: I0509 00:29:14.616610 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:29:14.616660 kubelet[2506]: I0509 00:29:14.616628 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:29:14.616775 kubelet[2506]: I0509 00:29:14.616645 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:29:14.736793 kubelet[2506]: E0509 00:29:14.736639 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:29:14.736793 kubelet[2506]: E0509 00:29:14.736753 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:29:14.738092 kubelet[2506]: E0509 00:29:14.738064 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:29:15.304391 kubelet[2506]: I0509 00:29:15.304344 2506 apiserver.go:52] "Watching apiserver" May 9 00:29:15.315795 kubelet[2506]: I0509 00:29:15.315772 2506 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 9 00:29:15.489690 kubelet[2506]: E0509 00:29:15.344472 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:29:15.489690 kubelet[2506]: E0509 00:29:15.344578 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:29:15.497379 kubelet[2506]: E0509 00:29:15.497335 2506 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 9 00:29:15.497558 kubelet[2506]: E0509 00:29:15.497537 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:29:15.549637 kubelet[2506]: I0509 00:29:15.549347 2506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.549323322 podStartE2EDuration="1.549323322s" podCreationTimestamp="2025-05-09 00:29:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:29:15.5342465 +0000 UTC m=+1.289909766" watchObservedRunningTime="2025-05-09 00:29:15.549323322 +0000 UTC m=+1.304986578" May 9 00:29:15.556400 kubelet[2506]: I0509 00:29:15.556262 2506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.55624546 podStartE2EDuration="2.55624546s" podCreationTimestamp="2025-05-09 00:29:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:29:15.549308373 +0000 UTC m=+1.304971639" watchObservedRunningTime="2025-05-09 00:29:15.55624546 +0000 UTC m=+1.311908726" May 9 00:29:15.563016 kubelet[2506]: I0509 00:29:15.562952 2506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.5629374600000001 podStartE2EDuration="1.56293746s" podCreationTimestamp="2025-05-09 00:29:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:29:15.556521458 +0000 UTC m=+1.312184724" watchObservedRunningTime="2025-05-09 00:29:15.56293746 +0000 UTC m=+1.318600726" May 9 00:29:16.345618 kubelet[2506]: E0509 00:29:16.345576 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:29:18.004712 kubelet[2506]: E0509 00:29:18.004677 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:29:18.694188 kubelet[2506]: E0509 00:29:18.694140 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:29:19.332760 sudo[1655]: pam_unix(sudo:session): session closed for user root May 9 00:29:19.334651 sshd[1652]: pam_unix(sshd:session): session closed for user core May 9 00:29:19.338463 systemd[1]: sshd@6-10.0.0.48:22-10.0.0.1:50596.service: Deactivated successfully. May 9 00:29:19.340338 systemd[1]: session-7.scope: Deactivated successfully. May 9 00:29:19.340536 systemd[1]: session-7.scope: Consumed 6.249s CPU time, 159.7M memory peak, 0B memory swap peak. May 9 00:29:19.341036 systemd-logind[1463]: Session 7 logged out. Waiting for processes to exit. May 9 00:29:19.342004 systemd-logind[1463]: Removed session 7. May 9 00:29:20.428835 kubelet[2506]: I0509 00:29:20.428796 2506 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 9 00:29:20.429291 containerd[1475]: time="2025-05-09T00:29:20.429125602Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 9 00:29:20.429553 kubelet[2506]: I0509 00:29:20.429296 2506 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 9 00:29:21.231106 systemd[1]: Created slice kubepods-besteffort-pod4c0e1977_72a8_46d6_a231_2e1d653c7cae.slice - libcontainer container kubepods-besteffort-pod4c0e1977_72a8_46d6_a231_2e1d653c7cae.slice. May 9 00:29:21.260516 kubelet[2506]: I0509 00:29:21.260475 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4c0e1977-72a8-46d6-a231-2e1d653c7cae-xtables-lock\") pod \"kube-proxy-7n4h5\" (UID: \"4c0e1977-72a8-46d6-a231-2e1d653c7cae\") " pod="kube-system/kube-proxy-7n4h5" May 9 00:29:21.260516 kubelet[2506]: I0509 00:29:21.260505 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4c0e1977-72a8-46d6-a231-2e1d653c7cae-lib-modules\") pod \"kube-proxy-7n4h5\" (UID: \"4c0e1977-72a8-46d6-a231-2e1d653c7cae\") " pod="kube-system/kube-proxy-7n4h5" May 9 00:29:21.260516 kubelet[2506]: I0509 00:29:21.260528 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98vqf\" (UniqueName: \"kubernetes.io/projected/4c0e1977-72a8-46d6-a231-2e1d653c7cae-kube-api-access-98vqf\") pod \"kube-proxy-7n4h5\" (UID: \"4c0e1977-72a8-46d6-a231-2e1d653c7cae\") " pod="kube-system/kube-proxy-7n4h5" May 9 00:29:21.260714 kubelet[2506]: I0509 00:29:21.260551 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4c0e1977-72a8-46d6-a231-2e1d653c7cae-kube-proxy\") pod \"kube-proxy-7n4h5\" (UID: \"4c0e1977-72a8-46d6-a231-2e1d653c7cae\") " pod="kube-system/kube-proxy-7n4h5" May 9 00:29:21.342306 systemd[1]: Created slice kubepods-besteffort-pode7c85418_f496_4eb3_8277_721629b9a8f8.slice - libcontainer container kubepods-besteffort-pode7c85418_f496_4eb3_8277_721629b9a8f8.slice. 
May 9 00:29:21.361406 kubelet[2506]: I0509 00:29:21.361365 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6qzm\" (UniqueName: \"kubernetes.io/projected/e7c85418-f496-4eb3-8277-721629b9a8f8-kube-api-access-z6qzm\") pod \"tigera-operator-6f6897fdc5-bkc7r\" (UID: \"e7c85418-f496-4eb3-8277-721629b9a8f8\") " pod="tigera-operator/tigera-operator-6f6897fdc5-bkc7r" May 9 00:29:21.361501 kubelet[2506]: I0509 00:29:21.361420 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e7c85418-f496-4eb3-8277-721629b9a8f8-var-lib-calico\") pod \"tigera-operator-6f6897fdc5-bkc7r\" (UID: \"e7c85418-f496-4eb3-8277-721629b9a8f8\") " pod="tigera-operator/tigera-operator-6f6897fdc5-bkc7r" May 9 00:29:21.543341 kubelet[2506]: E0509 00:29:21.543156 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:29:21.543903 containerd[1475]: time="2025-05-09T00:29:21.543823924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7n4h5,Uid:4c0e1977-72a8-46d6-a231-2e1d653c7cae,Namespace:kube-system,Attempt:0,}" May 9 00:29:21.569591 containerd[1475]: time="2025-05-09T00:29:21.569485409Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:29:21.569591 containerd[1475]: time="2025-05-09T00:29:21.569558999Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:29:21.569591 containerd[1475]: time="2025-05-09T00:29:21.569573316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:29:21.569735 containerd[1475]: time="2025-05-09T00:29:21.569687724Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:29:21.594020 systemd[1]: Started cri-containerd-baffc19f1e7bf9abefe9fcb694144461fb7491805cd9e1783a172f86a61545aa.scope - libcontainer container baffc19f1e7bf9abefe9fcb694144461fb7491805cd9e1783a172f86a61545aa. 
May 9 00:29:21.615095 containerd[1475]: time="2025-05-09T00:29:21.615036541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7n4h5,Uid:4c0e1977-72a8-46d6-a231-2e1d653c7cae,Namespace:kube-system,Attempt:0,} returns sandbox id \"baffc19f1e7bf9abefe9fcb694144461fb7491805cd9e1783a172f86a61545aa\"" May 9 00:29:21.615798 kubelet[2506]: E0509 00:29:21.615775 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:29:21.618945 containerd[1475]: time="2025-05-09T00:29:21.618874207Z" level=info msg="CreateContainer within sandbox \"baffc19f1e7bf9abefe9fcb694144461fb7491805cd9e1783a172f86a61545aa\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 9 00:29:21.635943 containerd[1475]: time="2025-05-09T00:29:21.635878204Z" level=info msg="CreateContainer within sandbox \"baffc19f1e7bf9abefe9fcb694144461fb7491805cd9e1783a172f86a61545aa\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bf1a969a17a15646e5c8e1e4d442adf3e5a599165f9c31ae1a3e69a3f70cc039\"" May 9 00:29:21.636478 containerd[1475]: time="2025-05-09T00:29:21.636453067Z" level=info msg="StartContainer for \"bf1a969a17a15646e5c8e1e4d442adf3e5a599165f9c31ae1a3e69a3f70cc039\"" May 9 00:29:21.646663 containerd[1475]: time="2025-05-09T00:29:21.646217897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-bkc7r,Uid:e7c85418-f496-4eb3-8277-721629b9a8f8,Namespace:tigera-operator,Attempt:0,}" May 9 00:29:21.665045 systemd[1]: Started cri-containerd-bf1a969a17a15646e5c8e1e4d442adf3e5a599165f9c31ae1a3e69a3f70cc039.scope - libcontainer container bf1a969a17a15646e5c8e1e4d442adf3e5a599165f9c31ae1a3e69a3f70cc039. May 9 00:29:21.674378 containerd[1475]: time="2025-05-09T00:29:21.674267707Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:29:21.674378 containerd[1475]: time="2025-05-09T00:29:21.674333383Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:29:21.674775 containerd[1475]: time="2025-05-09T00:29:21.674571926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:29:21.675464 containerd[1475]: time="2025-05-09T00:29:21.675401631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:29:21.694044 systemd[1]: Started cri-containerd-613bf95c4d942354e5e861e45e48c220f5c0bad2f34d038c0a30ceca333597ea.scope - libcontainer container 613bf95c4d942354e5e861e45e48c220f5c0bad2f34d038c0a30ceca333597ea. 
May 9 00:29:21.698338 containerd[1475]: time="2025-05-09T00:29:21.698232882Z" level=info msg="StartContainer for \"bf1a969a17a15646e5c8e1e4d442adf3e5a599165f9c31ae1a3e69a3f70cc039\" returns successfully" May 9 00:29:21.735061 containerd[1475]: time="2025-05-09T00:29:21.734965069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-bkc7r,Uid:e7c85418-f496-4eb3-8277-721629b9a8f8,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"613bf95c4d942354e5e861e45e48c220f5c0bad2f34d038c0a30ceca333597ea\"" May 9 00:29:21.737955 containerd[1475]: time="2025-05-09T00:29:21.737694812Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 9 00:29:22.354833 kubelet[2506]: E0509 00:29:22.354778 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:29:22.365089 kubelet[2506]: I0509 00:29:22.364997 2506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7n4h5" podStartSLOduration=1.364976437 podStartE2EDuration="1.364976437s" podCreationTimestamp="2025-05-09 00:29:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:29:22.364506966 +0000 UTC m=+8.120170242" watchObservedRunningTime="2025-05-09 00:29:22.364976437 +0000 UTC m=+8.120639703" May 9 00:29:22.484002 update_engine[1464]: I20250509 00:29:22.483932 1464 update_attempter.cc:509] Updating boot flags... May 9 00:29:22.507925 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2848) May 9 00:29:22.545950 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2849) May 9 00:29:22.664042 kubelet[2506]: E0509 00:29:22.664014 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:29:23.021921 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount654256504.mount: Deactivated successfully. 
May 9 00:29:23.356609 kubelet[2506]: E0509 00:29:23.356227 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:29:23.676700 containerd[1475]: time="2025-05-09T00:29:23.676631124Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:29:23.677483 containerd[1475]: time="2025-05-09T00:29:23.677422996Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662" May 9 00:29:23.678780 containerd[1475]: time="2025-05-09T00:29:23.678752015Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:29:23.681232 containerd[1475]: time="2025-05-09T00:29:23.681187914Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:29:23.681872 containerd[1475]: time="2025-05-09T00:29:23.681833478Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 1.944112746s" May 9 00:29:23.681929 containerd[1475]: time="2025-05-09T00:29:23.681872852Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\"" May 9 00:29:23.683923 containerd[1475]: time="2025-05-09T00:29:23.683876081Z" level=info msg="CreateContainer within sandbox \"613bf95c4d942354e5e861e45e48c220f5c0bad2f34d038c0a30ceca333597ea\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 9 00:29:23.696822 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2031598029.mount: Deactivated successfully. May 9 00:29:23.698368 containerd[1475]: time="2025-05-09T00:29:23.698322431Z" level=info msg="CreateContainer within sandbox \"613bf95c4d942354e5e861e45e48c220f5c0bad2f34d038c0a30ceca333597ea\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"b1e39909d338cc285c0ffe1ae86feb0531919800e9a33cec9ee2e9502a0bd41e\"" May 9 00:29:23.698897 containerd[1475]: time="2025-05-09T00:29:23.698830624Z" level=info msg="StartContainer for \"b1e39909d338cc285c0ffe1ae86feb0531919800e9a33cec9ee2e9502a0bd41e\"" May 9 00:29:23.735015 systemd[1]: Started cri-containerd-b1e39909d338cc285c0ffe1ae86feb0531919800e9a33cec9ee2e9502a0bd41e.scope - libcontainer container b1e39909d338cc285c0ffe1ae86feb0531919800e9a33cec9ee2e9502a0bd41e. 
May 9 00:29:23.761858 containerd[1475]: time="2025-05-09T00:29:23.761798220Z" level=info msg="StartContainer for \"b1e39909d338cc285c0ffe1ae86feb0531919800e9a33cec9ee2e9502a0bd41e\" returns successfully" May 9 00:29:24.368609 kubelet[2506]: I0509 00:29:24.368357 2506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6f6897fdc5-bkc7r" podStartSLOduration=1.422698455 podStartE2EDuration="3.368337798s" podCreationTimestamp="2025-05-09 00:29:21 +0000 UTC" firstStartedPulling="2025-05-09 00:29:21.7370246 +0000 UTC m=+7.492687866" lastFinishedPulling="2025-05-09 00:29:23.682663943 +0000 UTC m=+9.438327209" observedRunningTime="2025-05-09 00:29:24.368252426 +0000 UTC m=+10.123915692" watchObservedRunningTime="2025-05-09 00:29:24.368337798 +0000 UTC m=+10.124001064" May 9 00:29:26.729635 systemd[1]: Created slice kubepods-besteffort-pod87a1f652_aa3a_4f06_a166_c7c01ef13eef.slice - libcontainer container kubepods-besteffort-pod87a1f652_aa3a_4f06_a166_c7c01ef13eef.slice. May 9 00:29:26.776550 systemd[1]: Created slice kubepods-besteffort-pod392d4129_4c1f_4100_b8e7_53df29596887.slice - libcontainer container kubepods-besteffort-pod392d4129_4c1f_4100_b8e7_53df29596887.slice. May 9 00:29:26.799654 kubelet[2506]: I0509 00:29:26.799593 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87a1f652-aa3a-4f06-a166-c7c01ef13eef-tigera-ca-bundle\") pod \"calico-typha-8478d55498-4kngc\" (UID: \"87a1f652-aa3a-4f06-a166-c7c01ef13eef\") " pod="calico-system/calico-typha-8478d55498-4kngc" May 9 00:29:26.799654 kubelet[2506]: I0509 00:29:26.799641 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/392d4129-4c1f-4100-b8e7-53df29596887-var-lib-calico\") pod \"calico-node-slqmz\" (UID: \"392d4129-4c1f-4100-b8e7-53df29596887\") " pod="calico-system/calico-node-slqmz" May 9 00:29:26.800165 kubelet[2506]: I0509 00:29:26.799689 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxmdc\" (UniqueName: \"kubernetes.io/projected/87a1f652-aa3a-4f06-a166-c7c01ef13eef-kube-api-access-pxmdc\") pod \"calico-typha-8478d55498-4kngc\" (UID: \"87a1f652-aa3a-4f06-a166-c7c01ef13eef\") " pod="calico-system/calico-typha-8478d55498-4kngc" May 9 00:29:26.800165 kubelet[2506]: I0509 00:29:26.799731 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/392d4129-4c1f-4100-b8e7-53df29596887-node-certs\") pod \"calico-node-slqmz\" (UID: \"392d4129-4c1f-4100-b8e7-53df29596887\") " pod="calico-system/calico-node-slqmz" May 9 00:29:26.800165 kubelet[2506]: I0509 00:29:26.799751 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/392d4129-4c1f-4100-b8e7-53df29596887-cni-net-dir\") pod \"calico-node-slqmz\" (UID: \"392d4129-4c1f-4100-b8e7-53df29596887\") " pod="calico-system/calico-node-slqmz" May 9 00:29:26.800165 kubelet[2506]: I0509 00:29:26.799769 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/392d4129-4c1f-4100-b8e7-53df29596887-flexvol-driver-host\") pod \"calico-node-slqmz\" (UID: 
\"392d4129-4c1f-4100-b8e7-53df29596887\") " pod="calico-system/calico-node-slqmz" May 9 00:29:26.800165 kubelet[2506]: I0509 00:29:26.799791 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/87a1f652-aa3a-4f06-a166-c7c01ef13eef-typha-certs\") pod \"calico-typha-8478d55498-4kngc\" (UID: \"87a1f652-aa3a-4f06-a166-c7c01ef13eef\") " pod="calico-system/calico-typha-8478d55498-4kngc" May 9 00:29:26.800414 kubelet[2506]: I0509 00:29:26.799811 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/392d4129-4c1f-4100-b8e7-53df29596887-policysync\") pod \"calico-node-slqmz\" (UID: \"392d4129-4c1f-4100-b8e7-53df29596887\") " pod="calico-system/calico-node-slqmz" May 9 00:29:26.800414 kubelet[2506]: I0509 00:29:26.799839 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/392d4129-4c1f-4100-b8e7-53df29596887-var-run-calico\") pod \"calico-node-slqmz\" (UID: \"392d4129-4c1f-4100-b8e7-53df29596887\") " pod="calico-system/calico-node-slqmz" May 9 00:29:26.800414 kubelet[2506]: I0509 00:29:26.799861 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/392d4129-4c1f-4100-b8e7-53df29596887-cni-bin-dir\") pod \"calico-node-slqmz\" (UID: \"392d4129-4c1f-4100-b8e7-53df29596887\") " pod="calico-system/calico-node-slqmz" May 9 00:29:26.800414 kubelet[2506]: I0509 00:29:26.799910 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58hrt\" (UniqueName: \"kubernetes.io/projected/392d4129-4c1f-4100-b8e7-53df29596887-kube-api-access-58hrt\") pod \"calico-node-slqmz\" (UID: \"392d4129-4c1f-4100-b8e7-53df29596887\") " pod="calico-system/calico-node-slqmz" May 9 00:29:26.800414 kubelet[2506]: I0509 00:29:26.799936 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/392d4129-4c1f-4100-b8e7-53df29596887-lib-modules\") pod \"calico-node-slqmz\" (UID: \"392d4129-4c1f-4100-b8e7-53df29596887\") " pod="calico-system/calico-node-slqmz" May 9 00:29:26.800533 kubelet[2506]: I0509 00:29:26.799996 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/392d4129-4c1f-4100-b8e7-53df29596887-xtables-lock\") pod \"calico-node-slqmz\" (UID: \"392d4129-4c1f-4100-b8e7-53df29596887\") " pod="calico-system/calico-node-slqmz" May 9 00:29:26.800533 kubelet[2506]: I0509 00:29:26.800019 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/392d4129-4c1f-4100-b8e7-53df29596887-cni-log-dir\") pod \"calico-node-slqmz\" (UID: \"392d4129-4c1f-4100-b8e7-53df29596887\") " pod="calico-system/calico-node-slqmz" May 9 00:29:26.800533 kubelet[2506]: I0509 00:29:26.800040 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/392d4129-4c1f-4100-b8e7-53df29596887-tigera-ca-bundle\") pod \"calico-node-slqmz\" (UID: \"392d4129-4c1f-4100-b8e7-53df29596887\") " 
pod="calico-system/calico-node-slqmz" May 9 00:29:26.882482 kubelet[2506]: E0509 00:29:26.882429 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vj68l" podUID="7888b668-be66-4298-b746-119a722815e9" May 9 00:29:26.901392 kubelet[2506]: I0509 00:29:26.901325 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7888b668-be66-4298-b746-119a722815e9-registration-dir\") pod \"csi-node-driver-vj68l\" (UID: \"7888b668-be66-4298-b746-119a722815e9\") " pod="calico-system/csi-node-driver-vj68l" May 9 00:29:26.901529 kubelet[2506]: I0509 00:29:26.901443 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7888b668-be66-4298-b746-119a722815e9-varrun\") pod \"csi-node-driver-vj68l\" (UID: \"7888b668-be66-4298-b746-119a722815e9\") " pod="calico-system/csi-node-driver-vj68l" May 9 00:29:26.901588 kubelet[2506]: I0509 00:29:26.901562 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7888b668-be66-4298-b746-119a722815e9-kubelet-dir\") pod \"csi-node-driver-vj68l\" (UID: \"7888b668-be66-4298-b746-119a722815e9\") " pod="calico-system/csi-node-driver-vj68l" May 9 00:29:26.901803 kubelet[2506]: I0509 00:29:26.901597 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7888b668-be66-4298-b746-119a722815e9-socket-dir\") pod \"csi-node-driver-vj68l\" (UID: \"7888b668-be66-4298-b746-119a722815e9\") " pod="calico-system/csi-node-driver-vj68l" May 9 00:29:26.901803 kubelet[2506]: I0509 00:29:26.901624 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp9m8\" (UniqueName: \"kubernetes.io/projected/7888b668-be66-4298-b746-119a722815e9-kube-api-access-vp9m8\") pod \"csi-node-driver-vj68l\" (UID: \"7888b668-be66-4298-b746-119a722815e9\") " pod="calico-system/csi-node-driver-vj68l" May 9 00:29:26.905249 kubelet[2506]: E0509 00:29:26.905200 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:26.905249 kubelet[2506]: W0509 00:29:26.905227 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:26.905706 kubelet[2506]: E0509 00:29:26.905307 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 00:29:26.910625 kubelet[2506]: E0509 00:29:26.908298 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:26.910625 kubelet[2506]: W0509 00:29:26.908320 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:26.910625 kubelet[2506]: E0509 00:29:26.908346 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:29:26.910625 kubelet[2506]: E0509 00:29:26.908563 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:26.910625 kubelet[2506]: W0509 00:29:26.908572 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:26.910625 kubelet[2506]: E0509 00:29:26.908580 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:29:26.910625 kubelet[2506]: E0509 00:29:26.908817 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:26.910625 kubelet[2506]: W0509 00:29:26.908834 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:26.910625 kubelet[2506]: E0509 00:29:26.908934 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:29:26.910625 kubelet[2506]: E0509 00:29:26.909074 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:26.911353 kubelet[2506]: W0509 00:29:26.909082 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:26.911353 kubelet[2506]: E0509 00:29:26.909155 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:29:26.911353 kubelet[2506]: E0509 00:29:26.909286 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:26.911353 kubelet[2506]: W0509 00:29:26.909293 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:26.911353 kubelet[2506]: E0509 00:29:26.909372 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 00:29:26.911353 kubelet[2506]: E0509 00:29:26.909931 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:26.911353 kubelet[2506]: W0509 00:29:26.909939 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:26.911353 kubelet[2506]: E0509 00:29:26.910047 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:29:26.911353 kubelet[2506]: E0509 00:29:26.910166 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:26.911353 kubelet[2506]: W0509 00:29:26.910174 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:26.911567 kubelet[2506]: E0509 00:29:26.910269 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:29:26.911567 kubelet[2506]: E0509 00:29:26.910384 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:26.911567 kubelet[2506]: W0509 00:29:26.910392 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:26.911567 kubelet[2506]: E0509 00:29:26.910466 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:29:26.911567 kubelet[2506]: E0509 00:29:26.910591 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:26.911567 kubelet[2506]: W0509 00:29:26.910597 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:26.911567 kubelet[2506]: E0509 00:29:26.910682 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:29:26.912008 kubelet[2506]: E0509 00:29:26.911988 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:26.912008 kubelet[2506]: W0509 00:29:26.912005 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:26.912100 kubelet[2506]: E0509 00:29:26.912090 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 00:29:26.913104 kubelet[2506]: E0509 00:29:26.913007 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:26.913104 kubelet[2506]: W0509 00:29:26.913022 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:26.913613 kubelet[2506]: E0509 00:29:26.913112 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:29:26.913613 kubelet[2506]: E0509 00:29:26.913232 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:26.913613 kubelet[2506]: W0509 00:29:26.913239 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:26.913613 kubelet[2506]: E0509 00:29:26.913307 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:29:26.917769 kubelet[2506]: E0509 00:29:26.917423 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:26.917769 kubelet[2506]: W0509 00:29:26.917448 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:26.917769 kubelet[2506]: E0509 00:29:26.917483 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:29:26.918912 kubelet[2506]: E0509 00:29:26.918143 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:26.918912 kubelet[2506]: W0509 00:29:26.918159 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:26.918912 kubelet[2506]: E0509 00:29:26.918197 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:29:26.918912 kubelet[2506]: E0509 00:29:26.918437 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:26.918912 kubelet[2506]: W0509 00:29:26.918449 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:26.918912 kubelet[2506]: E0509 00:29:26.918550 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 00:29:26.919219 kubelet[2506]: E0509 00:29:26.919201 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:26.919252 kubelet[2506]: W0509 00:29:26.919229 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:26.919339 kubelet[2506]: E0509 00:29:26.919311 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:29:26.921760 kubelet[2506]: E0509 00:29:26.921717 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:26.921760 kubelet[2506]: W0509 00:29:26.921746 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:26.923248 kubelet[2506]: E0509 00:29:26.923066 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:29:26.924312 kubelet[2506]: E0509 00:29:26.924286 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:26.924312 kubelet[2506]: W0509 00:29:26.924307 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:26.924393 kubelet[2506]: E0509 00:29:26.924347 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:29:26.924609 kubelet[2506]: E0509 00:29:26.924595 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:26.924790 kubelet[2506]: W0509 00:29:26.924671 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:26.924994 kubelet[2506]: E0509 00:29:26.924981 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:26.925069 kubelet[2506]: W0509 00:29:26.925057 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:26.925163 kubelet[2506]: E0509 00:29:26.924994 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:29:26.925219 kubelet[2506]: E0509 00:29:26.925205 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 00:29:26.925439 kubelet[2506]: E0509 00:29:26.925427 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:26.925642 kubelet[2506]: W0509 00:29:26.925504 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:26.925642 kubelet[2506]: E0509 00:29:26.925548 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:29:26.925800 kubelet[2506]: E0509 00:29:26.925781 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:26.925800 kubelet[2506]: W0509 00:29:26.925797 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:26.925902 kubelet[2506]: E0509 00:29:26.925875 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:29:26.926087 kubelet[2506]: E0509 00:29:26.926070 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:26.926087 kubelet[2506]: W0509 00:29:26.926085 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:26.926154 kubelet[2506]: E0509 00:29:26.926123 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:29:26.926329 kubelet[2506]: E0509 00:29:26.926313 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:26.926365 kubelet[2506]: W0509 00:29:26.926328 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:26.926525 kubelet[2506]: E0509 00:29:26.926464 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:29:26.926633 kubelet[2506]: E0509 00:29:26.926620 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:26.926670 kubelet[2506]: W0509 00:29:26.926632 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:26.926691 kubelet[2506]: E0509 00:29:26.926680 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 00:29:26.927464 kubelet[2506]: E0509 00:29:26.927443 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:26.927464 kubelet[2506]: W0509 00:29:26.927459 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:26.927517 kubelet[2506]: E0509 00:29:26.927501 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:29:26.927717 kubelet[2506]: E0509 00:29:26.927701 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:26.927761 kubelet[2506]: W0509 00:29:26.927716 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:26.927761 kubelet[2506]: E0509 00:29:26.927748 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:29:26.928037 kubelet[2506]: E0509 00:29:26.928021 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:26.928037 kubelet[2506]: W0509 00:29:26.928035 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:26.928100 kubelet[2506]: E0509 00:29:26.928069 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:29:26.928260 kubelet[2506]: E0509 00:29:26.928246 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:26.928260 kubelet[2506]: W0509 00:29:26.928258 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:26.928412 kubelet[2506]: E0509 00:29:26.928298 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:29:26.929163 kubelet[2506]: E0509 00:29:26.929138 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:26.929163 kubelet[2506]: W0509 00:29:26.929159 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:26.929733 kubelet[2506]: E0509 00:29:26.929282 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 00:29:26.929733 kubelet[2506]: E0509 00:29:26.929428 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:26.929733 kubelet[2506]: W0509 00:29:26.929440 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:26.929733 kubelet[2506]: E0509 00:29:26.929487 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:29:26.929733 kubelet[2506]: E0509 00:29:26.929717 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:26.929733 kubelet[2506]: W0509 00:29:26.929727 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:26.929906 kubelet[2506]: E0509 00:29:26.929768 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:29:26.930094 kubelet[2506]: E0509 00:29:26.930078 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:26.930094 kubelet[2506]: W0509 00:29:26.930092 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:26.930316 kubelet[2506]: E0509 00:29:26.930234 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:29:26.930352 kubelet[2506]: E0509 00:29:26.930336 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:26.930352 kubelet[2506]: W0509 00:29:26.930343 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:26.930482 kubelet[2506]: E0509 00:29:26.930464 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:29:26.930615 kubelet[2506]: E0509 00:29:26.930589 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:26.930615 kubelet[2506]: W0509 00:29:26.930599 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:26.930615 kubelet[2506]: E0509 00:29:26.930608 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 00:29:26.936209 kubelet[2506]: E0509 00:29:26.936174 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:26.936209 kubelet[2506]: W0509 00:29:26.936202 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:26.936320 kubelet[2506]: E0509 00:29:26.936223 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:29:27.002377 kubelet[2506]: E0509 00:29:27.002265 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:27.002377 kubelet[2506]: W0509 00:29:27.002287 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:27.002377 kubelet[2506]: E0509 00:29:27.002305 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:29:27.002570 kubelet[2506]: E0509 00:29:27.002551 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:27.002599 kubelet[2506]: W0509 00:29:27.002567 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:27.002599 kubelet[2506]: E0509 00:29:27.002580 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:29:27.002901 kubelet[2506]: E0509 00:29:27.002854 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:27.002901 kubelet[2506]: W0509 00:29:27.002870 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:27.002901 kubelet[2506]: E0509 00:29:27.002882 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:29:27.003167 kubelet[2506]: E0509 00:29:27.003153 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:27.003167 kubelet[2506]: W0509 00:29:27.003165 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:27.003255 kubelet[2506]: E0509 00:29:27.003178 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 9 00:29:27.003744 kubelet[2506]: E0509 00:29:27.003725 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 9 00:29:27.003744 kubelet[2506]: W0509 00:29:27.003739 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 9 00:29:27.003845 kubelet[2506]: E0509 00:29:27.003753 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the identical three-record FlexVolume probe failure (driver-call.go:262 / driver-call.go:149 / plugins.go:691) repeats continuously from 00:29:27.004 through 00:29:27.016; duplicates omitted]
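Note on the repeated failure above: during plugin probing the kubelet executes every driver found under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/<vendor~driver>/ with the argument init and unmarshals its stdout as a JSON status. Here the nodeagent~uds/uds executable is missing, so the call produces empty output, and decoding "" fails with "unexpected end of JSON input". A minimal Go sketch of the init handshake any FlexVolume driver must answer (illustrative only; this is not the missing nodeagent~uds binary):

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the JSON reply the kubelet's FlexVolume prober
// expects on the driver's stdout.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func reply(s driverStatus, code int) {
	out, _ := json.Marshal(s)
	// An empty stdout here is exactly what produces the
	// "unexpected end of JSON input" records in the log above.
	fmt.Println(string(out))
	os.Exit(code)
}

func main() {
	if len(os.Args) < 2 {
		reply(driverStatus{Status: "Failure", Message: "no command"}, 1)
	}
	switch os.Args[1] {
	case "init":
		// Answer the probe; "attach": false tells the kubelet this
		// driver has no separate attach/detach phase.
		reply(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}}, 0)
	default:
		reply(driverStatus{Status: "Not supported"}, 1)
	}
}

Either installing a binary that answers init this way, or removing the stale nodeagent~uds plugin directory, would silence the probe errors; which is correct depends on whether the node agent that owns this driver is meant to run on this host.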
May 9 00:29:27.035340 kubelet[2506]: E0509 00:29:27.035299 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:29:27.035852 containerd[1475]: time="2025-05-09T00:29:27.035799686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8478d55498-4kngc,Uid:87a1f652-aa3a-4f06-a166-c7c01ef13eef,Namespace:calico-system,Attempt:0,}"
May 9 00:29:27.062434 containerd[1475]: time="2025-05-09T00:29:27.061937883Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 9 00:29:27.062434 containerd[1475]: time="2025-05-09T00:29:27.062038314Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 9 00:29:27.062434 containerd[1475]: time="2025-05-09T00:29:27.062052841Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 00:29:27.062434 containerd[1475]: time="2025-05-09T00:29:27.062215700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 00:29:27.080028 kubelet[2506]: E0509 00:29:27.079996 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:29:27.080618 containerd[1475]: time="2025-05-09T00:29:27.080577749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-slqmz,Uid:392d4129-4c1f-4100-b8e7-53df29596887,Namespace:calico-system,Attempt:0,}"
May 9 00:29:27.085068 systemd[1]: Started cri-containerd-2cde7f9f28be1ec84f9e9b7ec80ee79977bb47e0f79257ea1445531084f7ec3c.scope - libcontainer container 2cde7f9f28be1ec84f9e9b7ec80ee79977bb47e0f79257ea1445531084f7ec3c.
May 9 00:29:27.109676 containerd[1475]: time="2025-05-09T00:29:27.108588778Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 9 00:29:27.110111 containerd[1475]: time="2025-05-09T00:29:27.109813173Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 9 00:29:27.110111 containerd[1475]: time="2025-05-09T00:29:27.109838943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 00:29:27.110361 containerd[1475]: time="2025-05-09T00:29:27.110300616Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 00:29:27.133054 systemd[1]: Started cri-containerd-3ff1f024e50a9e4966b1d6c88336ac0e06e9855849373c6ac5660d5cdb745569.scope - libcontainer container 3ff1f024e50a9e4966b1d6c88336ac0e06e9855849373c6ac5660d5cdb745569.
May 9 00:29:27.134158 containerd[1475]: time="2025-05-09T00:29:27.134124826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8478d55498-4kngc,Uid:87a1f652-aa3a-4f06-a166-c7c01ef13eef,Namespace:calico-system,Attempt:0,} returns sandbox id \"2cde7f9f28be1ec84f9e9b7ec80ee79977bb47e0f79257ea1445531084f7ec3c\""
May 9 00:29:27.134803 kubelet[2506]: E0509 00:29:27.134772 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:29:27.136462 containerd[1475]: time="2025-05-09T00:29:27.136265054Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\""
May 9 00:29:27.159244 containerd[1475]: time="2025-05-09T00:29:27.159205713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-slqmz,Uid:392d4129-4c1f-4100-b8e7-53df29596887,Namespace:calico-system,Attempt:0,} returns sandbox id \"3ff1f024e50a9e4966b1d6c88336ac0e06e9855849373c6ac5660d5cdb745569\""
May 9 00:29:27.160010 kubelet[2506]: E0509 00:29:27.159983 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:29:28.008359 kubelet[2506]: E0509 00:29:28.008312 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
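Note on the recurring dns.go:153 record: the kubelet forwards at most three nameservers from the host's /etc/resolv.conf into pod resolv.conf files (matching the classic glibc resolver limit of three), so any extra entries on this host are dropped and only 1.1.1.1 1.0.0.1 8.8.8.8 are applied. A small Go sketch of that trimming rule (illustrative; not the kubelet's actual code path):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// maxNameservers is the kubelet's cap on forwarded resolvers.
const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		// The surplus entries are what the kubelet warns about and
		// silently omits from pod resolv.conf files.
		fmt.Printf("applied: %v omitted: %v\n",
			servers[:maxNameservers], servers[maxNameservers:])
	}
}

Trimming the host resolv.conf to three nameservers makes the warning stop.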
[the same FlexVolume probe-failure triplet repeats continuously from 00:29:28.095 through 00:29:28.101; duplicates omitted]
May 9 00:29:28.331545 kubelet[2506]: E0509 00:29:28.331408 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vj68l" podUID="7888b668-be66-4298-b746-119a722815e9"
May 9 00:29:28.699981 kubelet[2506]: E0509 00:29:28.699925 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
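Note on the pod_workers.go record above: the kubelet reports NetworkReady=false until a CNI network config appears under /etc/cni/net.d, which calico-node's install step writes only after it is up, so pods that need pod networking (such as csi-node-driver-vj68l) are held back while host-network pods keep starting. A toy readiness check in that spirit (illustrative, not kubelet code; a real check would also accept .conf and .json files):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// The kubelet flips NetworkReady once a CNI config exists here.
	matches, err := filepath.Glob("/etc/cni/net.d/*.conflist")
	if err != nil {
		panic(err)
	}
	if len(matches) == 0 {
		fmt.Println("NetworkReady=false: cni plugin not initialized")
		os.Exit(1)
	}
	fmt.Println("NetworkReady=true:", matches)
}

Once calico-node installs its conflist, the NetworkPluginNotReady errors stop and the held pods are synced.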
[the same FlexVolume probe-failure triplet repeats continuously from 00:29:28.707 through 00:29:28.712; duplicates omitted]
May 9 00:29:30.091937 containerd[1475]: time="2025-05-09T00:29:30.091859292Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 00:29:30.093032 containerd[1475]: time="2025-05-09T00:29:30.092961733Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870"
May 9 00:29:30.094398 containerd[1475]: time="2025-05-09T00:29:30.094347932Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 00:29:30.097965 containerd[1475]: time="2025-05-09T00:29:30.097904629Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 00:29:30.098464 containerd[1475]: time="2025-05-09T00:29:30.098425423Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 2.962133177s"
May 9 00:29:30.098464 containerd[1475]: time="2025-05-09T00:29:30.098454096Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\""
May 9 00:29:30.099490 containerd[1475]: time="2025-05-09T00:29:30.099449877Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\""
May 9 00:29:30.106755 containerd[1475]: time="2025-05-09T00:29:30.106707124Z" level=info msg="CreateContainer within sandbox \"2cde7f9f28be1ec84f9e9b7ec80ee79977bb47e0f79257ea1445531084f7ec3c\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
May 9 00:29:30.127214 containerd[1475]: time="2025-05-09T00:29:30.127159650Z" level=info msg="CreateContainer within sandbox \"2cde7f9f28be1ec84f9e9b7ec80ee79977bb47e0f79257ea1445531084f7ec3c\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9e9d2dada98fca55de85f3127c5ab5963a560d868487cce85438cafa3a878cbd\""
May 9 00:29:30.127878 containerd[1475]: time="2025-05-09T00:29:30.127828373Z" level=info msg="StartContainer for \"9e9d2dada98fca55de85f3127c5ab5963a560d868487cce85438cafa3a878cbd\""
May 9 00:29:30.159124 systemd[1]: Started cri-containerd-9e9d2dada98fca55de85f3127c5ab5963a560d868487cce85438cafa3a878cbd.scope - libcontainer container 9e9d2dada98fca55de85f3127c5ab5963a560d868487cce85438cafa3a878cbd.
May 9 00:29:30.200548 containerd[1475]: time="2025-05-09T00:29:30.200494453Z" level=info msg="StartContainer for \"9e9d2dada98fca55de85f3127c5ab5963a560d868487cce85438cafa3a878cbd\" returns successfully"
May 9 00:29:30.332115 kubelet[2506]: E0509 00:29:30.332061 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vj68l" podUID="7888b668-be66-4298-b746-119a722815e9"
May 9 00:29:30.380051 kubelet[2506]: E0509 00:29:30.379938 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:29:30.388578 kubelet[2506]: I0509 00:29:30.388528 2506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-8478d55498-4kngc" podStartSLOduration=1.425389088 podStartE2EDuration="4.388513338s" podCreationTimestamp="2025-05-09 00:29:26 +0000 UTC" firstStartedPulling="2025-05-09 00:29:27.136054606 +0000 UTC m=+12.891717872" lastFinishedPulling="2025-05-09 00:29:30.099178856 +0000 UTC m=+15.854842122" observedRunningTime="2025-05-09 00:29:30.38843538 +0000 UTC m=+16.144098646" watchObservedRunningTime="2025-05-09 00:29:30.388513338 +0000 UTC m=+16.144176604"
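Note: the pod_startup_latency_tracker record above is internally consistent and decomposes neatly using the monotonic (m=) offsets it carries:

  image-pull time     = 15.854842122 - 12.891717872 = 2.963124250 s   (lastFinishedPulling - firstStartedPulling)
  podStartSLOduration = 4.388513338  - 2.963124250  = 1.425389088 s   (podStartE2EDuration - image-pull time)

So the SLO figure is end-to-end pod startup with image-pull time excluded, which is why it is so much smaller than the 4.39 s E2E figure. containerd's own timing for the same pull (2.962133177s, above) differs from the kubelet's by about a millisecond, presumably the CRI round trip.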
[the same FlexVolume probe-failure triplet repeats continuously from 00:29:30.422 through 00:29:30.429; duplicates omitted, save the final record, which continues below]
May 9 00:29:30.429583 kubelet[2506]: E0509 00:29:30.429542 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 00:29:31.383121 kubelet[2506]: I0509 00:29:31.383068 2506 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 9 00:29:31.383599 kubelet[2506]: E0509 00:29:31.383458 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:29:31.430864 kubelet[2506]: E0509 00:29:31.430794 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:31.430864 kubelet[2506]: W0509 00:29:31.430844 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:31.430864 kubelet[2506]: E0509 00:29:31.430867 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:29:31.434032 kubelet[2506]: E0509 00:29:31.433982 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:31.434032 kubelet[2506]: W0509 00:29:31.434021 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:31.434032 kubelet[2506]: E0509 00:29:31.434034 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:29:31.434309 kubelet[2506]: E0509 00:29:31.434283 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:31.434309 kubelet[2506]: W0509 00:29:31.434298 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:31.434309 kubelet[2506]: E0509 00:29:31.434307 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:29:31.434557 kubelet[2506]: E0509 00:29:31.434533 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:31.434557 kubelet[2506]: W0509 00:29:31.434548 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:31.434557 kubelet[2506]: E0509 00:29:31.434556 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 00:29:31.434856 kubelet[2506]: E0509 00:29:31.434819 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:31.434856 kubelet[2506]: W0509 00:29:31.434835 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:31.434976 kubelet[2506]: E0509 00:29:31.434860 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:29:31.435597 kubelet[2506]: E0509 00:29:31.435067 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:31.435597 kubelet[2506]: W0509 00:29:31.435078 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:31.435597 kubelet[2506]: E0509 00:29:31.435086 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:29:31.435597 kubelet[2506]: E0509 00:29:31.435292 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:31.435597 kubelet[2506]: W0509 00:29:31.435302 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:31.435597 kubelet[2506]: E0509 00:29:31.435311 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:29:31.435597 kubelet[2506]: E0509 00:29:31.435535 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:31.435597 kubelet[2506]: W0509 00:29:31.435543 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:31.435597 kubelet[2506]: E0509 00:29:31.435552 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:29:31.435906 kubelet[2506]: E0509 00:29:31.435789 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:31.435906 kubelet[2506]: W0509 00:29:31.435798 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:31.435906 kubelet[2506]: E0509 00:29:31.435806 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 00:29:31.436058 kubelet[2506]: E0509 00:29:31.436034 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:31.436058 kubelet[2506]: W0509 00:29:31.436050 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:31.436139 kubelet[2506]: E0509 00:29:31.436067 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:29:31.436295 kubelet[2506]: E0509 00:29:31.436270 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:31.436295 kubelet[2506]: W0509 00:29:31.436285 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:31.436295 kubelet[2506]: E0509 00:29:31.436293 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:29:31.436932 kubelet[2506]: E0509 00:29:31.436515 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:31.436932 kubelet[2506]: W0509 00:29:31.436527 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:31.436932 kubelet[2506]: E0509 00:29:31.436535 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:29:31.436932 kubelet[2506]: E0509 00:29:31.436807 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:31.436932 kubelet[2506]: W0509 00:29:31.436815 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:31.436932 kubelet[2506]: E0509 00:29:31.436824 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:29:31.437155 kubelet[2506]: E0509 00:29:31.437051 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:31.437155 kubelet[2506]: W0509 00:29:31.437060 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:31.437155 kubelet[2506]: E0509 00:29:31.437068 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 00:29:31.437304 kubelet[2506]: E0509 00:29:31.437278 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:31.437304 kubelet[2506]: W0509 00:29:31.437293 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:31.437304 kubelet[2506]: E0509 00:29:31.437301 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:29:31.437712 kubelet[2506]: E0509 00:29:31.437670 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:31.437712 kubelet[2506]: W0509 00:29:31.437687 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:31.437712 kubelet[2506]: E0509 00:29:31.437695 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:29:31.438056 kubelet[2506]: E0509 00:29:31.438029 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:31.438056 kubelet[2506]: W0509 00:29:31.438044 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:31.438056 kubelet[2506]: E0509 00:29:31.438054 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:29:31.442926 kubelet[2506]: E0509 00:29:31.441179 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:31.442926 kubelet[2506]: W0509 00:29:31.441198 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:31.442926 kubelet[2506]: E0509 00:29:31.441253 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:29:31.442926 kubelet[2506]: E0509 00:29:31.441437 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:31.442926 kubelet[2506]: W0509 00:29:31.441446 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:31.442926 kubelet[2506]: E0509 00:29:31.441481 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 00:29:31.442926 kubelet[2506]: E0509 00:29:31.441632 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:31.442926 kubelet[2506]: W0509 00:29:31.441640 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:31.442926 kubelet[2506]: E0509 00:29:31.441678 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:29:31.442926 kubelet[2506]: E0509 00:29:31.441833 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:31.443317 kubelet[2506]: W0509 00:29:31.441841 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:31.443317 kubelet[2506]: E0509 00:29:31.441861 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:29:31.443317 kubelet[2506]: E0509 00:29:31.442069 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:31.443317 kubelet[2506]: W0509 00:29:31.442076 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:31.443317 kubelet[2506]: E0509 00:29:31.442093 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:29:31.443317 kubelet[2506]: E0509 00:29:31.442292 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:31.443317 kubelet[2506]: W0509 00:29:31.442300 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:31.443317 kubelet[2506]: E0509 00:29:31.442312 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:29:31.443317 kubelet[2506]: E0509 00:29:31.442542 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:31.443317 kubelet[2506]: W0509 00:29:31.442550 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:31.443908 kubelet[2506]: E0509 00:29:31.442571 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 00:29:31.443908 kubelet[2506]: E0509 00:29:31.443039 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:31.443908 kubelet[2506]: W0509 00:29:31.443049 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:31.443908 kubelet[2506]: E0509 00:29:31.443063 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:29:31.443908 kubelet[2506]: E0509 00:29:31.443245 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:31.443908 kubelet[2506]: W0509 00:29:31.443253 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:31.443908 kubelet[2506]: E0509 00:29:31.443308 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:29:31.443908 kubelet[2506]: E0509 00:29:31.443624 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:31.443908 kubelet[2506]: W0509 00:29:31.443632 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:31.443908 kubelet[2506]: E0509 00:29:31.443666 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:29:31.444240 kubelet[2506]: E0509 00:29:31.443839 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:31.444240 kubelet[2506]: W0509 00:29:31.443848 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:31.444240 kubelet[2506]: E0509 00:29:31.443865 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:29:31.444332 kubelet[2506]: E0509 00:29:31.444273 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:31.444332 kubelet[2506]: W0509 00:29:31.444289 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:31.444332 kubelet[2506]: E0509 00:29:31.444317 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 00:29:31.444771 kubelet[2506]: E0509 00:29:31.444737 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:31.444771 kubelet[2506]: W0509 00:29:31.444756 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:31.444771 kubelet[2506]: E0509 00:29:31.444769 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:29:31.445076 kubelet[2506]: E0509 00:29:31.445048 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:31.445076 kubelet[2506]: W0509 00:29:31.445065 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:31.445171 kubelet[2506]: E0509 00:29:31.445124 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:29:31.447421 kubelet[2506]: E0509 00:29:31.447388 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:31.447421 kubelet[2506]: W0509 00:29:31.447407 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:31.447421 kubelet[2506]: E0509 00:29:31.447421 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 00:29:31.447647 kubelet[2506]: E0509 00:29:31.447620 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 00:29:31.447647 kubelet[2506]: W0509 00:29:31.447637 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 00:29:31.447647 kubelet[2506]: E0509 00:29:31.447645 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 00:29:31.624093 containerd[1475]: time="2025-05-09T00:29:31.624017411Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:29:31.624767 containerd[1475]: time="2025-05-09T00:29:31.624704669Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937" May 9 00:29:31.626181 containerd[1475]: time="2025-05-09T00:29:31.626134248Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:29:31.628293 containerd[1475]: time="2025-05-09T00:29:31.628244071Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:29:31.628861 containerd[1475]: time="2025-05-09T00:29:31.628810019Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 1.52932278s" May 9 00:29:31.628861 containerd[1475]: time="2025-05-09T00:29:31.628855615Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" May 9 00:29:31.631177 containerd[1475]: time="2025-05-09T00:29:31.631130018Z" level=info msg="CreateContainer within sandbox \"3ff1f024e50a9e4966b1d6c88336ac0e06e9855849373c6ac5660d5cdb745569\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 9 00:29:31.651215 containerd[1475]: time="2025-05-09T00:29:31.651142599Z" level=info msg="CreateContainer within sandbox \"3ff1f024e50a9e4966b1d6c88336ac0e06e9855849373c6ac5660d5cdb745569\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5a39aaf6a15b4fa7276ad4d6ccfde1cf8c00f8660979e188a6e8421df79ff613\"" May 9 00:29:31.651797 containerd[1475]: time="2025-05-09T00:29:31.651736179Z" level=info msg="StartContainer for \"5a39aaf6a15b4fa7276ad4d6ccfde1cf8c00f8660979e188a6e8421df79ff613\"" May 9 00:29:31.695135 systemd[1]: Started cri-containerd-5a39aaf6a15b4fa7276ad4d6ccfde1cf8c00f8660979e188a6e8421df79ff613.scope - libcontainer container 5a39aaf6a15b4fa7276ad4d6ccfde1cf8c00f8660979e188a6e8421df79ff613. May 9 00:29:31.743805 systemd[1]: cri-containerd-5a39aaf6a15b4fa7276ad4d6ccfde1cf8c00f8660979e188a6e8421df79ff613.scope: Deactivated successfully. 
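The repeated driver-call.go and plugins.go errors above and the pod2daemon-flexvol image pulled in this entry are two views of the same dependency: kubelet's FlexVolume prober execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the single argument init and json-unmarshals whatever the binary prints to stdout. Until the flexvol-driver container from that image installs the uds binary, the exec fails, stdout stays empty, and unmarshalling an empty string yields exactly "unexpected end of JSON input". Below is a minimal sketch of the call contract only, a hypothetical driver rather than the real uds binary:

```go
// flexvol_sketch.go -- hypothetical FlexVolume driver, NOT the real
// nodeagent~uds binary. It only illustrates the contract kubelet's
// driver-call.go enforces: the driver is exec'd with a subcommand
// ("init" during plugin probing) and must print one JSON status object
// to stdout. An exec failure leaves stdout empty, and unmarshalling ""
// is the "unexpected end of JSON input" seen in the log above.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the documented FlexVolume response shape.
type driverStatus struct {
	Status       string          `json:"status"` // "Success", "Failure" or "Not supported"
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) < 2 {
		fmt.Println(`{"status":"Failure","message":"no command given"}`)
		os.Exit(1)
	}
	var resp driverStatus
	switch os.Args[1] {
	case "init":
		// Report success and opt out of attach/detach support.
		resp = driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}}
	default:
		// Any FlexVolume command this sketch does not implement.
		resp = driverStatus{Status: "Not supported"}
	}
	out, err := json.Marshal(resp)
	if err != nil {
		os.Exit(1)
	}
	fmt.Println(string(out))
}
```

Note that an unimplemented command is still expected to answer with status "Not supported" rather than exiting silently; it is the absent binary, not a failing one, that produces the unmarshal error above.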
May 9 00:29:31.799991 containerd[1475]: time="2025-05-09T00:29:31.799920964Z" level=info msg="StartContainer for \"5a39aaf6a15b4fa7276ad4d6ccfde1cf8c00f8660979e188a6e8421df79ff613\" returns successfully" May 9 00:29:31.986868 containerd[1475]: time="2025-05-09T00:29:31.986688506Z" level=info msg="shim disconnected" id=5a39aaf6a15b4fa7276ad4d6ccfde1cf8c00f8660979e188a6e8421df79ff613 namespace=k8s.io May 9 00:29:31.986868 containerd[1475]: time="2025-05-09T00:29:31.986744772Z" level=warning msg="cleaning up after shim disconnected" id=5a39aaf6a15b4fa7276ad4d6ccfde1cf8c00f8660979e188a6e8421df79ff613 namespace=k8s.io May 9 00:29:31.986868 containerd[1475]: time="2025-05-09T00:29:31.986753698Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:29:32.104520 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5a39aaf6a15b4fa7276ad4d6ccfde1cf8c00f8660979e188a6e8421df79ff613-rootfs.mount: Deactivated successfully. May 9 00:29:32.331432 kubelet[2506]: E0509 00:29:32.331280 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vj68l" podUID="7888b668-be66-4298-b746-119a722815e9" May 9 00:29:32.384631 kubelet[2506]: E0509 00:29:32.384594 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:29:32.385424 containerd[1475]: time="2025-05-09T00:29:32.385373157Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 9 00:29:34.331623 kubelet[2506]: E0509 00:29:34.331570 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vj68l" podUID="7888b668-be66-4298-b746-119a722815e9" May 9 00:29:36.331309 kubelet[2506]: E0509 00:29:36.331244 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vj68l" podUID="7888b668-be66-4298-b746-119a722815e9" May 9 00:29:36.858454 containerd[1475]: time="2025-05-09T00:29:36.858393681Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:29:36.859288 containerd[1475]: time="2025-05-09T00:29:36.859230388Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" May 9 00:29:36.860478 containerd[1475]: time="2025-05-09T00:29:36.860444585Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:29:36.862758 containerd[1475]: time="2025-05-09T00:29:36.862665972Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:29:36.863553 containerd[1475]: time="2025-05-09T00:29:36.863509832Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" 
with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 4.478088926s" May 9 00:29:36.863553 containerd[1475]: time="2025-05-09T00:29:36.863548975Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" May 9 00:29:36.865509 containerd[1475]: time="2025-05-09T00:29:36.865480736Z" level=info msg="CreateContainer within sandbox \"3ff1f024e50a9e4966b1d6c88336ac0e06e9855849373c6ac5660d5cdb745569\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 9 00:29:36.881339 containerd[1475]: time="2025-05-09T00:29:36.881288085Z" level=info msg="CreateContainer within sandbox \"3ff1f024e50a9e4966b1d6c88336ac0e06e9855849373c6ac5660d5cdb745569\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"bbb4792aec94b1eadf3ac762a5f64c9fe979b2e0cd090bdb8d8d0b5e02ddc8ba\"" May 9 00:29:36.881958 containerd[1475]: time="2025-05-09T00:29:36.881864471Z" level=info msg="StartContainer for \"bbb4792aec94b1eadf3ac762a5f64c9fe979b2e0cd090bdb8d8d0b5e02ddc8ba\"" May 9 00:29:36.932200 systemd[1]: Started cri-containerd-bbb4792aec94b1eadf3ac762a5f64c9fe979b2e0cd090bdb8d8d0b5e02ddc8ba.scope - libcontainer container bbb4792aec94b1eadf3ac762a5f64c9fe979b2e0cd090bdb8d8d0b5e02ddc8ba. May 9 00:29:36.970504 containerd[1475]: time="2025-05-09T00:29:36.970430502Z" level=info msg="StartContainer for \"bbb4792aec94b1eadf3ac762a5f64c9fe979b2e0cd090bdb8d8d0b5e02ddc8ba\" returns successfully" May 9 00:29:37.396120 kubelet[2506]: E0509 00:29:37.396078 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:29:38.287255 systemd[1]: cri-containerd-bbb4792aec94b1eadf3ac762a5f64c9fe979b2e0cd090bdb8d8d0b5e02ddc8ba.scope: Deactivated successfully. May 9 00:29:38.310093 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bbb4792aec94b1eadf3ac762a5f64c9fe979b2e0cd090bdb8d8d0b5e02ddc8ba-rootfs.mount: Deactivated successfully. May 9 00:29:38.332093 kubelet[2506]: E0509 00:29:38.332010 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vj68l" podUID="7888b668-be66-4298-b746-119a722815e9" May 9 00:29:38.368476 kubelet[2506]: I0509 00:29:38.368418 2506 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 9 00:29:38.397300 kubelet[2506]: E0509 00:29:38.397241 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:29:38.723850 systemd[1]: Created slice kubepods-besteffort-podfcb70e53_0375_414e_8457_33f88270eeb6.slice - libcontainer container kubepods-besteffort-podfcb70e53_0375_414e_8457_33f88270eeb6.slice. May 9 00:29:38.736640 systemd[1]: Created slice kubepods-besteffort-pod8e3a567d_acd2_4bb0_bcdc_d60762255110.slice - libcontainer container kubepods-besteffort-pod8e3a567d_acd2_4bb0_bcdc_d60762255110.slice. 
May 9 00:29:38.742124 systemd[1]: Created slice kubepods-burstable-pod035e105a_b89a_4204_9406_d96aaeb0e048.slice - libcontainer container kubepods-burstable-pod035e105a_b89a_4204_9406_d96aaeb0e048.slice. May 9 00:29:38.747763 systemd[1]: Created slice kubepods-burstable-pod75434938_61cf_41bc_bc17_24399a2f1b29.slice - libcontainer container kubepods-burstable-pod75434938_61cf_41bc_bc17_24399a2f1b29.slice. May 9 00:29:38.752912 systemd[1]: Created slice kubepods-besteffort-podc912a440_377c_4394_b47f_ba521b174b03.slice - libcontainer container kubepods-besteffort-podc912a440_377c_4394_b47f_ba521b174b03.slice. May 9 00:29:38.893596 kubelet[2506]: I0509 00:29:38.893527 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4p2x2\" (UniqueName: \"kubernetes.io/projected/c912a440-377c-4394-b47f-ba521b174b03-kube-api-access-4p2x2\") pod \"calico-kube-controllers-78559bdf4b-jb9rh\" (UID: \"c912a440-377c-4394-b47f-ba521b174b03\") " pod="calico-system/calico-kube-controllers-78559bdf4b-jb9rh" May 9 00:29:38.893596 kubelet[2506]: I0509 00:29:38.893577 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/035e105a-b89a-4204-9406-d96aaeb0e048-config-volume\") pod \"coredns-6f6b679f8f-7z6wr\" (UID: \"035e105a-b89a-4204-9406-d96aaeb0e048\") " pod="kube-system/coredns-6f6b679f8f-7z6wr" May 9 00:29:38.893596 kubelet[2506]: I0509 00:29:38.893597 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75434938-61cf-41bc-bc17-24399a2f1b29-config-volume\") pod \"coredns-6f6b679f8f-7htnm\" (UID: \"75434938-61cf-41bc-bc17-24399a2f1b29\") " pod="kube-system/coredns-6f6b679f8f-7htnm" May 9 00:29:38.893596 kubelet[2506]: I0509 00:29:38.893614 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhpt4\" (UniqueName: \"kubernetes.io/projected/035e105a-b89a-4204-9406-d96aaeb0e048-kube-api-access-qhpt4\") pod \"coredns-6f6b679f8f-7z6wr\" (UID: \"035e105a-b89a-4204-9406-d96aaeb0e048\") " pod="kube-system/coredns-6f6b679f8f-7z6wr" May 9 00:29:38.893914 kubelet[2506]: I0509 00:29:38.893635 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fcb70e53-0375-414e-8457-33f88270eeb6-calico-apiserver-certs\") pod \"calico-apiserver-54f6f995b9-66cpx\" (UID: \"fcb70e53-0375-414e-8457-33f88270eeb6\") " pod="calico-apiserver/calico-apiserver-54f6f995b9-66cpx" May 9 00:29:38.893914 kubelet[2506]: I0509 00:29:38.893653 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8e3a567d-acd2-4bb0-bcdc-d60762255110-calico-apiserver-certs\") pod \"calico-apiserver-54f6f995b9-qzmd8\" (UID: \"8e3a567d-acd2-4bb0-bcdc-d60762255110\") " pod="calico-apiserver/calico-apiserver-54f6f995b9-qzmd8" May 9 00:29:38.893914 kubelet[2506]: I0509 00:29:38.893683 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgxs6\" (UniqueName: \"kubernetes.io/projected/8e3a567d-acd2-4bb0-bcdc-d60762255110-kube-api-access-tgxs6\") pod \"calico-apiserver-54f6f995b9-qzmd8\" (UID: \"8e3a567d-acd2-4bb0-bcdc-d60762255110\") " 
pod="calico-apiserver/calico-apiserver-54f6f995b9-qzmd8" May 9 00:29:38.893914 kubelet[2506]: I0509 00:29:38.893704 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c912a440-377c-4394-b47f-ba521b174b03-tigera-ca-bundle\") pod \"calico-kube-controllers-78559bdf4b-jb9rh\" (UID: \"c912a440-377c-4394-b47f-ba521b174b03\") " pod="calico-system/calico-kube-controllers-78559bdf4b-jb9rh" May 9 00:29:38.893914 kubelet[2506]: I0509 00:29:38.893718 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tqjx\" (UniqueName: \"kubernetes.io/projected/75434938-61cf-41bc-bc17-24399a2f1b29-kube-api-access-9tqjx\") pod \"coredns-6f6b679f8f-7htnm\" (UID: \"75434938-61cf-41bc-bc17-24399a2f1b29\") " pod="kube-system/coredns-6f6b679f8f-7htnm" May 9 00:29:38.894045 kubelet[2506]: I0509 00:29:38.893746 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hb6s\" (UniqueName: \"kubernetes.io/projected/fcb70e53-0375-414e-8457-33f88270eeb6-kube-api-access-2hb6s\") pod \"calico-apiserver-54f6f995b9-66cpx\" (UID: \"fcb70e53-0375-414e-8457-33f88270eeb6\") " pod="calico-apiserver/calico-apiserver-54f6f995b9-66cpx" May 9 00:29:39.130717 containerd[1475]: time="2025-05-09T00:29:39.130505300Z" level=info msg="shim disconnected" id=bbb4792aec94b1eadf3ac762a5f64c9fe979b2e0cd090bdb8d8d0b5e02ddc8ba namespace=k8s.io May 9 00:29:39.130717 containerd[1475]: time="2025-05-09T00:29:39.130565522Z" level=warning msg="cleaning up after shim disconnected" id=bbb4792aec94b1eadf3ac762a5f64c9fe979b2e0cd090bdb8d8d0b5e02ddc8ba namespace=k8s.io May 9 00:29:39.130717 containerd[1475]: time="2025-05-09T00:29:39.130576643Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:29:39.403627 kubelet[2506]: E0509 00:29:39.403575 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:29:39.410250 containerd[1475]: time="2025-05-09T00:29:39.410197290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 9 00:29:39.414905 containerd[1475]: time="2025-05-09T00:29:39.414854580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54f6f995b9-66cpx,Uid:fcb70e53-0375-414e-8457-33f88270eeb6,Namespace:calico-apiserver,Attempt:0,}" May 9 00:29:39.416868 kubelet[2506]: E0509 00:29:39.416643 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:29:39.417607 containerd[1475]: time="2025-05-09T00:29:39.417220254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-7z6wr,Uid:035e105a-b89a-4204-9406-d96aaeb0e048,Namespace:kube-system,Attempt:0,}" May 9 00:29:39.417801 containerd[1475]: time="2025-05-09T00:29:39.417757656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54f6f995b9-qzmd8,Uid:8e3a567d-acd2-4bb0-bcdc-d60762255110,Namespace:calico-apiserver,Attempt:0,}" May 9 00:29:39.421690 containerd[1475]: time="2025-05-09T00:29:39.421615039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78559bdf4b-jb9rh,Uid:c912a440-377c-4394-b47f-ba521b174b03,Namespace:calico-system,Attempt:0,}" May 9 00:29:39.422570 kubelet[2506]: E0509 
00:29:39.422531 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:29:39.423054 containerd[1475]: time="2025-05-09T00:29:39.422967917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-7htnm,Uid:75434938-61cf-41bc-bc17-24399a2f1b29,Namespace:kube-system,Attempt:0,}" May 9 00:29:39.562396 containerd[1475]: time="2025-05-09T00:29:39.562285021Z" level=error msg="Failed to destroy network for sandbox \"f6c5edeb788588ab9dba05ed96471478728efaf22402c4f4c944ca9830ba2fce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:29:39.564686 containerd[1475]: time="2025-05-09T00:29:39.564373734Z" level=error msg="encountered an error cleaning up failed sandbox \"f6c5edeb788588ab9dba05ed96471478728efaf22402c4f4c944ca9830ba2fce\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:29:39.564686 containerd[1475]: time="2025-05-09T00:29:39.564459796Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54f6f995b9-66cpx,Uid:fcb70e53-0375-414e-8457-33f88270eeb6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f6c5edeb788588ab9dba05ed96471478728efaf22402c4f4c944ca9830ba2fce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:29:39.566261 containerd[1475]: time="2025-05-09T00:29:39.566205824Z" level=error msg="Failed to destroy network for sandbox \"d43be5324c6a254b98b87597587cd50ccb0abb0eb38ebf7e419b8973d5c9f9b3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:29:39.566802 containerd[1475]: time="2025-05-09T00:29:39.566767211Z" level=error msg="encountered an error cleaning up failed sandbox \"d43be5324c6a254b98b87597587cd50ccb0abb0eb38ebf7e419b8973d5c9f9b3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:29:39.566870 containerd[1475]: time="2025-05-09T00:29:39.566851600Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-7z6wr,Uid:035e105a-b89a-4204-9406-d96aaeb0e048,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d43be5324c6a254b98b87597587cd50ccb0abb0eb38ebf7e419b8973d5c9f9b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:29:39.567368 kubelet[2506]: E0509 00:29:39.567168 2506 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d43be5324c6a254b98b87597587cd50ccb0abb0eb38ebf7e419b8973d5c9f9b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:29:39.567368 kubelet[2506]: E0509 00:29:39.567184 2506 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6c5edeb788588ab9dba05ed96471478728efaf22402c4f4c944ca9830ba2fce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:29:39.567368 kubelet[2506]: E0509 00:29:39.567242 2506 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d43be5324c6a254b98b87597587cd50ccb0abb0eb38ebf7e419b8973d5c9f9b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-7z6wr" May 9 00:29:39.567368 kubelet[2506]: E0509 00:29:39.567267 2506 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6c5edeb788588ab9dba05ed96471478728efaf22402c4f4c944ca9830ba2fce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54f6f995b9-66cpx" May 9 00:29:39.568039 containerd[1475]: time="2025-05-09T00:29:39.567992729Z" level=error msg="Failed to destroy network for sandbox \"dc2d36004e5f5a123829046dd60eb3b3c7f628719b87a984e0b9e912fbb74086\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:29:39.568477 containerd[1475]: time="2025-05-09T00:29:39.568451372Z" level=error msg="encountered an error cleaning up failed sandbox \"dc2d36004e5f5a123829046dd60eb3b3c7f628719b87a984e0b9e912fbb74086\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:29:39.568584 containerd[1475]: time="2025-05-09T00:29:39.568562512Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54f6f995b9-qzmd8,Uid:8e3a567d-acd2-4bb0-bcdc-d60762255110,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dc2d36004e5f5a123829046dd60eb3b3c7f628719b87a984e0b9e912fbb74086\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:29:39.568931 kubelet[2506]: E0509 00:29:39.568848 2506 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc2d36004e5f5a123829046dd60eb3b3c7f628719b87a984e0b9e912fbb74086\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:29:39.568931 kubelet[2506]: E0509 00:29:39.568901 2506 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"dc2d36004e5f5a123829046dd60eb3b3c7f628719b87a984e0b9e912fbb74086\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54f6f995b9-qzmd8" May 9 00:29:39.573280 kubelet[2506]: E0509 00:29:39.573242 2506 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc2d36004e5f5a123829046dd60eb3b3c7f628719b87a984e0b9e912fbb74086\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54f6f995b9-qzmd8" May 9 00:29:39.573402 kubelet[2506]: E0509 00:29:39.573267 2506 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d43be5324c6a254b98b87597587cd50ccb0abb0eb38ebf7e419b8973d5c9f9b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-7z6wr" May 9 00:29:39.573402 kubelet[2506]: E0509 00:29:39.573343 2506 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6c5edeb788588ab9dba05ed96471478728efaf22402c4f4c944ca9830ba2fce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54f6f995b9-66cpx" May 9 00:29:39.574409 kubelet[2506]: E0509 00:29:39.573938 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-54f6f995b9-66cpx_calico-apiserver(fcb70e53-0375-414e-8457-33f88270eeb6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-54f6f995b9-66cpx_calico-apiserver(fcb70e53-0375-414e-8457-33f88270eeb6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f6c5edeb788588ab9dba05ed96471478728efaf22402c4f4c944ca9830ba2fce\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54f6f995b9-66cpx" podUID="fcb70e53-0375-414e-8457-33f88270eeb6" May 9 00:29:39.574409 kubelet[2506]: E0509 00:29:39.574016 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-7z6wr_kube-system(035e105a-b89a-4204-9406-d96aaeb0e048)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-7z6wr_kube-system(035e105a-b89a-4204-9406-d96aaeb0e048)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d43be5324c6a254b98b87597587cd50ccb0abb0eb38ebf7e419b8973d5c9f9b3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-7z6wr" podUID="035e105a-b89a-4204-9406-d96aaeb0e048" May 9 00:29:39.574530 kubelet[2506]: E0509 00:29:39.574499 2506 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"calico-apiserver-54f6f995b9-qzmd8_calico-apiserver(8e3a567d-acd2-4bb0-bcdc-d60762255110)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-54f6f995b9-qzmd8_calico-apiserver(8e3a567d-acd2-4bb0-bcdc-d60762255110)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dc2d36004e5f5a123829046dd60eb3b3c7f628719b87a984e0b9e912fbb74086\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54f6f995b9-qzmd8" podUID="8e3a567d-acd2-4bb0-bcdc-d60762255110" May 9 00:29:39.578704 containerd[1475]: time="2025-05-09T00:29:39.578646519Z" level=error msg="Failed to destroy network for sandbox \"41762b40d8d344c60c8a644dcd6d495ff65c3ad2bd643d088969279b7d6bbec9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:29:39.579250 containerd[1475]: time="2025-05-09T00:29:39.579184542Z" level=error msg="encountered an error cleaning up failed sandbox \"41762b40d8d344c60c8a644dcd6d495ff65c3ad2bd643d088969279b7d6bbec9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:29:39.579587 containerd[1475]: time="2025-05-09T00:29:39.579262819Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-7htnm,Uid:75434938-61cf-41bc-bc17-24399a2f1b29,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"41762b40d8d344c60c8a644dcd6d495ff65c3ad2bd643d088969279b7d6bbec9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:29:39.579826 kubelet[2506]: E0509 00:29:39.579790 2506 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41762b40d8d344c60c8a644dcd6d495ff65c3ad2bd643d088969279b7d6bbec9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:29:39.579902 kubelet[2506]: E0509 00:29:39.579846 2506 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41762b40d8d344c60c8a644dcd6d495ff65c3ad2bd643d088969279b7d6bbec9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-7htnm" May 9 00:29:39.579902 kubelet[2506]: E0509 00:29:39.579872 2506 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41762b40d8d344c60c8a644dcd6d495ff65c3ad2bd643d088969279b7d6bbec9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-7htnm" May 9 00:29:39.580008 
kubelet[2506]: E0509 00:29:39.579973 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-7htnm_kube-system(75434938-61cf-41bc-bc17-24399a2f1b29)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-7htnm_kube-system(75434938-61cf-41bc-bc17-24399a2f1b29)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"41762b40d8d344c60c8a644dcd6d495ff65c3ad2bd643d088969279b7d6bbec9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-7htnm" podUID="75434938-61cf-41bc-bc17-24399a2f1b29" May 9 00:29:39.580468 containerd[1475]: time="2025-05-09T00:29:39.580300133Z" level=error msg="Failed to destroy network for sandbox \"a6dfa11b3cb348dbd5ddfda295256951d74f7e0d84f01ef320fc31acbe7826c4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:29:39.580758 containerd[1475]: time="2025-05-09T00:29:39.580731996Z" level=error msg="encountered an error cleaning up failed sandbox \"a6dfa11b3cb348dbd5ddfda295256951d74f7e0d84f01ef320fc31acbe7826c4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:29:39.580825 containerd[1475]: time="2025-05-09T00:29:39.580769757Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78559bdf4b-jb9rh,Uid:c912a440-377c-4394-b47f-ba521b174b03,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a6dfa11b3cb348dbd5ddfda295256951d74f7e0d84f01ef320fc31acbe7826c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:29:39.580988 kubelet[2506]: E0509 00:29:39.580950 2506 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6dfa11b3cb348dbd5ddfda295256951d74f7e0d84f01ef320fc31acbe7826c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:29:39.581042 kubelet[2506]: E0509 00:29:39.581018 2506 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6dfa11b3cb348dbd5ddfda295256951d74f7e0d84f01ef320fc31acbe7826c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78559bdf4b-jb9rh" May 9 00:29:39.581071 kubelet[2506]: E0509 00:29:39.581043 2506 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6dfa11b3cb348dbd5ddfda295256951d74f7e0d84f01ef320fc31acbe7826c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/calico-kube-controllers-78559bdf4b-jb9rh" May 9 00:29:39.581135 kubelet[2506]: E0509 00:29:39.581087 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-78559bdf4b-jb9rh_calico-system(c912a440-377c-4394-b47f-ba521b174b03)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-78559bdf4b-jb9rh_calico-system(c912a440-377c-4394-b47f-ba521b174b03)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a6dfa11b3cb348dbd5ddfda295256951d74f7e0d84f01ef320fc31acbe7826c4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78559bdf4b-jb9rh" podUID="c912a440-377c-4394-b47f-ba521b174b03" May 9 00:29:40.338802 systemd[1]: Created slice kubepods-besteffort-pod7888b668_be66_4298_b746_119a722815e9.slice - libcontainer container kubepods-besteffort-pod7888b668_be66_4298_b746_119a722815e9.slice. May 9 00:29:40.343379 containerd[1475]: time="2025-05-09T00:29:40.343331618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vj68l,Uid:7888b668-be66-4298-b746-119a722815e9,Namespace:calico-system,Attempt:0,}" May 9 00:29:40.404641 kubelet[2506]: I0509 00:29:40.404312 2506 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41762b40d8d344c60c8a644dcd6d495ff65c3ad2bd643d088969279b7d6bbec9" May 9 00:29:40.405285 containerd[1475]: time="2025-05-09T00:29:40.405219882Z" level=info msg="StopPodSandbox for \"41762b40d8d344c60c8a644dcd6d495ff65c3ad2bd643d088969279b7d6bbec9\"" May 9 00:29:40.405662 containerd[1475]: time="2025-05-09T00:29:40.405519416Z" level=info msg="Ensure that sandbox 41762b40d8d344c60c8a644dcd6d495ff65c3ad2bd643d088969279b7d6bbec9 in task-service has been cleanup successfully" May 9 00:29:40.406405 kubelet[2506]: I0509 00:29:40.406388 2506 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6dfa11b3cb348dbd5ddfda295256951d74f7e0d84f01ef320fc31acbe7826c4" May 9 00:29:40.407077 containerd[1475]: time="2025-05-09T00:29:40.407044035Z" level=info msg="StopPodSandbox for \"a6dfa11b3cb348dbd5ddfda295256951d74f7e0d84f01ef320fc31acbe7826c4\"" May 9 00:29:40.407834 containerd[1475]: time="2025-05-09T00:29:40.407807012Z" level=info msg="Ensure that sandbox a6dfa11b3cb348dbd5ddfda295256951d74f7e0d84f01ef320fc31acbe7826c4 in task-service has been cleanup successfully" May 9 00:29:40.409998 kubelet[2506]: I0509 00:29:40.409474 2506 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d43be5324c6a254b98b87597587cd50ccb0abb0eb38ebf7e419b8973d5c9f9b3" May 9 00:29:40.410164 containerd[1475]: time="2025-05-09T00:29:40.410083387Z" level=info msg="StopPodSandbox for \"d43be5324c6a254b98b87597587cd50ccb0abb0eb38ebf7e419b8973d5c9f9b3\"" May 9 00:29:40.410287 containerd[1475]: time="2025-05-09T00:29:40.410256633Z" level=info msg="Ensure that sandbox d43be5324c6a254b98b87597587cd50ccb0abb0eb38ebf7e419b8973d5c9f9b3 in task-service has been cleanup successfully" May 9 00:29:40.413327 containerd[1475]: time="2025-05-09T00:29:40.412904929Z" level=error msg="Failed to destroy network for sandbox \"f36738657f9783f3743eacfb4f951703a700cdb73e45167d1a25aa1453a035fe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:29:40.414247 containerd[1475]: time="2025-05-09T00:29:40.414187674Z" level=error msg="encountered an error cleaning up failed sandbox \"f36738657f9783f3743eacfb4f951703a700cdb73e45167d1a25aa1453a035fe\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:29:40.414306 containerd[1475]: time="2025-05-09T00:29:40.414246203Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vj68l,Uid:7888b668-be66-4298-b746-119a722815e9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f36738657f9783f3743eacfb4f951703a700cdb73e45167d1a25aa1453a035fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:29:40.415074 kubelet[2506]: I0509 00:29:40.414578 2506 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6c5edeb788588ab9dba05ed96471478728efaf22402c4f4c944ca9830ba2fce" May 9 00:29:40.415228 containerd[1475]: time="2025-05-09T00:29:40.415197785Z" level=info msg="StopPodSandbox for \"f6c5edeb788588ab9dba05ed96471478728efaf22402c4f4c944ca9830ba2fce\"" May 9 00:29:40.415402 containerd[1475]: time="2025-05-09T00:29:40.415371973Z" level=info msg="Ensure that sandbox f6c5edeb788588ab9dba05ed96471478728efaf22402c4f4c944ca9830ba2fce in task-service has been cleanup successfully" May 9 00:29:40.415664 kubelet[2506]: E0509 00:29:40.414428 2506 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f36738657f9783f3743eacfb4f951703a700cdb73e45167d1a25aa1453a035fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:29:40.415739 kubelet[2506]: E0509 00:29:40.415692 2506 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f36738657f9783f3743eacfb4f951703a700cdb73e45167d1a25aa1453a035fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vj68l" May 9 00:29:40.415739 kubelet[2506]: E0509 00:29:40.415717 2506 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f36738657f9783f3743eacfb4f951703a700cdb73e45167d1a25aa1453a035fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vj68l" May 9 00:29:40.415810 kubelet[2506]: E0509 00:29:40.415751 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-vj68l_calico-system(7888b668-be66-4298-b746-119a722815e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-vj68l_calico-system(7888b668-be66-4298-b746-119a722815e9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"f36738657f9783f3743eacfb4f951703a700cdb73e45167d1a25aa1453a035fe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vj68l" podUID="7888b668-be66-4298-b746-119a722815e9" May 9 00:29:40.419225 kubelet[2506]: I0509 00:29:40.419181 2506 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc2d36004e5f5a123829046dd60eb3b3c7f628719b87a984e0b9e912fbb74086" May 9 00:29:40.420198 containerd[1475]: time="2025-05-09T00:29:40.420157401Z" level=info msg="StopPodSandbox for \"dc2d36004e5f5a123829046dd60eb3b3c7f628719b87a984e0b9e912fbb74086\"" May 9 00:29:40.420795 containerd[1475]: time="2025-05-09T00:29:40.420703079Z" level=info msg="Ensure that sandbox dc2d36004e5f5a123829046dd60eb3b3c7f628719b87a984e0b9e912fbb74086 in task-service has been cleanup successfully" May 9 00:29:40.451344 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-41762b40d8d344c60c8a644dcd6d495ff65c3ad2bd643d088969279b7d6bbec9-shm.mount: Deactivated successfully. May 9 00:29:40.451461 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a6dfa11b3cb348dbd5ddfda295256951d74f7e0d84f01ef320fc31acbe7826c4-shm.mount: Deactivated successfully. May 9 00:29:40.451539 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dc2d36004e5f5a123829046dd60eb3b3c7f628719b87a984e0b9e912fbb74086-shm.mount: Deactivated successfully. May 9 00:29:40.451614 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f6c5edeb788588ab9dba05ed96471478728efaf22402c4f4c944ca9830ba2fce-shm.mount: Deactivated successfully. May 9 00:29:40.451706 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d43be5324c6a254b98b87597587cd50ccb0abb0eb38ebf7e419b8973d5c9f9b3-shm.mount: Deactivated successfully. 
May 9 00:29:40.454283 containerd[1475]: time="2025-05-09T00:29:40.454162005Z" level=error msg="StopPodSandbox for \"d43be5324c6a254b98b87597587cd50ccb0abb0eb38ebf7e419b8973d5c9f9b3\" failed" error="failed to destroy network for sandbox \"d43be5324c6a254b98b87597587cd50ccb0abb0eb38ebf7e419b8973d5c9f9b3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:29:40.454726 kubelet[2506]: E0509 00:29:40.454520 2506 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d43be5324c6a254b98b87597587cd50ccb0abb0eb38ebf7e419b8973d5c9f9b3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d43be5324c6a254b98b87597587cd50ccb0abb0eb38ebf7e419b8973d5c9f9b3" May 9 00:29:40.454726 kubelet[2506]: E0509 00:29:40.454590 2506 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d43be5324c6a254b98b87597587cd50ccb0abb0eb38ebf7e419b8973d5c9f9b3"} May 9 00:29:40.454726 kubelet[2506]: E0509 00:29:40.454660 2506 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"035e105a-b89a-4204-9406-d96aaeb0e048\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d43be5324c6a254b98b87597587cd50ccb0abb0eb38ebf7e419b8973d5c9f9b3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 9 00:29:40.454726 kubelet[2506]: E0509 00:29:40.454685 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"035e105a-b89a-4204-9406-d96aaeb0e048\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d43be5324c6a254b98b87597587cd50ccb0abb0eb38ebf7e419b8973d5c9f9b3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-7z6wr" podUID="035e105a-b89a-4204-9406-d96aaeb0e048" May 9 00:29:40.458625 containerd[1475]: time="2025-05-09T00:29:40.458563060Z" level=error msg="StopPodSandbox for \"41762b40d8d344c60c8a644dcd6d495ff65c3ad2bd643d088969279b7d6bbec9\" failed" error="failed to destroy network for sandbox \"41762b40d8d344c60c8a644dcd6d495ff65c3ad2bd643d088969279b7d6bbec9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:29:40.459064 kubelet[2506]: E0509 00:29:40.458924 2506 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"41762b40d8d344c60c8a644dcd6d495ff65c3ad2bd643d088969279b7d6bbec9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="41762b40d8d344c60c8a644dcd6d495ff65c3ad2bd643d088969279b7d6bbec9" May 9 00:29:40.459064 kubelet[2506]: E0509 00:29:40.458976 2506 kuberuntime_manager.go:1477] "Failed 
to stop sandbox" podSandboxID={"Type":"containerd","ID":"41762b40d8d344c60c8a644dcd6d495ff65c3ad2bd643d088969279b7d6bbec9"} May 9 00:29:40.459064 kubelet[2506]: E0509 00:29:40.459009 2506 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"75434938-61cf-41bc-bc17-24399a2f1b29\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"41762b40d8d344c60c8a644dcd6d495ff65c3ad2bd643d088969279b7d6bbec9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 9 00:29:40.459064 kubelet[2506]: E0509 00:29:40.459032 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"75434938-61cf-41bc-bc17-24399a2f1b29\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"41762b40d8d344c60c8a644dcd6d495ff65c3ad2bd643d088969279b7d6bbec9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-7htnm" podUID="75434938-61cf-41bc-bc17-24399a2f1b29" May 9 00:29:40.462560 containerd[1475]: time="2025-05-09T00:29:40.462510279Z" level=error msg="StopPodSandbox for \"a6dfa11b3cb348dbd5ddfda295256951d74f7e0d84f01ef320fc31acbe7826c4\" failed" error="failed to destroy network for sandbox \"a6dfa11b3cb348dbd5ddfda295256951d74f7e0d84f01ef320fc31acbe7826c4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:29:40.462982 kubelet[2506]: E0509 00:29:40.462900 2506 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a6dfa11b3cb348dbd5ddfda295256951d74f7e0d84f01ef320fc31acbe7826c4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a6dfa11b3cb348dbd5ddfda295256951d74f7e0d84f01ef320fc31acbe7826c4" May 9 00:29:40.463065 kubelet[2506]: E0509 00:29:40.462992 2506 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a6dfa11b3cb348dbd5ddfda295256951d74f7e0d84f01ef320fc31acbe7826c4"} May 9 00:29:40.463065 kubelet[2506]: E0509 00:29:40.463044 2506 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c912a440-377c-4394-b47f-ba521b174b03\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a6dfa11b3cb348dbd5ddfda295256951d74f7e0d84f01ef320fc31acbe7826c4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 9 00:29:40.463158 kubelet[2506]: E0509 00:29:40.463079 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c912a440-377c-4394-b47f-ba521b174b03\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a6dfa11b3cb348dbd5ddfda295256951d74f7e0d84f01ef320fc31acbe7826c4\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78559bdf4b-jb9rh" podUID="c912a440-377c-4394-b47f-ba521b174b03" May 9 00:29:40.471560 containerd[1475]: time="2025-05-09T00:29:40.471488421Z" level=error msg="StopPodSandbox for \"f6c5edeb788588ab9dba05ed96471478728efaf22402c4f4c944ca9830ba2fce\" failed" error="failed to destroy network for sandbox \"f6c5edeb788588ab9dba05ed96471478728efaf22402c4f4c944ca9830ba2fce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:29:40.471806 kubelet[2506]: E0509 00:29:40.471753 2506 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f6c5edeb788588ab9dba05ed96471478728efaf22402c4f4c944ca9830ba2fce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f6c5edeb788588ab9dba05ed96471478728efaf22402c4f4c944ca9830ba2fce" May 9 00:29:40.471871 kubelet[2506]: E0509 00:29:40.471816 2506 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f6c5edeb788588ab9dba05ed96471478728efaf22402c4f4c944ca9830ba2fce"} May 9 00:29:40.471871 kubelet[2506]: E0509 00:29:40.471853 2506 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fcb70e53-0375-414e-8457-33f88270eeb6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f6c5edeb788588ab9dba05ed96471478728efaf22402c4f4c944ca9830ba2fce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 9 00:29:40.471975 kubelet[2506]: E0509 00:29:40.471897 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fcb70e53-0375-414e-8457-33f88270eeb6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f6c5edeb788588ab9dba05ed96471478728efaf22402c4f4c944ca9830ba2fce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54f6f995b9-66cpx" podUID="fcb70e53-0375-414e-8457-33f88270eeb6" May 9 00:29:40.473184 containerd[1475]: time="2025-05-09T00:29:40.473145832Z" level=error msg="StopPodSandbox for \"dc2d36004e5f5a123829046dd60eb3b3c7f628719b87a984e0b9e912fbb74086\" failed" error="failed to destroy network for sandbox \"dc2d36004e5f5a123829046dd60eb3b3c7f628719b87a984e0b9e912fbb74086\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:29:40.473372 kubelet[2506]: E0509 00:29:40.473332 2506 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dc2d36004e5f5a123829046dd60eb3b3c7f628719b87a984e0b9e912fbb74086\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dc2d36004e5f5a123829046dd60eb3b3c7f628719b87a984e0b9e912fbb74086" May 9 00:29:40.473452 kubelet[2506]: E0509 00:29:40.473392 2506 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dc2d36004e5f5a123829046dd60eb3b3c7f628719b87a984e0b9e912fbb74086"} May 9 00:29:40.473452 kubelet[2506]: E0509 00:29:40.473431 2506 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8e3a567d-acd2-4bb0-bcdc-d60762255110\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dc2d36004e5f5a123829046dd60eb3b3c7f628719b87a984e0b9e912fbb74086\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 9 00:29:40.473548 kubelet[2506]: E0509 00:29:40.473456 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8e3a567d-acd2-4bb0-bcdc-d60762255110\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dc2d36004e5f5a123829046dd60eb3b3c7f628719b87a984e0b9e912fbb74086\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54f6f995b9-qzmd8" podUID="8e3a567d-acd2-4bb0-bcdc-d60762255110" May 9 00:29:41.421764 kubelet[2506]: I0509 00:29:41.421708 2506 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f36738657f9783f3743eacfb4f951703a700cdb73e45167d1a25aa1453a035fe" May 9 00:29:41.422647 containerd[1475]: time="2025-05-09T00:29:41.422581157Z" level=info msg="StopPodSandbox for \"f36738657f9783f3743eacfb4f951703a700cdb73e45167d1a25aa1453a035fe\"" May 9 00:29:41.423037 containerd[1475]: time="2025-05-09T00:29:41.422862948Z" level=info msg="Ensure that sandbox f36738657f9783f3743eacfb4f951703a700cdb73e45167d1a25aa1453a035fe in task-service has been cleanup successfully" May 9 00:29:41.462263 containerd[1475]: time="2025-05-09T00:29:41.462194021Z" level=error msg="StopPodSandbox for \"f36738657f9783f3743eacfb4f951703a700cdb73e45167d1a25aa1453a035fe\" failed" error="failed to destroy network for sandbox \"f36738657f9783f3743eacfb4f951703a700cdb73e45167d1a25aa1453a035fe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 00:29:41.462557 kubelet[2506]: E0509 00:29:41.462504 2506 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f36738657f9783f3743eacfb4f951703a700cdb73e45167d1a25aa1453a035fe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f36738657f9783f3743eacfb4f951703a700cdb73e45167d1a25aa1453a035fe" May 9 00:29:41.462612 kubelet[2506]: E0509 00:29:41.462571 2506 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f36738657f9783f3743eacfb4f951703a700cdb73e45167d1a25aa1453a035fe"} May 9 00:29:41.462647 kubelet[2506]: E0509 00:29:41.462613 2506 kuberuntime_manager.go:1077] 
"killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7888b668-be66-4298-b746-119a722815e9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f36738657f9783f3743eacfb4f951703a700cdb73e45167d1a25aa1453a035fe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 9 00:29:41.462725 kubelet[2506]: E0509 00:29:41.462664 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7888b668-be66-4298-b746-119a722815e9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f36738657f9783f3743eacfb4f951703a700cdb73e45167d1a25aa1453a035fe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vj68l" podUID="7888b668-be66-4298-b746-119a722815e9" May 9 00:29:42.421523 systemd[1]: Started sshd@7-10.0.0.48:22-10.0.0.1:54104.service - OpenSSH per-connection server daemon (10.0.0.1:54104). May 9 00:29:42.470234 sshd[3747]: Accepted publickey for core from 10.0.0.1 port 54104 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU May 9 00:29:42.472381 sshd[3747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:29:42.479516 systemd-logind[1463]: New session 8 of user core. May 9 00:29:42.492119 systemd[1]: Started session-8.scope - Session 8 of User core. May 9 00:29:42.628936 sshd[3747]: pam_unix(sshd:session): session closed for user core May 9 00:29:42.633637 systemd[1]: sshd@7-10.0.0.48:22-10.0.0.1:54104.service: Deactivated successfully. May 9 00:29:42.636096 systemd[1]: session-8.scope: Deactivated successfully. May 9 00:29:42.636742 systemd-logind[1463]: Session 8 logged out. Waiting for processes to exit. May 9 00:29:42.637648 systemd-logind[1463]: Removed session 8. May 9 00:29:44.958690 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2640470677.mount: Deactivated successfully. 
May 9 00:29:45.634403 containerd[1475]: time="2025-05-09T00:29:45.634321732Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:29:45.635223 containerd[1475]: time="2025-05-09T00:29:45.635131475Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" May 9 00:29:45.636363 containerd[1475]: time="2025-05-09T00:29:45.636321323Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:29:45.654393 containerd[1475]: time="2025-05-09T00:29:45.654342113Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:29:45.654902 containerd[1475]: time="2025-05-09T00:29:45.654842314Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 6.244592865s" May 9 00:29:45.654972 containerd[1475]: time="2025-05-09T00:29:45.654905372Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" May 9 00:29:45.662772 containerd[1475]: time="2025-05-09T00:29:45.662727233Z" level=info msg="CreateContainer within sandbox \"3ff1f024e50a9e4966b1d6c88336ac0e06e9855849373c6ac5660d5cdb745569\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 9 00:29:45.701699 containerd[1475]: time="2025-05-09T00:29:45.700181149Z" level=info msg="CreateContainer within sandbox \"3ff1f024e50a9e4966b1d6c88336ac0e06e9855849373c6ac5660d5cdb745569\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"fdaf867b9cedc4cac6888d30e25570e7ce19a8a44db52efdb91f21da97c457b2\"" May 9 00:29:45.701699 containerd[1475]: time="2025-05-09T00:29:45.701590238Z" level=info msg="StartContainer for \"fdaf867b9cedc4cac6888d30e25570e7ce19a8a44db52efdb91f21da97c457b2\"" May 9 00:29:45.842934 systemd[1]: Started cri-containerd-fdaf867b9cedc4cac6888d30e25570e7ce19a8a44db52efdb91f21da97c457b2.scope - libcontainer container fdaf867b9cedc4cac6888d30e25570e7ce19a8a44db52efdb91f21da97c457b2. May 9 00:29:46.187859 containerd[1475]: time="2025-05-09T00:29:46.187799154Z" level=info msg="StartContainer for \"fdaf867b9cedc4cac6888d30e25570e7ce19a8a44db52efdb91f21da97c457b2\" returns successfully" May 9 00:29:46.217909 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 9 00:29:46.218509 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
May 9 00:29:46.440355 kubelet[2506]: E0509 00:29:46.440200 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:29:46.463828 kubelet[2506]: I0509 00:29:46.463727 2506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-slqmz" podStartSLOduration=1.9687766309999999 podStartE2EDuration="20.463701014s" podCreationTimestamp="2025-05-09 00:29:26 +0000 UTC" firstStartedPulling="2025-05-09 00:29:27.160764262 +0000 UTC m=+12.916427528" lastFinishedPulling="2025-05-09 00:29:45.655688645 +0000 UTC m=+31.411351911" observedRunningTime="2025-05-09 00:29:46.463524583 +0000 UTC m=+32.219187859" watchObservedRunningTime="2025-05-09 00:29:46.463701014 +0000 UTC m=+32.219364291" May 9 00:29:47.642011 systemd[1]: Started sshd@8-10.0.0.48:22-10.0.0.1:54964.service - OpenSSH per-connection server daemon (10.0.0.1:54964). May 9 00:29:47.676093 sshd[3934]: Accepted publickey for core from 10.0.0.1 port 54964 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU May 9 00:29:47.677672 sshd[3934]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:29:47.681506 systemd-logind[1463]: New session 9 of user core. May 9 00:29:47.691006 systemd[1]: Started session-9.scope - Session 9 of User core. May 9 00:29:47.812805 sshd[3934]: pam_unix(sshd:session): session closed for user core May 9 00:29:47.816987 systemd[1]: sshd@8-10.0.0.48:22-10.0.0.1:54964.service: Deactivated successfully. May 9 00:29:47.819206 systemd[1]: session-9.scope: Deactivated successfully. May 9 00:29:47.819854 systemd-logind[1463]: Session 9 logged out. Waiting for processes to exit. May 9 00:29:47.820831 systemd-logind[1463]: Removed session 9. May 9 00:29:51.342916 containerd[1475]: time="2025-05-09T00:29:51.341039941Z" level=info msg="StopPodSandbox for \"a6dfa11b3cb348dbd5ddfda295256951d74f7e0d84f01ef320fc31acbe7826c4\"" May 9 00:29:51.526505 containerd[1475]: 2025-05-09 00:29:51.448 [INFO][4037] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a6dfa11b3cb348dbd5ddfda295256951d74f7e0d84f01ef320fc31acbe7826c4" May 9 00:29:51.526505 containerd[1475]: 2025-05-09 00:29:51.449 [INFO][4037] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a6dfa11b3cb348dbd5ddfda295256951d74f7e0d84f01ef320fc31acbe7826c4" iface="eth0" netns="/var/run/netns/cni-91bee64b-d784-564e-d0a9-169f8fe3ed25" May 9 00:29:51.526505 containerd[1475]: 2025-05-09 00:29:51.449 [INFO][4037] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a6dfa11b3cb348dbd5ddfda295256951d74f7e0d84f01ef320fc31acbe7826c4" iface="eth0" netns="/var/run/netns/cni-91bee64b-d784-564e-d0a9-169f8fe3ed25" May 9 00:29:51.526505 containerd[1475]: 2025-05-09 00:29:51.450 [INFO][4037] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a6dfa11b3cb348dbd5ddfda295256951d74f7e0d84f01ef320fc31acbe7826c4" iface="eth0" netns="/var/run/netns/cni-91bee64b-d784-564e-d0a9-169f8fe3ed25" May 9 00:29:51.526505 containerd[1475]: 2025-05-09 00:29:51.450 [INFO][4037] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a6dfa11b3cb348dbd5ddfda295256951d74f7e0d84f01ef320fc31acbe7826c4" May 9 00:29:51.526505 containerd[1475]: 2025-05-09 00:29:51.451 [INFO][4037] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a6dfa11b3cb348dbd5ddfda295256951d74f7e0d84f01ef320fc31acbe7826c4" May 9 00:29:51.526505 containerd[1475]: 2025-05-09 00:29:51.509 [INFO][4046] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a6dfa11b3cb348dbd5ddfda295256951d74f7e0d84f01ef320fc31acbe7826c4" HandleID="k8s-pod-network.a6dfa11b3cb348dbd5ddfda295256951d74f7e0d84f01ef320fc31acbe7826c4" Workload="localhost-k8s-calico--kube--controllers--78559bdf4b--jb9rh-eth0" May 9 00:29:51.526505 containerd[1475]: 2025-05-09 00:29:51.510 [INFO][4046] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:29:51.526505 containerd[1475]: 2025-05-09 00:29:51.510 [INFO][4046] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 00:29:51.526505 containerd[1475]: 2025-05-09 00:29:51.518 [WARNING][4046] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a6dfa11b3cb348dbd5ddfda295256951d74f7e0d84f01ef320fc31acbe7826c4" HandleID="k8s-pod-network.a6dfa11b3cb348dbd5ddfda295256951d74f7e0d84f01ef320fc31acbe7826c4" Workload="localhost-k8s-calico--kube--controllers--78559bdf4b--jb9rh-eth0" May 9 00:29:51.526505 containerd[1475]: 2025-05-09 00:29:51.518 [INFO][4046] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a6dfa11b3cb348dbd5ddfda295256951d74f7e0d84f01ef320fc31acbe7826c4" HandleID="k8s-pod-network.a6dfa11b3cb348dbd5ddfda295256951d74f7e0d84f01ef320fc31acbe7826c4" Workload="localhost-k8s-calico--kube--controllers--78559bdf4b--jb9rh-eth0" May 9 00:29:51.526505 containerd[1475]: 2025-05-09 00:29:51.520 [INFO][4046] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 00:29:51.526505 containerd[1475]: 2025-05-09 00:29:51.523 [INFO][4037] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a6dfa11b3cb348dbd5ddfda295256951d74f7e0d84f01ef320fc31acbe7826c4" May 9 00:29:51.527213 containerd[1475]: time="2025-05-09T00:29:51.526671310Z" level=info msg="TearDown network for sandbox \"a6dfa11b3cb348dbd5ddfda295256951d74f7e0d84f01ef320fc31acbe7826c4\" successfully" May 9 00:29:51.527213 containerd[1475]: time="2025-05-09T00:29:51.526699273Z" level=info msg="StopPodSandbox for \"a6dfa11b3cb348dbd5ddfda295256951d74f7e0d84f01ef320fc31acbe7826c4\" returns successfully" May 9 00:29:51.527544 containerd[1475]: time="2025-05-09T00:29:51.527513353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78559bdf4b-jb9rh,Uid:c912a440-377c-4394-b47f-ba521b174b03,Namespace:calico-system,Attempt:1,}" May 9 00:29:51.530626 systemd[1]: run-netns-cni\x2d91bee64b\x2dd784\x2d564e\x2dd0a9\x2d169f8fe3ed25.mount: Deactivated successfully. 
May 9 00:29:51.656813 systemd-networkd[1412]: cali6f74f2174f0: Link UP May 9 00:29:51.657510 systemd-networkd[1412]: cali6f74f2174f0: Gained carrier May 9 00:29:51.669527 containerd[1475]: 2025-05-09 00:29:51.566 [INFO][4056] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 9 00:29:51.669527 containerd[1475]: 2025-05-09 00:29:51.577 [INFO][4056] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--78559bdf4b--jb9rh-eth0 calico-kube-controllers-78559bdf4b- calico-system c912a440-377c-4394-b47f-ba521b174b03 815 0 2025-05-09 00:29:26 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:78559bdf4b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-78559bdf4b-jb9rh eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali6f74f2174f0 [] []}} ContainerID="ad5545a85ae493cb54e72bf70c81e7048d9d0af74086ed81c8b04cf473a4bc7f" Namespace="calico-system" Pod="calico-kube-controllers-78559bdf4b-jb9rh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78559bdf4b--jb9rh-" May 9 00:29:51.669527 containerd[1475]: 2025-05-09 00:29:51.577 [INFO][4056] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ad5545a85ae493cb54e72bf70c81e7048d9d0af74086ed81c8b04cf473a4bc7f" Namespace="calico-system" Pod="calico-kube-controllers-78559bdf4b-jb9rh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78559bdf4b--jb9rh-eth0" May 9 00:29:51.669527 containerd[1475]: 2025-05-09 00:29:51.609 [INFO][4071] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ad5545a85ae493cb54e72bf70c81e7048d9d0af74086ed81c8b04cf473a4bc7f" HandleID="k8s-pod-network.ad5545a85ae493cb54e72bf70c81e7048d9d0af74086ed81c8b04cf473a4bc7f" Workload="localhost-k8s-calico--kube--controllers--78559bdf4b--jb9rh-eth0" May 9 00:29:51.669527 containerd[1475]: 2025-05-09 00:29:51.618 [INFO][4071] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ad5545a85ae493cb54e72bf70c81e7048d9d0af74086ed81c8b04cf473a4bc7f" HandleID="k8s-pod-network.ad5545a85ae493cb54e72bf70c81e7048d9d0af74086ed81c8b04cf473a4bc7f" Workload="localhost-k8s-calico--kube--controllers--78559bdf4b--jb9rh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002dce80), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-78559bdf4b-jb9rh", "timestamp":"2025-05-09 00:29:51.609499708 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 9 00:29:51.669527 containerd[1475]: 2025-05-09 00:29:51.618 [INFO][4071] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:29:51.669527 containerd[1475]: 2025-05-09 00:29:51.618 [INFO][4071] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 9 00:29:51.669527 containerd[1475]: 2025-05-09 00:29:51.618 [INFO][4071] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 9 00:29:51.669527 containerd[1475]: 2025-05-09 00:29:51.620 [INFO][4071] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ad5545a85ae493cb54e72bf70c81e7048d9d0af74086ed81c8b04cf473a4bc7f" host="localhost" May 9 00:29:51.669527 containerd[1475]: 2025-05-09 00:29:51.624 [INFO][4071] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 9 00:29:51.669527 containerd[1475]: 2025-05-09 00:29:51.628 [INFO][4071] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 9 00:29:51.669527 containerd[1475]: 2025-05-09 00:29:51.630 [INFO][4071] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 9 00:29:51.669527 containerd[1475]: 2025-05-09 00:29:51.633 [INFO][4071] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 9 00:29:51.669527 containerd[1475]: 2025-05-09 00:29:51.633 [INFO][4071] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ad5545a85ae493cb54e72bf70c81e7048d9d0af74086ed81c8b04cf473a4bc7f" host="localhost" May 9 00:29:51.669527 containerd[1475]: 2025-05-09 00:29:51.634 [INFO][4071] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ad5545a85ae493cb54e72bf70c81e7048d9d0af74086ed81c8b04cf473a4bc7f May 9 00:29:51.669527 containerd[1475]: 2025-05-09 00:29:51.640 [INFO][4071] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ad5545a85ae493cb54e72bf70c81e7048d9d0af74086ed81c8b04cf473a4bc7f" host="localhost" May 9 00:29:51.669527 containerd[1475]: 2025-05-09 00:29:51.646 [INFO][4071] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.ad5545a85ae493cb54e72bf70c81e7048d9d0af74086ed81c8b04cf473a4bc7f" host="localhost" May 9 00:29:51.669527 containerd[1475]: 2025-05-09 00:29:51.646 [INFO][4071] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.ad5545a85ae493cb54e72bf70c81e7048d9d0af74086ed81c8b04cf473a4bc7f" host="localhost" May 9 00:29:51.669527 containerd[1475]: 2025-05-09 00:29:51.646 [INFO][4071] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 9 00:29:51.669527 containerd[1475]: 2025-05-09 00:29:51.646 [INFO][4071] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="ad5545a85ae493cb54e72bf70c81e7048d9d0af74086ed81c8b04cf473a4bc7f" HandleID="k8s-pod-network.ad5545a85ae493cb54e72bf70c81e7048d9d0af74086ed81c8b04cf473a4bc7f" Workload="localhost-k8s-calico--kube--controllers--78559bdf4b--jb9rh-eth0" May 9 00:29:51.670467 containerd[1475]: 2025-05-09 00:29:51.649 [INFO][4056] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ad5545a85ae493cb54e72bf70c81e7048d9d0af74086ed81c8b04cf473a4bc7f" Namespace="calico-system" Pod="calico-kube-controllers-78559bdf4b-jb9rh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78559bdf4b--jb9rh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--78559bdf4b--jb9rh-eth0", GenerateName:"calico-kube-controllers-78559bdf4b-", Namespace:"calico-system", SelfLink:"", UID:"c912a440-377c-4394-b47f-ba521b174b03", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 29, 26, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78559bdf4b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-78559bdf4b-jb9rh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6f74f2174f0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:29:51.670467 containerd[1475]: 2025-05-09 00:29:51.649 [INFO][4056] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="ad5545a85ae493cb54e72bf70c81e7048d9d0af74086ed81c8b04cf473a4bc7f" Namespace="calico-system" Pod="calico-kube-controllers-78559bdf4b-jb9rh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78559bdf4b--jb9rh-eth0" May 9 00:29:51.670467 containerd[1475]: 2025-05-09 00:29:51.649 [INFO][4056] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6f74f2174f0 ContainerID="ad5545a85ae493cb54e72bf70c81e7048d9d0af74086ed81c8b04cf473a4bc7f" Namespace="calico-system" Pod="calico-kube-controllers-78559bdf4b-jb9rh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78559bdf4b--jb9rh-eth0" May 9 00:29:51.670467 containerd[1475]: 2025-05-09 00:29:51.657 [INFO][4056] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ad5545a85ae493cb54e72bf70c81e7048d9d0af74086ed81c8b04cf473a4bc7f" Namespace="calico-system" Pod="calico-kube-controllers-78559bdf4b-jb9rh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78559bdf4b--jb9rh-eth0" May 9 00:29:51.670467 containerd[1475]: 2025-05-09 00:29:51.658 [INFO][4056] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to
endpoint ContainerID="ad5545a85ae493cb54e72bf70c81e7048d9d0af74086ed81c8b04cf473a4bc7f" Namespace="calico-system" Pod="calico-kube-controllers-78559bdf4b-jb9rh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78559bdf4b--jb9rh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--78559bdf4b--jb9rh-eth0", GenerateName:"calico-kube-controllers-78559bdf4b-", Namespace:"calico-system", SelfLink:"", UID:"c912a440-377c-4394-b47f-ba521b174b03", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 29, 26, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78559bdf4b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ad5545a85ae493cb54e72bf70c81e7048d9d0af74086ed81c8b04cf473a4bc7f", Pod:"calico-kube-controllers-78559bdf4b-jb9rh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6f74f2174f0", MAC:"5a:94:80:da:92:d5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:29:51.670467 containerd[1475]: 2025-05-09 00:29:51.665 [INFO][4056] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ad5545a85ae493cb54e72bf70c81e7048d9d0af74086ed81c8b04cf473a4bc7f" Namespace="calico-system" Pod="calico-kube-controllers-78559bdf4b-jb9rh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78559bdf4b--jb9rh-eth0" May 9 00:29:51.712578 containerd[1475]: time="2025-05-09T00:29:51.712452230Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:29:51.713805 containerd[1475]: time="2025-05-09T00:29:51.713381155Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:29:51.713805 containerd[1475]: time="2025-05-09T00:29:51.713488047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:29:51.713805 containerd[1475]: time="2025-05-09T00:29:51.713722597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:29:51.742024 systemd[1]: Started cri-containerd-ad5545a85ae493cb54e72bf70c81e7048d9d0af74086ed81c8b04cf473a4bc7f.scope - libcontainer container ad5545a85ae493cb54e72bf70c81e7048d9d0af74086ed81c8b04cf473a4bc7f.
May 9 00:29:51.754949 systemd-resolved[1340]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 9 00:29:51.780955 containerd[1475]: time="2025-05-09T00:29:51.780905126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78559bdf4b-jb9rh,Uid:c912a440-377c-4394-b47f-ba521b174b03,Namespace:calico-system,Attempt:1,} returns sandbox id \"ad5545a85ae493cb54e72bf70c81e7048d9d0af74086ed81c8b04cf473a4bc7f\"" May 9 00:29:51.783146 containerd[1475]: time="2025-05-09T00:29:51.783058513Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 9 00:29:51.944261 kubelet[2506]: I0509 00:29:51.944086 2506 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 9 00:29:51.944848 kubelet[2506]: E0509 00:29:51.944518 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:29:52.332874 containerd[1475]: time="2025-05-09T00:29:52.332269685Z" level=info msg="StopPodSandbox for \"f6c5edeb788588ab9dba05ed96471478728efaf22402c4f4c944ca9830ba2fce\"" May 9 00:29:52.463083 kubelet[2506]: E0509 00:29:52.463046 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:29:52.612913 containerd[1475]: 2025-05-09 00:29:52.555 [INFO][4176] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f6c5edeb788588ab9dba05ed96471478728efaf22402c4f4c944ca9830ba2fce" May 9 00:29:52.612913 containerd[1475]: 2025-05-09 00:29:52.556 [INFO][4176] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f6c5edeb788588ab9dba05ed96471478728efaf22402c4f4c944ca9830ba2fce" iface="eth0" netns="/var/run/netns/cni-429887f5-ee3d-e80c-ecdf-170dc009482b" May 9 00:29:52.612913 containerd[1475]: 2025-05-09 00:29:52.556 [INFO][4176] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f6c5edeb788588ab9dba05ed96471478728efaf22402c4f4c944ca9830ba2fce" iface="eth0" netns="/var/run/netns/cni-429887f5-ee3d-e80c-ecdf-170dc009482b" May 9 00:29:52.612913 containerd[1475]: 2025-05-09 00:29:52.556 [INFO][4176] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f6c5edeb788588ab9dba05ed96471478728efaf22402c4f4c944ca9830ba2fce" iface="eth0" netns="/var/run/netns/cni-429887f5-ee3d-e80c-ecdf-170dc009482b" May 9 00:29:52.612913 containerd[1475]: 2025-05-09 00:29:52.556 [INFO][4176] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f6c5edeb788588ab9dba05ed96471478728efaf22402c4f4c944ca9830ba2fce" May 9 00:29:52.612913 containerd[1475]: 2025-05-09 00:29:52.556 [INFO][4176] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f6c5edeb788588ab9dba05ed96471478728efaf22402c4f4c944ca9830ba2fce" May 9 00:29:52.612913 containerd[1475]: 2025-05-09 00:29:52.601 [INFO][4187] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f6c5edeb788588ab9dba05ed96471478728efaf22402c4f4c944ca9830ba2fce" HandleID="k8s-pod-network.f6c5edeb788588ab9dba05ed96471478728efaf22402c4f4c944ca9830ba2fce" Workload="localhost-k8s-calico--apiserver--54f6f995b9--66cpx-eth0" May 9 00:29:52.612913 containerd[1475]: 2025-05-09 00:29:52.601 [INFO][4187] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 9 00:29:52.612913 containerd[1475]: 2025-05-09 00:29:52.601 [INFO][4187] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 00:29:52.612913 containerd[1475]: 2025-05-09 00:29:52.606 [WARNING][4187] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f6c5edeb788588ab9dba05ed96471478728efaf22402c4f4c944ca9830ba2fce" HandleID="k8s-pod-network.f6c5edeb788588ab9dba05ed96471478728efaf22402c4f4c944ca9830ba2fce" Workload="localhost-k8s-calico--apiserver--54f6f995b9--66cpx-eth0" May 9 00:29:52.612913 containerd[1475]: 2025-05-09 00:29:52.606 [INFO][4187] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f6c5edeb788588ab9dba05ed96471478728efaf22402c4f4c944ca9830ba2fce" HandleID="k8s-pod-network.f6c5edeb788588ab9dba05ed96471478728efaf22402c4f4c944ca9830ba2fce" Workload="localhost-k8s-calico--apiserver--54f6f995b9--66cpx-eth0" May 9 00:29:52.612913 containerd[1475]: 2025-05-09 00:29:52.608 [INFO][4187] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 00:29:52.612913 containerd[1475]: 2025-05-09 00:29:52.610 [INFO][4176] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f6c5edeb788588ab9dba05ed96471478728efaf22402c4f4c944ca9830ba2fce" May 9 00:29:52.614121 containerd[1475]: time="2025-05-09T00:29:52.613954824Z" level=info msg="TearDown network for sandbox \"f6c5edeb788588ab9dba05ed96471478728efaf22402c4f4c944ca9830ba2fce\" successfully" May 9 00:29:52.614121 containerd[1475]: time="2025-05-09T00:29:52.614000099Z" level=info msg="StopPodSandbox for \"f6c5edeb788588ab9dba05ed96471478728efaf22402c4f4c944ca9830ba2fce\" returns successfully" May 9 00:29:52.615023 containerd[1475]: time="2025-05-09T00:29:52.614982655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54f6f995b9-66cpx,Uid:fcb70e53-0375-414e-8457-33f88270eeb6,Namespace:calico-apiserver,Attempt:1,}" May 9 00:29:52.616134 systemd[1]: run-netns-cni\x2d429887f5\x2dee3d\x2de80c\x2decdf\x2d170dc009482b.mount: Deactivated successfully. May 9 00:29:52.825144 systemd[1]: Started sshd@9-10.0.0.48:22-10.0.0.1:54974.service - OpenSSH per-connection server daemon (10.0.0.1:54974). May 9 00:29:52.849992 kernel: bpftool[4227]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 9 00:29:52.879946 sshd[4216]: Accepted publickey for core from 10.0.0.1 port 54974 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU May 9 00:29:52.881172 sshd[4216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:29:52.887594 systemd-logind[1463]: New session 10 of user core. May 9 00:29:52.894126 systemd[1]: Started session-10.scope - Session 10 of User core. 
May 9 00:29:52.922986 systemd-networkd[1412]: cali07b00bf08c9: Link UP May 9 00:29:52.923425 systemd-networkd[1412]: cali07b00bf08c9: Gained carrier May 9 00:29:52.939025 containerd[1475]: 2025-05-09 00:29:52.842 [INFO][4201] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--54f6f995b9--66cpx-eth0 calico-apiserver-54f6f995b9- calico-apiserver fcb70e53-0375-414e-8457-33f88270eeb6 835 0 2025-05-09 00:29:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:54f6f995b9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-54f6f995b9-66cpx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali07b00bf08c9 [] []}} ContainerID="98bac03dcecb2bd5d2f44be12ccc6bde7773c7c333b4924cf2f1d98efda7257f" Namespace="calico-apiserver" Pod="calico-apiserver-54f6f995b9-66cpx" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f6f995b9--66cpx-" May 9 00:29:52.939025 containerd[1475]: 2025-05-09 00:29:52.843 [INFO][4201] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="98bac03dcecb2bd5d2f44be12ccc6bde7773c7c333b4924cf2f1d98efda7257f" Namespace="calico-apiserver" Pod="calico-apiserver-54f6f995b9-66cpx" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f6f995b9--66cpx-eth0" May 9 00:29:52.939025 containerd[1475]: 2025-05-09 00:29:52.874 [INFO][4229] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="98bac03dcecb2bd5d2f44be12ccc6bde7773c7c333b4924cf2f1d98efda7257f" HandleID="k8s-pod-network.98bac03dcecb2bd5d2f44be12ccc6bde7773c7c333b4924cf2f1d98efda7257f" Workload="localhost-k8s-calico--apiserver--54f6f995b9--66cpx-eth0" May 9 00:29:52.939025 containerd[1475]: 2025-05-09 00:29:52.883 [INFO][4229] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="98bac03dcecb2bd5d2f44be12ccc6bde7773c7c333b4924cf2f1d98efda7257f" HandleID="k8s-pod-network.98bac03dcecb2bd5d2f44be12ccc6bde7773c7c333b4924cf2f1d98efda7257f" Workload="localhost-k8s-calico--apiserver--54f6f995b9--66cpx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003acbf0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-54f6f995b9-66cpx", "timestamp":"2025-05-09 00:29:52.874507499 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 9 00:29:52.939025 containerd[1475]: 2025-05-09 00:29:52.883 [INFO][4229] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:29:52.939025 containerd[1475]: 2025-05-09 00:29:52.883 [INFO][4229] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 9 00:29:52.939025 containerd[1475]: 2025-05-09 00:29:52.883 [INFO][4229] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 9 00:29:52.939025 containerd[1475]: 2025-05-09 00:29:52.885 [INFO][4229] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.98bac03dcecb2bd5d2f44be12ccc6bde7773c7c333b4924cf2f1d98efda7257f" host="localhost" May 9 00:29:52.939025 containerd[1475]: 2025-05-09 00:29:52.889 [INFO][4229] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 9 00:29:52.939025 containerd[1475]: 2025-05-09 00:29:52.893 [INFO][4229] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 9 00:29:52.939025 containerd[1475]: 2025-05-09 00:29:52.895 [INFO][4229] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 9 00:29:52.939025 containerd[1475]: 2025-05-09 00:29:52.899 [INFO][4229] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 9 00:29:52.939025 containerd[1475]: 2025-05-09 00:29:52.900 [INFO][4229] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.98bac03dcecb2bd5d2f44be12ccc6bde7773c7c333b4924cf2f1d98efda7257f" host="localhost" May 9 00:29:52.939025 containerd[1475]: 2025-05-09 00:29:52.901 [INFO][4229] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.98bac03dcecb2bd5d2f44be12ccc6bde7773c7c333b4924cf2f1d98efda7257f May 9 00:29:52.939025 containerd[1475]: 2025-05-09 00:29:52.906 [INFO][4229] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.98bac03dcecb2bd5d2f44be12ccc6bde7773c7c333b4924cf2f1d98efda7257f" host="localhost" May 9 00:29:52.939025 containerd[1475]: 2025-05-09 00:29:52.913 [INFO][4229] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.98bac03dcecb2bd5d2f44be12ccc6bde7773c7c333b4924cf2f1d98efda7257f" host="localhost" May 9 00:29:52.939025 containerd[1475]: 2025-05-09 00:29:52.913 [INFO][4229] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.98bac03dcecb2bd5d2f44be12ccc6bde7773c7c333b4924cf2f1d98efda7257f" host="localhost" May 9 00:29:52.939025 containerd[1475]: 2025-05-09 00:29:52.913 [INFO][4229] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 9 00:29:52.939025 containerd[1475]: 2025-05-09 00:29:52.913 [INFO][4229] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="98bac03dcecb2bd5d2f44be12ccc6bde7773c7c333b4924cf2f1d98efda7257f" HandleID="k8s-pod-network.98bac03dcecb2bd5d2f44be12ccc6bde7773c7c333b4924cf2f1d98efda7257f" Workload="localhost-k8s-calico--apiserver--54f6f995b9--66cpx-eth0" May 9 00:29:52.939583 containerd[1475]: 2025-05-09 00:29:52.918 [INFO][4201] cni-plugin/k8s.go 386: Populated endpoint ContainerID="98bac03dcecb2bd5d2f44be12ccc6bde7773c7c333b4924cf2f1d98efda7257f" Namespace="calico-apiserver" Pod="calico-apiserver-54f6f995b9-66cpx" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f6f995b9--66cpx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54f6f995b9--66cpx-eth0", GenerateName:"calico-apiserver-54f6f995b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"fcb70e53-0375-414e-8457-33f88270eeb6", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 29, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54f6f995b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-54f6f995b9-66cpx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali07b00bf08c9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:29:52.939583 containerd[1475]: 2025-05-09 00:29:52.919 [INFO][4201] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="98bac03dcecb2bd5d2f44be12ccc6bde7773c7c333b4924cf2f1d98efda7257f" Namespace="calico-apiserver" Pod="calico-apiserver-54f6f995b9-66cpx" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f6f995b9--66cpx-eth0" May 9 00:29:52.939583 containerd[1475]: 2025-05-09 00:29:52.919 [INFO][4201] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali07b00bf08c9 ContainerID="98bac03dcecb2bd5d2f44be12ccc6bde7773c7c333b4924cf2f1d98efda7257f" Namespace="calico-apiserver" Pod="calico-apiserver-54f6f995b9-66cpx" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f6f995b9--66cpx-eth0" May 9 00:29:52.939583 containerd[1475]: 2025-05-09 00:29:52.923 [INFO][4201] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="98bac03dcecb2bd5d2f44be12ccc6bde7773c7c333b4924cf2f1d98efda7257f" Namespace="calico-apiserver" Pod="calico-apiserver-54f6f995b9-66cpx" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f6f995b9--66cpx-eth0" May 9 00:29:52.939583 containerd[1475]: 2025-05-09 00:29:52.923 [INFO][4201] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="98bac03dcecb2bd5d2f44be12ccc6bde7773c7c333b4924cf2f1d98efda7257f" 
Namespace="calico-apiserver" Pod="calico-apiserver-54f6f995b9-66cpx" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f6f995b9--66cpx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54f6f995b9--66cpx-eth0", GenerateName:"calico-apiserver-54f6f995b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"fcb70e53-0375-414e-8457-33f88270eeb6", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 29, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54f6f995b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"98bac03dcecb2bd5d2f44be12ccc6bde7773c7c333b4924cf2f1d98efda7257f", Pod:"calico-apiserver-54f6f995b9-66cpx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali07b00bf08c9", MAC:"a2:39:d2:46:84:3b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:29:52.939583 containerd[1475]: 2025-05-09 00:29:52.935 [INFO][4201] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="98bac03dcecb2bd5d2f44be12ccc6bde7773c7c333b4924cf2f1d98efda7257f" Namespace="calico-apiserver" Pod="calico-apiserver-54f6f995b9-66cpx" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f6f995b9--66cpx-eth0" May 9 00:29:52.985633 containerd[1475]: time="2025-05-09T00:29:52.985289651Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:29:52.985633 containerd[1475]: time="2025-05-09T00:29:52.985356035Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:29:52.985633 containerd[1475]: time="2025-05-09T00:29:52.985388306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:29:52.986329 containerd[1475]: time="2025-05-09T00:29:52.985966041Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:29:53.020206 systemd[1]: Started cri-containerd-98bac03dcecb2bd5d2f44be12ccc6bde7773c7c333b4924cf2f1d98efda7257f.scope - libcontainer container 98bac03dcecb2bd5d2f44be12ccc6bde7773c7c333b4924cf2f1d98efda7257f. May 9 00:29:53.064831 systemd-resolved[1340]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 9 00:29:53.099429 sshd[4216]: pam_unix(sshd:session): session closed for user core May 9 00:29:53.109277 systemd[1]: sshd@9-10.0.0.48:22-10.0.0.1:54974.service: Deactivated successfully. May 9 00:29:53.114856 systemd[1]: session-10.scope: Deactivated successfully. 
May 9 00:29:53.121346 systemd-logind[1463]: Session 10 logged out. Waiting for processes to exit. May 9 00:29:53.123283 systemd-logind[1463]: Removed session 10. May 9 00:29:53.126881 containerd[1475]: time="2025-05-09T00:29:53.126761047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54f6f995b9-66cpx,Uid:fcb70e53-0375-414e-8457-33f88270eeb6,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"98bac03dcecb2bd5d2f44be12ccc6bde7773c7c333b4924cf2f1d98efda7257f\"" May 9 00:29:53.176960 systemd-networkd[1412]: vxlan.calico: Link UP May 9 00:29:53.176971 systemd-networkd[1412]: vxlan.calico: Gained carrier May 9 00:29:53.334819 containerd[1475]: time="2025-05-09T00:29:53.334359599Z" level=info msg="StopPodSandbox for \"41762b40d8d344c60c8a644dcd6d495ff65c3ad2bd643d088969279b7d6bbec9\"" May 9 00:29:53.334819 containerd[1475]: time="2025-05-09T00:29:53.334723693Z" level=info msg="StopPodSandbox for \"d43be5324c6a254b98b87597587cd50ccb0abb0eb38ebf7e419b8973d5c9f9b3\"" May 9 00:29:53.364052 systemd-networkd[1412]: cali6f74f2174f0: Gained IPv6LL May 9 00:29:53.612459 kubelet[2506]: I0509 00:29:53.605973 2506 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 9 00:29:53.612459 kubelet[2506]: E0509 00:29:53.606526 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:29:53.842722 containerd[1475]: 2025-05-09 00:29:53.538 [INFO][4417] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d43be5324c6a254b98b87597587cd50ccb0abb0eb38ebf7e419b8973d5c9f9b3" May 9 00:29:53.842722 containerd[1475]: 2025-05-09 00:29:53.538 [INFO][4417] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d43be5324c6a254b98b87597587cd50ccb0abb0eb38ebf7e419b8973d5c9f9b3" iface="eth0" netns="/var/run/netns/cni-6e235d5a-2ebc-38f8-e9a5-14bfefa26b9f" May 9 00:29:53.842722 containerd[1475]: 2025-05-09 00:29:53.538 [INFO][4417] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d43be5324c6a254b98b87597587cd50ccb0abb0eb38ebf7e419b8973d5c9f9b3" iface="eth0" netns="/var/run/netns/cni-6e235d5a-2ebc-38f8-e9a5-14bfefa26b9f" May 9 00:29:53.842722 containerd[1475]: 2025-05-09 00:29:53.540 [INFO][4417] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d43be5324c6a254b98b87597587cd50ccb0abb0eb38ebf7e419b8973d5c9f9b3" iface="eth0" netns="/var/run/netns/cni-6e235d5a-2ebc-38f8-e9a5-14bfefa26b9f" May 9 00:29:53.842722 containerd[1475]: 2025-05-09 00:29:53.541 [INFO][4417] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d43be5324c6a254b98b87597587cd50ccb0abb0eb38ebf7e419b8973d5c9f9b3" May 9 00:29:53.842722 containerd[1475]: 2025-05-09 00:29:53.541 [INFO][4417] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d43be5324c6a254b98b87597587cd50ccb0abb0eb38ebf7e419b8973d5c9f9b3" May 9 00:29:53.842722 containerd[1475]: 2025-05-09 00:29:53.795 [INFO][4436] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d43be5324c6a254b98b87597587cd50ccb0abb0eb38ebf7e419b8973d5c9f9b3" HandleID="k8s-pod-network.d43be5324c6a254b98b87597587cd50ccb0abb0eb38ebf7e419b8973d5c9f9b3" Workload="localhost-k8s-coredns--6f6b679f8f--7z6wr-eth0" May 9 00:29:53.842722 containerd[1475]: 2025-05-09 00:29:53.797 [INFO][4436] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 9 00:29:53.842722 containerd[1475]: 2025-05-09 00:29:53.797 [INFO][4436] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 00:29:53.842722 containerd[1475]: 2025-05-09 00:29:53.812 [WARNING][4436] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d43be5324c6a254b98b87597587cd50ccb0abb0eb38ebf7e419b8973d5c9f9b3" HandleID="k8s-pod-network.d43be5324c6a254b98b87597587cd50ccb0abb0eb38ebf7e419b8973d5c9f9b3" Workload="localhost-k8s-coredns--6f6b679f8f--7z6wr-eth0" May 9 00:29:53.842722 containerd[1475]: 2025-05-09 00:29:53.812 [INFO][4436] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d43be5324c6a254b98b87597587cd50ccb0abb0eb38ebf7e419b8973d5c9f9b3" HandleID="k8s-pod-network.d43be5324c6a254b98b87597587cd50ccb0abb0eb38ebf7e419b8973d5c9f9b3" Workload="localhost-k8s-coredns--6f6b679f8f--7z6wr-eth0" May 9 00:29:53.842722 containerd[1475]: 2025-05-09 00:29:53.815 [INFO][4436] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 00:29:53.842722 containerd[1475]: 2025-05-09 00:29:53.819 [INFO][4417] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d43be5324c6a254b98b87597587cd50ccb0abb0eb38ebf7e419b8973d5c9f9b3" May 9 00:29:53.842310 systemd[1]: run-netns-cni\x2d6e235d5a\x2d2ebc\x2d38f8\x2de9a5\x2d14bfefa26b9f.mount: Deactivated successfully. May 9 00:29:53.851870 containerd[1475]: time="2025-05-09T00:29:53.851068258Z" level=info msg="TearDown network for sandbox \"d43be5324c6a254b98b87597587cd50ccb0abb0eb38ebf7e419b8973d5c9f9b3\" successfully" May 9 00:29:53.851870 containerd[1475]: time="2025-05-09T00:29:53.851136215Z" level=info msg="StopPodSandbox for \"d43be5324c6a254b98b87597587cd50ccb0abb0eb38ebf7e419b8973d5c9f9b3\" returns successfully" May 9 00:29:53.851987 kubelet[2506]: E0509 00:29:53.851802 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:29:53.866535 containerd[1475]: time="2025-05-09T00:29:53.855292864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-7z6wr,Uid:035e105a-b89a-4204-9406-d96aaeb0e048,Namespace:kube-system,Attempt:1,}" May 9 00:29:53.872122 containerd[1475]: 2025-05-09 00:29:53.414 [INFO][4407] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="41762b40d8d344c60c8a644dcd6d495ff65c3ad2bd643d088969279b7d6bbec9" May 9 00:29:53.872122 containerd[1475]: 2025-05-09 00:29:53.414 [INFO][4407] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="41762b40d8d344c60c8a644dcd6d495ff65c3ad2bd643d088969279b7d6bbec9" iface="eth0" netns="/var/run/netns/cni-f70d42e9-3eef-a065-7db8-085f4fbba700" May 9 00:29:53.872122 containerd[1475]: 2025-05-09 00:29:53.414 [INFO][4407] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="41762b40d8d344c60c8a644dcd6d495ff65c3ad2bd643d088969279b7d6bbec9" iface="eth0" netns="/var/run/netns/cni-f70d42e9-3eef-a065-7db8-085f4fbba700" May 9 00:29:53.872122 containerd[1475]: 2025-05-09 00:29:53.415 [INFO][4407] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="41762b40d8d344c60c8a644dcd6d495ff65c3ad2bd643d088969279b7d6bbec9" iface="eth0" netns="/var/run/netns/cni-f70d42e9-3eef-a065-7db8-085f4fbba700" May 9 00:29:53.872122 containerd[1475]: 2025-05-09 00:29:53.415 [INFO][4407] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="41762b40d8d344c60c8a644dcd6d495ff65c3ad2bd643d088969279b7d6bbec9" May 9 00:29:53.872122 containerd[1475]: 2025-05-09 00:29:53.415 [INFO][4407] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="41762b40d8d344c60c8a644dcd6d495ff65c3ad2bd643d088969279b7d6bbec9" May 9 00:29:53.872122 containerd[1475]: 2025-05-09 00:29:53.806 [INFO][4429] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="41762b40d8d344c60c8a644dcd6d495ff65c3ad2bd643d088969279b7d6bbec9" HandleID="k8s-pod-network.41762b40d8d344c60c8a644dcd6d495ff65c3ad2bd643d088969279b7d6bbec9" Workload="localhost-k8s-coredns--6f6b679f8f--7htnm-eth0" May 9 00:29:53.872122 containerd[1475]: 2025-05-09 00:29:53.807 [INFO][4429] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:29:53.872122 containerd[1475]: 2025-05-09 00:29:53.816 [INFO][4429] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 00:29:53.872122 containerd[1475]: 2025-05-09 00:29:53.834 [WARNING][4429] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="41762b40d8d344c60c8a644dcd6d495ff65c3ad2bd643d088969279b7d6bbec9" HandleID="k8s-pod-network.41762b40d8d344c60c8a644dcd6d495ff65c3ad2bd643d088969279b7d6bbec9" Workload="localhost-k8s-coredns--6f6b679f8f--7htnm-eth0" May 9 00:29:53.872122 containerd[1475]: 2025-05-09 00:29:53.834 [INFO][4429] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="41762b40d8d344c60c8a644dcd6d495ff65c3ad2bd643d088969279b7d6bbec9" HandleID="k8s-pod-network.41762b40d8d344c60c8a644dcd6d495ff65c3ad2bd643d088969279b7d6bbec9" Workload="localhost-k8s-coredns--6f6b679f8f--7htnm-eth0" May 9 00:29:53.872122 containerd[1475]: 2025-05-09 00:29:53.851 [INFO][4429] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 00:29:53.872122 containerd[1475]: 2025-05-09 00:29:53.866 [INFO][4407] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="41762b40d8d344c60c8a644dcd6d495ff65c3ad2bd643d088969279b7d6bbec9" May 9 00:29:53.872122 containerd[1475]: time="2025-05-09T00:29:53.871591914Z" level=info msg="TearDown network for sandbox \"41762b40d8d344c60c8a644dcd6d495ff65c3ad2bd643d088969279b7d6bbec9\" successfully" May 9 00:29:53.872122 containerd[1475]: time="2025-05-09T00:29:53.871626730Z" level=info msg="StopPodSandbox for \"41762b40d8d344c60c8a644dcd6d495ff65c3ad2bd643d088969279b7d6bbec9\" returns successfully" May 9 00:29:53.876339 systemd[1]: run-netns-cni\x2df70d42e9\x2d3eef\x2da065\x2d7db8\x2d085f4fbba700.mount: Deactivated successfully. 
May 9 00:29:53.881045 kubelet[2506]: E0509 00:29:53.875282 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:29:53.881156 containerd[1475]: time="2025-05-09T00:29:53.879248338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-7htnm,Uid:75434938-61cf-41bc-bc17-24399a2f1b29,Namespace:kube-system,Attempt:1,}" May 9 00:29:54.222005 systemd-networkd[1412]: caliee5bf67afd8: Link UP May 9 00:29:54.223823 systemd-networkd[1412]: caliee5bf67afd8: Gained carrier May 9 00:29:54.267760 containerd[1475]: 2025-05-09 00:29:54.026 [INFO][4482] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--7z6wr-eth0 coredns-6f6b679f8f- kube-system 035e105a-b89a-4204-9406-d96aaeb0e048 850 0 2025-05-09 00:29:21 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-7z6wr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliee5bf67afd8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="8aa06c5434252b666e03c04f920a56da2ed1147c3e09c1ec706c89cc220ff983" Namespace="kube-system" Pod="coredns-6f6b679f8f-7z6wr" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--7z6wr-" May 9 00:29:54.267760 containerd[1475]: 2025-05-09 00:29:54.026 [INFO][4482] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8aa06c5434252b666e03c04f920a56da2ed1147c3e09c1ec706c89cc220ff983" Namespace="kube-system" Pod="coredns-6f6b679f8f-7z6wr" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--7z6wr-eth0" May 9 00:29:54.267760 containerd[1475]: 2025-05-09 00:29:54.091 [INFO][4522] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8aa06c5434252b666e03c04f920a56da2ed1147c3e09c1ec706c89cc220ff983" HandleID="k8s-pod-network.8aa06c5434252b666e03c04f920a56da2ed1147c3e09c1ec706c89cc220ff983" Workload="localhost-k8s-coredns--6f6b679f8f--7z6wr-eth0" May 9 00:29:54.267760 containerd[1475]: 2025-05-09 00:29:54.113 [INFO][4522] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8aa06c5434252b666e03c04f920a56da2ed1147c3e09c1ec706c89cc220ff983" HandleID="k8s-pod-network.8aa06c5434252b666e03c04f920a56da2ed1147c3e09c1ec706c89cc220ff983" Workload="localhost-k8s-coredns--6f6b679f8f--7z6wr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000585710), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-7z6wr", "timestamp":"2025-05-09 00:29:54.091801702 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 9 00:29:54.267760 containerd[1475]: 2025-05-09 00:29:54.113 [INFO][4522] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:29:54.267760 containerd[1475]: 2025-05-09 00:29:54.114 [INFO][4522] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 9 00:29:54.267760 containerd[1475]: 2025-05-09 00:29:54.114 [INFO][4522] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 9 00:29:54.267760 containerd[1475]: 2025-05-09 00:29:54.122 [INFO][4522] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8aa06c5434252b666e03c04f920a56da2ed1147c3e09c1ec706c89cc220ff983" host="localhost" May 9 00:29:54.267760 containerd[1475]: 2025-05-09 00:29:54.134 [INFO][4522] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 9 00:29:54.267760 containerd[1475]: 2025-05-09 00:29:54.155 [INFO][4522] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 9 00:29:54.267760 containerd[1475]: 2025-05-09 00:29:54.161 [INFO][4522] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 9 00:29:54.267760 containerd[1475]: 2025-05-09 00:29:54.169 [INFO][4522] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 9 00:29:54.267760 containerd[1475]: 2025-05-09 00:29:54.170 [INFO][4522] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8aa06c5434252b666e03c04f920a56da2ed1147c3e09c1ec706c89cc220ff983" host="localhost" May 9 00:29:54.267760 containerd[1475]: 2025-05-09 00:29:54.174 [INFO][4522] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8aa06c5434252b666e03c04f920a56da2ed1147c3e09c1ec706c89cc220ff983 May 9 00:29:54.267760 containerd[1475]: 2025-05-09 00:29:54.185 [INFO][4522] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8aa06c5434252b666e03c04f920a56da2ed1147c3e09c1ec706c89cc220ff983" host="localhost" May 9 00:29:54.267760 containerd[1475]: 2025-05-09 00:29:54.202 [INFO][4522] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.8aa06c5434252b666e03c04f920a56da2ed1147c3e09c1ec706c89cc220ff983" host="localhost" May 9 00:29:54.267760 containerd[1475]: 2025-05-09 00:29:54.202 [INFO][4522] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.8aa06c5434252b666e03c04f920a56da2ed1147c3e09c1ec706c89cc220ff983" host="localhost" May 9 00:29:54.267760 containerd[1475]: 2025-05-09 00:29:54.202 [INFO][4522] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 9 00:29:54.267760 containerd[1475]: 2025-05-09 00:29:54.202 [INFO][4522] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="8aa06c5434252b666e03c04f920a56da2ed1147c3e09c1ec706c89cc220ff983" HandleID="k8s-pod-network.8aa06c5434252b666e03c04f920a56da2ed1147c3e09c1ec706c89cc220ff983" Workload="localhost-k8s-coredns--6f6b679f8f--7z6wr-eth0" May 9 00:29:54.268591 containerd[1475]: 2025-05-09 00:29:54.213 [INFO][4482] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8aa06c5434252b666e03c04f920a56da2ed1147c3e09c1ec706c89cc220ff983" Namespace="kube-system" Pod="coredns-6f6b679f8f-7z6wr" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--7z6wr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--7z6wr-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"035e105a-b89a-4204-9406-d96aaeb0e048", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 29, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-7z6wr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliee5bf67afd8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:29:54.268591 containerd[1475]: 2025-05-09 00:29:54.213 [INFO][4482] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="8aa06c5434252b666e03c04f920a56da2ed1147c3e09c1ec706c89cc220ff983" Namespace="kube-system" Pod="coredns-6f6b679f8f-7z6wr" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--7z6wr-eth0" May 9 00:29:54.268591 containerd[1475]: 2025-05-09 00:29:54.213 [INFO][4482] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliee5bf67afd8 ContainerID="8aa06c5434252b666e03c04f920a56da2ed1147c3e09c1ec706c89cc220ff983" Namespace="kube-system" Pod="coredns-6f6b679f8f-7z6wr" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--7z6wr-eth0" May 9 00:29:54.268591 containerd[1475]: 2025-05-09 00:29:54.221 [INFO][4482] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8aa06c5434252b666e03c04f920a56da2ed1147c3e09c1ec706c89cc220ff983" Namespace="kube-system" Pod="coredns-6f6b679f8f-7z6wr" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--7z6wr-eth0" May 9 00:29:54.268591 containerd[1475]: 2025-05-09 00:29:54.222 [INFO][4482] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8aa06c5434252b666e03c04f920a56da2ed1147c3e09c1ec706c89cc220ff983" Namespace="kube-system" Pod="coredns-6f6b679f8f-7z6wr" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--7z6wr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--7z6wr-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"035e105a-b89a-4204-9406-d96aaeb0e048", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 29, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8aa06c5434252b666e03c04f920a56da2ed1147c3e09c1ec706c89cc220ff983", Pod:"coredns-6f6b679f8f-7z6wr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliee5bf67afd8", MAC:"12:1e:c8:02:42:71", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:29:54.268591 containerd[1475]: 2025-05-09 00:29:54.260 [INFO][4482] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8aa06c5434252b666e03c04f920a56da2ed1147c3e09c1ec706c89cc220ff983" Namespace="kube-system" Pod="coredns-6f6b679f8f-7z6wr" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--7z6wr-eth0" May 9 00:29:54.313604 containerd[1475]: time="2025-05-09T00:29:54.313451702Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:29:54.313604 containerd[1475]: time="2025-05-09T00:29:54.313558733Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:29:54.313604 containerd[1475]: time="2025-05-09T00:29:54.313579282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:29:54.314993 containerd[1475]: time="2025-05-09T00:29:54.314864396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:29:54.347625 systemd[1]: Started cri-containerd-8aa06c5434252b666e03c04f920a56da2ed1147c3e09c1ec706c89cc220ff983.scope - libcontainer container 8aa06c5434252b666e03c04f920a56da2ed1147c3e09c1ec706c89cc220ff983. 
May 9 00:29:54.367813 systemd-networkd[1412]: calid140413bf93: Link UP May 9 00:29:54.370230 systemd-networkd[1412]: calid140413bf93: Gained carrier May 9 00:29:54.385073 systemd-resolved[1340]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 9 00:29:54.403606 containerd[1475]: 2025-05-09 00:29:54.054 [INFO][4495] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--7htnm-eth0 coredns-6f6b679f8f- kube-system 75434938-61cf-41bc-bc17-24399a2f1b29 849 0 2025-05-09 00:29:21 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-7htnm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid140413bf93 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="c60caf5e913e818d19c3bcf95dcd90aec084935d7d5bdab85deabeb169f3817e" Namespace="kube-system" Pod="coredns-6f6b679f8f-7htnm" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--7htnm-" May 9 00:29:54.403606 containerd[1475]: 2025-05-09 00:29:54.054 [INFO][4495] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c60caf5e913e818d19c3bcf95dcd90aec084935d7d5bdab85deabeb169f3817e" Namespace="kube-system" Pod="coredns-6f6b679f8f-7htnm" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--7htnm-eth0" May 9 00:29:54.403606 containerd[1475]: 2025-05-09 00:29:54.139 [INFO][4536] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c60caf5e913e818d19c3bcf95dcd90aec084935d7d5bdab85deabeb169f3817e" HandleID="k8s-pod-network.c60caf5e913e818d19c3bcf95dcd90aec084935d7d5bdab85deabeb169f3817e" Workload="localhost-k8s-coredns--6f6b679f8f--7htnm-eth0" May 9 00:29:54.403606 containerd[1475]: 2025-05-09 00:29:54.160 [INFO][4536] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c60caf5e913e818d19c3bcf95dcd90aec084935d7d5bdab85deabeb169f3817e" HandleID="k8s-pod-network.c60caf5e913e818d19c3bcf95dcd90aec084935d7d5bdab85deabeb169f3817e" Workload="localhost-k8s-coredns--6f6b679f8f--7htnm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000264e00), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-7htnm", "timestamp":"2025-05-09 00:29:54.139789896 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 9 00:29:54.403606 containerd[1475]: 2025-05-09 00:29:54.160 [INFO][4536] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:29:54.403606 containerd[1475]: 2025-05-09 00:29:54.203 [INFO][4536] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 9 00:29:54.403606 containerd[1475]: 2025-05-09 00:29:54.203 [INFO][4536] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 9 00:29:54.403606 containerd[1475]: 2025-05-09 00:29:54.223 [INFO][4536] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c60caf5e913e818d19c3bcf95dcd90aec084935d7d5bdab85deabeb169f3817e" host="localhost" May 9 00:29:54.403606 containerd[1475]: 2025-05-09 00:29:54.258 [INFO][4536] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 9 00:29:54.403606 containerd[1475]: 2025-05-09 00:29:54.273 [INFO][4536] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 9 00:29:54.403606 containerd[1475]: 2025-05-09 00:29:54.280 [INFO][4536] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 9 00:29:54.403606 containerd[1475]: 2025-05-09 00:29:54.289 [INFO][4536] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 9 00:29:54.403606 containerd[1475]: 2025-05-09 00:29:54.296 [INFO][4536] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c60caf5e913e818d19c3bcf95dcd90aec084935d7d5bdab85deabeb169f3817e" host="localhost" May 9 00:29:54.403606 containerd[1475]: 2025-05-09 00:29:54.304 [INFO][4536] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c60caf5e913e818d19c3bcf95dcd90aec084935d7d5bdab85deabeb169f3817e May 9 00:29:54.403606 containerd[1475]: 2025-05-09 00:29:54.327 [INFO][4536] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c60caf5e913e818d19c3bcf95dcd90aec084935d7d5bdab85deabeb169f3817e" host="localhost" May 9 00:29:54.403606 containerd[1475]: 2025-05-09 00:29:54.349 [INFO][4536] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.c60caf5e913e818d19c3bcf95dcd90aec084935d7d5bdab85deabeb169f3817e" host="localhost" May 9 00:29:54.403606 containerd[1475]: 2025-05-09 00:29:54.349 [INFO][4536] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.c60caf5e913e818d19c3bcf95dcd90aec084935d7d5bdab85deabeb169f3817e" host="localhost" May 9 00:29:54.403606 containerd[1475]: 2025-05-09 00:29:54.349 [INFO][4536] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 9 00:29:54.403606 containerd[1475]: 2025-05-09 00:29:54.349 [INFO][4536] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="c60caf5e913e818d19c3bcf95dcd90aec084935d7d5bdab85deabeb169f3817e" HandleID="k8s-pod-network.c60caf5e913e818d19c3bcf95dcd90aec084935d7d5bdab85deabeb169f3817e" Workload="localhost-k8s-coredns--6f6b679f8f--7htnm-eth0" May 9 00:29:54.404445 containerd[1475]: 2025-05-09 00:29:54.360 [INFO][4495] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c60caf5e913e818d19c3bcf95dcd90aec084935d7d5bdab85deabeb169f3817e" Namespace="kube-system" Pod="coredns-6f6b679f8f-7htnm" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--7htnm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--7htnm-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"75434938-61cf-41bc-bc17-24399a2f1b29", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 29, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-7htnm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid140413bf93", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:29:54.404445 containerd[1475]: 2025-05-09 00:29:54.360 [INFO][4495] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="c60caf5e913e818d19c3bcf95dcd90aec084935d7d5bdab85deabeb169f3817e" Namespace="kube-system" Pod="coredns-6f6b679f8f-7htnm" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--7htnm-eth0" May 9 00:29:54.404445 containerd[1475]: 2025-05-09 00:29:54.360 [INFO][4495] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid140413bf93 ContainerID="c60caf5e913e818d19c3bcf95dcd90aec084935d7d5bdab85deabeb169f3817e" Namespace="kube-system" Pod="coredns-6f6b679f8f-7htnm" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--7htnm-eth0" May 9 00:29:54.404445 containerd[1475]: 2025-05-09 00:29:54.371 [INFO][4495] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c60caf5e913e818d19c3bcf95dcd90aec084935d7d5bdab85deabeb169f3817e" Namespace="kube-system" Pod="coredns-6f6b679f8f-7htnm" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--7htnm-eth0" May 9 00:29:54.404445 containerd[1475]: 2025-05-09 00:29:54.372 [INFO][4495] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c60caf5e913e818d19c3bcf95dcd90aec084935d7d5bdab85deabeb169f3817e" Namespace="kube-system" Pod="coredns-6f6b679f8f-7htnm" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--7htnm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--7htnm-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"75434938-61cf-41bc-bc17-24399a2f1b29", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 29, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c60caf5e913e818d19c3bcf95dcd90aec084935d7d5bdab85deabeb169f3817e", Pod:"coredns-6f6b679f8f-7htnm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid140413bf93", MAC:"3e:b8:f4:18:0b:c1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:29:54.404445 containerd[1475]: 2025-05-09 00:29:54.397 [INFO][4495] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c60caf5e913e818d19c3bcf95dcd90aec084935d7d5bdab85deabeb169f3817e" Namespace="kube-system" Pod="coredns-6f6b679f8f-7htnm" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--7htnm-eth0" May 9 00:29:54.434590 containerd[1475]: time="2025-05-09T00:29:54.433847941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-7z6wr,Uid:035e105a-b89a-4204-9406-d96aaeb0e048,Namespace:kube-system,Attempt:1,} returns sandbox id \"8aa06c5434252b666e03c04f920a56da2ed1147c3e09c1ec706c89cc220ff983\"" May 9 00:29:54.436619 kubelet[2506]: E0509 00:29:54.436311 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:29:54.455129 containerd[1475]: time="2025-05-09T00:29:54.455057672Z" level=info msg="CreateContainer within sandbox \"8aa06c5434252b666e03c04f920a56da2ed1147c3e09c1ec706c89cc220ff983\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 9 00:29:54.471445 containerd[1475]: time="2025-05-09T00:29:54.470723630Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:29:54.471591 containerd[1475]: time="2025-05-09T00:29:54.471436790Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:29:54.471591 containerd[1475]: time="2025-05-09T00:29:54.471501501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:29:54.471794 containerd[1475]: time="2025-05-09T00:29:54.471687591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:29:54.503652 systemd[1]: Started cri-containerd-c60caf5e913e818d19c3bcf95dcd90aec084935d7d5bdab85deabeb169f3817e.scope - libcontainer container c60caf5e913e818d19c3bcf95dcd90aec084935d7d5bdab85deabeb169f3817e. May 9 00:29:54.530821 containerd[1475]: time="2025-05-09T00:29:54.527273016Z" level=info msg="CreateContainer within sandbox \"8aa06c5434252b666e03c04f920a56da2ed1147c3e09c1ec706c89cc220ff983\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"61426ea9716d0c388cde7f0dcd4b6c49ae8f8a132b38463d8eeeb8c6c6577026\"" May 9 00:29:54.531071 systemd-resolved[1340]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 9 00:29:54.532849 containerd[1475]: time="2025-05-09T00:29:54.532519020Z" level=info msg="StartContainer for \"61426ea9716d0c388cde7f0dcd4b6c49ae8f8a132b38463d8eeeb8c6c6577026\"" May 9 00:29:54.579280 systemd[1]: Started cri-containerd-61426ea9716d0c388cde7f0dcd4b6c49ae8f8a132b38463d8eeeb8c6c6577026.scope - libcontainer container 61426ea9716d0c388cde7f0dcd4b6c49ae8f8a132b38463d8eeeb8c6c6577026. May 9 00:29:54.587147 containerd[1475]: time="2025-05-09T00:29:54.587024497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-7htnm,Uid:75434938-61cf-41bc-bc17-24399a2f1b29,Namespace:kube-system,Attempt:1,} returns sandbox id \"c60caf5e913e818d19c3bcf95dcd90aec084935d7d5bdab85deabeb169f3817e\"" May 9 00:29:54.588555 kubelet[2506]: E0509 00:29:54.588334 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:29:54.592458 containerd[1475]: time="2025-05-09T00:29:54.592407799Z" level=info msg="CreateContainer within sandbox \"c60caf5e913e818d19c3bcf95dcd90aec084935d7d5bdab85deabeb169f3817e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 9 00:29:54.624605 containerd[1475]: time="2025-05-09T00:29:54.624517896Z" level=info msg="CreateContainer within sandbox \"c60caf5e913e818d19c3bcf95dcd90aec084935d7d5bdab85deabeb169f3817e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"59a80105b3a49d1ea36b1013352556fb5221f11de8006f8d76532b9d3256142f\"" May 9 00:29:54.626207 containerd[1475]: time="2025-05-09T00:29:54.626034405Z" level=info msg="StartContainer for \"59a80105b3a49d1ea36b1013352556fb5221f11de8006f8d76532b9d3256142f\"" May 9 00:29:54.644818 systemd-networkd[1412]: vxlan.calico: Gained IPv6LL May 9 00:29:54.653427 containerd[1475]: time="2025-05-09T00:29:54.649119278Z" level=info msg="StartContainer for \"61426ea9716d0c388cde7f0dcd4b6c49ae8f8a132b38463d8eeeb8c6c6577026\" returns successfully" May 9 00:29:54.710125 systemd[1]: Started cri-containerd-59a80105b3a49d1ea36b1013352556fb5221f11de8006f8d76532b9d3256142f.scope - libcontainer container 
59a80105b3a49d1ea36b1013352556fb5221f11de8006f8d76532b9d3256142f. May 9 00:29:54.773301 containerd[1475]: time="2025-05-09T00:29:54.771581118Z" level=info msg="StartContainer for \"59a80105b3a49d1ea36b1013352556fb5221f11de8006f8d76532b9d3256142f\" returns successfully" May 9 00:29:54.901589 systemd-networkd[1412]: cali07b00bf08c9: Gained IPv6LL May 9 00:29:55.081589 containerd[1475]: time="2025-05-09T00:29:55.071035467Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:29:55.257684 containerd[1475]: time="2025-05-09T00:29:55.257594919Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=34789138" May 9 00:29:55.259394 containerd[1475]: time="2025-05-09T00:29:55.259303178Z" level=info msg="ImageCreate event name:\"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:29:55.280686 containerd[1475]: time="2025-05-09T00:29:55.280598998Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:29:55.281508 containerd[1475]: time="2025-05-09T00:29:55.281451469Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"36281728\" in 3.498337823s" May 9 00:29:55.281549 containerd[1475]: time="2025-05-09T00:29:55.281509628Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\"" May 9 00:29:55.282931 containerd[1475]: time="2025-05-09T00:29:55.282868971Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 9 00:29:55.290837 containerd[1475]: time="2025-05-09T00:29:55.290799226Z" level=info msg="CreateContainer within sandbox \"ad5545a85ae493cb54e72bf70c81e7048d9d0af74086ed81c8b04cf473a4bc7f\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 9 00:29:55.332909 containerd[1475]: time="2025-05-09T00:29:55.332554504Z" level=info msg="StopPodSandbox for \"f36738657f9783f3743eacfb4f951703a700cdb73e45167d1a25aa1453a035fe\"" May 9 00:29:55.412266 systemd-networkd[1412]: caliee5bf67afd8: Gained IPv6LL May 9 00:29:55.488505 kubelet[2506]: E0509 00:29:55.488408 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:29:55.494364 kubelet[2506]: E0509 00:29:55.494304 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:29:55.496822 containerd[1475]: time="2025-05-09T00:29:55.495580373Z" level=info msg="CreateContainer within sandbox \"ad5545a85ae493cb54e72bf70c81e7048d9d0af74086ed81c8b04cf473a4bc7f\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id 
\"f6b9f9ed757a589ba38f4e674bbbbfa6b6e78abc199e82da3ecabbc0d2063879\"" May 9 00:29:55.497918 containerd[1475]: time="2025-05-09T00:29:55.496872710Z" level=info msg="StartContainer for \"f6b9f9ed757a589ba38f4e674bbbbfa6b6e78abc199e82da3ecabbc0d2063879\"" May 9 00:29:55.540146 kubelet[2506]: I0509 00:29:55.540066 2506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-7z6wr" podStartSLOduration=34.540038288 podStartE2EDuration="34.540038288s" podCreationTimestamp="2025-05-09 00:29:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:29:55.518265242 +0000 UTC m=+41.273928519" watchObservedRunningTime="2025-05-09 00:29:55.540038288 +0000 UTC m=+41.295701554" May 9 00:29:55.540376 kubelet[2506]: I0509 00:29:55.540225 2506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-7htnm" podStartSLOduration=34.54022067 podStartE2EDuration="34.54022067s" podCreationTimestamp="2025-05-09 00:29:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:29:55.539729749 +0000 UTC m=+41.295393015" watchObservedRunningTime="2025-05-09 00:29:55.54022067 +0000 UTC m=+41.295883937" May 9 00:29:55.551162 systemd[1]: Started cri-containerd-f6b9f9ed757a589ba38f4e674bbbbfa6b6e78abc199e82da3ecabbc0d2063879.scope - libcontainer container f6b9f9ed757a589ba38f4e674bbbbfa6b6e78abc199e82da3ecabbc0d2063879. May 9 00:29:55.557037 containerd[1475]: 2025-05-09 00:29:55.479 [INFO][4783] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f36738657f9783f3743eacfb4f951703a700cdb73e45167d1a25aa1453a035fe" May 9 00:29:55.557037 containerd[1475]: 2025-05-09 00:29:55.479 [INFO][4783] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f36738657f9783f3743eacfb4f951703a700cdb73e45167d1a25aa1453a035fe" iface="eth0" netns="/var/run/netns/cni-b68c9733-bffa-62bb-ff7a-3d9ef83e0bf4" May 9 00:29:55.557037 containerd[1475]: 2025-05-09 00:29:55.480 [INFO][4783] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f36738657f9783f3743eacfb4f951703a700cdb73e45167d1a25aa1453a035fe" iface="eth0" netns="/var/run/netns/cni-b68c9733-bffa-62bb-ff7a-3d9ef83e0bf4" May 9 00:29:55.557037 containerd[1475]: 2025-05-09 00:29:55.480 [INFO][4783] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f36738657f9783f3743eacfb4f951703a700cdb73e45167d1a25aa1453a035fe" iface="eth0" netns="/var/run/netns/cni-b68c9733-bffa-62bb-ff7a-3d9ef83e0bf4" May 9 00:29:55.557037 containerd[1475]: 2025-05-09 00:29:55.480 [INFO][4783] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f36738657f9783f3743eacfb4f951703a700cdb73e45167d1a25aa1453a035fe" May 9 00:29:55.557037 containerd[1475]: 2025-05-09 00:29:55.480 [INFO][4783] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f36738657f9783f3743eacfb4f951703a700cdb73e45167d1a25aa1453a035fe" May 9 00:29:55.557037 containerd[1475]: 2025-05-09 00:29:55.527 [INFO][4792] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f36738657f9783f3743eacfb4f951703a700cdb73e45167d1a25aa1453a035fe" HandleID="k8s-pod-network.f36738657f9783f3743eacfb4f951703a700cdb73e45167d1a25aa1453a035fe" Workload="localhost-k8s-csi--node--driver--vj68l-eth0" May 9 00:29:55.557037 containerd[1475]: 2025-05-09 00:29:55.527 [INFO][4792] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:29:55.557037 containerd[1475]: 2025-05-09 00:29:55.527 [INFO][4792] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 00:29:55.557037 containerd[1475]: 2025-05-09 00:29:55.543 [WARNING][4792] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f36738657f9783f3743eacfb4f951703a700cdb73e45167d1a25aa1453a035fe" HandleID="k8s-pod-network.f36738657f9783f3743eacfb4f951703a700cdb73e45167d1a25aa1453a035fe" Workload="localhost-k8s-csi--node--driver--vj68l-eth0" May 9 00:29:55.557037 containerd[1475]: 2025-05-09 00:29:55.543 [INFO][4792] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f36738657f9783f3743eacfb4f951703a700cdb73e45167d1a25aa1453a035fe" HandleID="k8s-pod-network.f36738657f9783f3743eacfb4f951703a700cdb73e45167d1a25aa1453a035fe" Workload="localhost-k8s-csi--node--driver--vj68l-eth0" May 9 00:29:55.557037 containerd[1475]: 2025-05-09 00:29:55.546 [INFO][4792] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 00:29:55.557037 containerd[1475]: 2025-05-09 00:29:55.550 [INFO][4783] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="f36738657f9783f3743eacfb4f951703a700cdb73e45167d1a25aa1453a035fe" May 9 00:29:55.558217 containerd[1475]: time="2025-05-09T00:29:55.558023449Z" level=info msg="TearDown network for sandbox \"f36738657f9783f3743eacfb4f951703a700cdb73e45167d1a25aa1453a035fe\" successfully" May 9 00:29:55.558217 containerd[1475]: time="2025-05-09T00:29:55.558096246Z" level=info msg="StopPodSandbox for \"f36738657f9783f3743eacfb4f951703a700cdb73e45167d1a25aa1453a035fe\" returns successfully" May 9 00:29:55.559864 containerd[1475]: time="2025-05-09T00:29:55.559389104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vj68l,Uid:7888b668-be66-4298-b746-119a722815e9,Namespace:calico-system,Attempt:1,}" May 9 00:29:55.616514 containerd[1475]: time="2025-05-09T00:29:55.616346486Z" level=info msg="StartContainer for \"f6b9f9ed757a589ba38f4e674bbbbfa6b6e78abc199e82da3ecabbc0d2063879\" returns successfully" May 9 00:29:55.797494 systemd-networkd[1412]: calibc0d8fb0843: Link UP May 9 00:29:55.797937 systemd-networkd[1412]: calibc0d8fb0843: Gained carrier May 9 00:29:55.813215 containerd[1475]: 2025-05-09 00:29:55.626 [INFO][4828] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--vj68l-eth0 csi-node-driver- calico-system 7888b668-be66-4298-b746-119a722815e9 878 0 2025-05-09 00:29:26 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5bcd8f69 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-vj68l eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calibc0d8fb0843 [] []}} ContainerID="eb9c6934b3042c3f0b0b60ca128de23f87546dc3fa3896c789913f20bdc3a4b7" Namespace="calico-system" Pod="csi-node-driver-vj68l" WorkloadEndpoint="localhost-k8s-csi--node--driver--vj68l-" May 9 00:29:55.813215 containerd[1475]: 2025-05-09 00:29:55.626 [INFO][4828] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="eb9c6934b3042c3f0b0b60ca128de23f87546dc3fa3896c789913f20bdc3a4b7" Namespace="calico-system" Pod="csi-node-driver-vj68l" WorkloadEndpoint="localhost-k8s-csi--node--driver--vj68l-eth0" May 9 00:29:55.813215 containerd[1475]: 2025-05-09 00:29:55.658 [INFO][4855] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eb9c6934b3042c3f0b0b60ca128de23f87546dc3fa3896c789913f20bdc3a4b7" HandleID="k8s-pod-network.eb9c6934b3042c3f0b0b60ca128de23f87546dc3fa3896c789913f20bdc3a4b7" Workload="localhost-k8s-csi--node--driver--vj68l-eth0" May 9 00:29:55.813215 containerd[1475]: 2025-05-09 00:29:55.670 [INFO][4855] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="eb9c6934b3042c3f0b0b60ca128de23f87546dc3fa3896c789913f20bdc3a4b7" HandleID="k8s-pod-network.eb9c6934b3042c3f0b0b60ca128de23f87546dc3fa3896c789913f20bdc3a4b7" Workload="localhost-k8s-csi--node--driver--vj68l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000444f00), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-vj68l", "timestamp":"2025-05-09 00:29:55.658530651 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 9 
00:29:55.813215 containerd[1475]: 2025-05-09 00:29:55.670 [INFO][4855] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:29:55.813215 containerd[1475]: 2025-05-09 00:29:55.670 [INFO][4855] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 00:29:55.813215 containerd[1475]: 2025-05-09 00:29:55.670 [INFO][4855] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 9 00:29:55.813215 containerd[1475]: 2025-05-09 00:29:55.672 [INFO][4855] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.eb9c6934b3042c3f0b0b60ca128de23f87546dc3fa3896c789913f20bdc3a4b7" host="localhost" May 9 00:29:55.813215 containerd[1475]: 2025-05-09 00:29:55.766 [INFO][4855] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 9 00:29:55.813215 containerd[1475]: 2025-05-09 00:29:55.771 [INFO][4855] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 9 00:29:55.813215 containerd[1475]: 2025-05-09 00:29:55.773 [INFO][4855] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 9 00:29:55.813215 containerd[1475]: 2025-05-09 00:29:55.775 [INFO][4855] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 9 00:29:55.813215 containerd[1475]: 2025-05-09 00:29:55.775 [INFO][4855] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.eb9c6934b3042c3f0b0b60ca128de23f87546dc3fa3896c789913f20bdc3a4b7" host="localhost" May 9 00:29:55.813215 containerd[1475]: 2025-05-09 00:29:55.776 [INFO][4855] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.eb9c6934b3042c3f0b0b60ca128de23f87546dc3fa3896c789913f20bdc3a4b7 May 9 00:29:55.813215 containerd[1475]: 2025-05-09 00:29:55.781 [INFO][4855] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.eb9c6934b3042c3f0b0b60ca128de23f87546dc3fa3896c789913f20bdc3a4b7" host="localhost" May 9 00:29:55.813215 containerd[1475]: 2025-05-09 00:29:55.787 [INFO][4855] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.eb9c6934b3042c3f0b0b60ca128de23f87546dc3fa3896c789913f20bdc3a4b7" host="localhost" May 9 00:29:55.813215 containerd[1475]: 2025-05-09 00:29:55.787 [INFO][4855] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.eb9c6934b3042c3f0b0b60ca128de23f87546dc3fa3896c789913f20bdc3a4b7" host="localhost" May 9 00:29:55.813215 containerd[1475]: 2025-05-09 00:29:55.787 [INFO][4855] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
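The IPAM trace above shows Calico confirming this host's affinity for the block 192.168.88.128/26 and then claiming 192.168.88.133 from it under the host-wide lock. A minimal sketch of that arithmetic using Python's ipaddress module; the `allocated` set is hypothetical bookkeeping standing in for Calico's datastore, not its real implementation:

```python
import ipaddress

# The affine block from the trace: a /26 holds 64 addresses per host.
block = ipaddress.ip_network("192.168.88.128/26")

# Hypothetical stand-in for Calico's datastore: addresses this host has
# already handed out (.129-.132 went to earlier pods in this log).
allocated = {ipaddress.ip_address(f"192.168.88.{n}") for n in range(129, 133)}

# Claim the next free host address, as ipam.go does after taking the lock.
claimed = next(ip for ip in block.hosts() if ip not in allocated)
print(claimed)  # 192.168.88.133 -- the address assigned to csi-node-driver-vj68l
```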
May 9 00:29:55.813215 containerd[1475]: 2025-05-09 00:29:55.787 [INFO][4855] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="eb9c6934b3042c3f0b0b60ca128de23f87546dc3fa3896c789913f20bdc3a4b7" HandleID="k8s-pod-network.eb9c6934b3042c3f0b0b60ca128de23f87546dc3fa3896c789913f20bdc3a4b7" Workload="localhost-k8s-csi--node--driver--vj68l-eth0" May 9 00:29:55.813817 containerd[1475]: 2025-05-09 00:29:55.793 [INFO][4828] cni-plugin/k8s.go 386: Populated endpoint ContainerID="eb9c6934b3042c3f0b0b60ca128de23f87546dc3fa3896c789913f20bdc3a4b7" Namespace="calico-system" Pod="csi-node-driver-vj68l" WorkloadEndpoint="localhost-k8s-csi--node--driver--vj68l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--vj68l-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7888b668-be66-4298-b746-119a722815e9", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 29, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-vj68l", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibc0d8fb0843", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:29:55.813817 containerd[1475]: 2025-05-09 00:29:55.794 [INFO][4828] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="eb9c6934b3042c3f0b0b60ca128de23f87546dc3fa3896c789913f20bdc3a4b7" Namespace="calico-system" Pod="csi-node-driver-vj68l" WorkloadEndpoint="localhost-k8s-csi--node--driver--vj68l-eth0" May 9 00:29:55.813817 containerd[1475]: 2025-05-09 00:29:55.794 [INFO][4828] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibc0d8fb0843 ContainerID="eb9c6934b3042c3f0b0b60ca128de23f87546dc3fa3896c789913f20bdc3a4b7" Namespace="calico-system" Pod="csi-node-driver-vj68l" WorkloadEndpoint="localhost-k8s-csi--node--driver--vj68l-eth0" May 9 00:29:55.813817 containerd[1475]: 2025-05-09 00:29:55.798 [INFO][4828] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eb9c6934b3042c3f0b0b60ca128de23f87546dc3fa3896c789913f20bdc3a4b7" Namespace="calico-system" Pod="csi-node-driver-vj68l" WorkloadEndpoint="localhost-k8s-csi--node--driver--vj68l-eth0" May 9 00:29:55.813817 containerd[1475]: 2025-05-09 00:29:55.798 [INFO][4828] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="eb9c6934b3042c3f0b0b60ca128de23f87546dc3fa3896c789913f20bdc3a4b7" Namespace="calico-system" Pod="csi-node-driver-vj68l" WorkloadEndpoint="localhost-k8s-csi--node--driver--vj68l-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--vj68l-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7888b668-be66-4298-b746-119a722815e9", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 29, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"eb9c6934b3042c3f0b0b60ca128de23f87546dc3fa3896c789913f20bdc3a4b7", Pod:"csi-node-driver-vj68l", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibc0d8fb0843", MAC:"1e:9b:28:9a:39:74", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:29:55.813817 containerd[1475]: 2025-05-09 00:29:55.808 [INFO][4828] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="eb9c6934b3042c3f0b0b60ca128de23f87546dc3fa3896c789913f20bdc3a4b7" Namespace="calico-system" Pod="csi-node-driver-vj68l" WorkloadEndpoint="localhost-k8s-csi--node--driver--vj68l-eth0" May 9 00:29:55.837385 containerd[1475]: time="2025-05-09T00:29:55.837075599Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:29:55.837385 containerd[1475]: time="2025-05-09T00:29:55.837144027Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:29:55.837385 containerd[1475]: time="2025-05-09T00:29:55.837159366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:29:55.837385 containerd[1475]: time="2025-05-09T00:29:55.837330076Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:29:55.842021 systemd[1]: run-netns-cni\x2db68c9733\x2dbffa\x2d62bb\x2dff7a\x2d3d9ef83e0bf4.mount: Deactivated successfully. May 9 00:29:55.857587 systemd[1]: run-containerd-runc-k8s.io-eb9c6934b3042c3f0b0b60ca128de23f87546dc3fa3896c789913f20bdc3a4b7-runc.5iPvps.mount: Deactivated successfully. May 9 00:29:55.865038 systemd[1]: Started cri-containerd-eb9c6934b3042c3f0b0b60ca128de23f87546dc3fa3896c789913f20bdc3a4b7.scope - libcontainer container eb9c6934b3042c3f0b0b60ca128de23f87546dc3fa3896c789913f20bdc3a4b7. 
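The endpoint written above carries the host-side interface name calibc0d8fb0843 and MAC 1e:9b:28:9a:39:74. Host veth names must fit Linux's 15-character IFNAMSIZ budget, so CNI plugins derive them deterministically from the workload identity rather than from the pod name. A hypothetical sketch of one such derivation, not necessarily Calico's exact hash or inputs:

```python
import hashlib

IFNAMSIZ_USABLE = 15  # Linux interface names: 16 bytes including the NUL

def host_veth_name(namespace: str, pod: str, prefix: str = "cali") -> str:
    """Hash the workload identity and keep enough hex digits to fill the
    15-character budget, the same shape as "calibc0d8fb0843" above.
    Illustrative only; Calico's real inputs and hash may differ."""
    digest = hashlib.sha1(f"{namespace}/{pod}".encode()).hexdigest()
    return prefix + digest[: IFNAMSIZ_USABLE - len(prefix)]

print(host_veth_name("calico-system", "csi-node-driver-vj68l"))  # cali + 11 hex chars
```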
May 9 00:29:55.877360 systemd-resolved[1340]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 9 00:29:55.889958 containerd[1475]: time="2025-05-09T00:29:55.889919605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vj68l,Uid:7888b668-be66-4298-b746-119a722815e9,Namespace:calico-system,Attempt:1,} returns sandbox id \"eb9c6934b3042c3f0b0b60ca128de23f87546dc3fa3896c789913f20bdc3a4b7\"" May 9 00:29:55.925054 systemd-networkd[1412]: calid140413bf93: Gained IPv6LL May 9 00:29:56.335215 containerd[1475]: time="2025-05-09T00:29:56.335056414Z" level=info msg="StopPodSandbox for \"dc2d36004e5f5a123829046dd60eb3b3c7f628719b87a984e0b9e912fbb74086\"" May 9 00:29:56.506260 kubelet[2506]: E0509 00:29:56.506214 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:29:56.510063 kubelet[2506]: E0509 00:29:56.510012 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:29:56.538319 kubelet[2506]: I0509 00:29:56.537845 2506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-78559bdf4b-jb9rh" podStartSLOduration=27.037855751 podStartE2EDuration="30.537816153s" podCreationTimestamp="2025-05-09 00:29:26 +0000 UTC" firstStartedPulling="2025-05-09 00:29:51.782623875 +0000 UTC m=+37.538287141" lastFinishedPulling="2025-05-09 00:29:55.282584277 +0000 UTC m=+41.038247543" observedRunningTime="2025-05-09 00:29:56.537405001 +0000 UTC m=+42.293068287" watchObservedRunningTime="2025-05-09 00:29:56.537816153 +0000 UTC m=+42.293479419" May 9 00:29:56.558680 containerd[1475]: 2025-05-09 00:29:56.472 [INFO][4936] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dc2d36004e5f5a123829046dd60eb3b3c7f628719b87a984e0b9e912fbb74086" May 9 00:29:56.558680 containerd[1475]: 2025-05-09 00:29:56.472 [INFO][4936] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dc2d36004e5f5a123829046dd60eb3b3c7f628719b87a984e0b9e912fbb74086" iface="eth0" netns="/var/run/netns/cni-ad8f8fa7-47b7-2a7b-083e-43e6c1d7a3c7" May 9 00:29:56.558680 containerd[1475]: 2025-05-09 00:29:56.473 [INFO][4936] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dc2d36004e5f5a123829046dd60eb3b3c7f628719b87a984e0b9e912fbb74086" iface="eth0" netns="/var/run/netns/cni-ad8f8fa7-47b7-2a7b-083e-43e6c1d7a3c7" May 9 00:29:56.558680 containerd[1475]: 2025-05-09 00:29:56.474 [INFO][4936] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="dc2d36004e5f5a123829046dd60eb3b3c7f628719b87a984e0b9e912fbb74086" iface="eth0" netns="/var/run/netns/cni-ad8f8fa7-47b7-2a7b-083e-43e6c1d7a3c7" May 9 00:29:56.558680 containerd[1475]: 2025-05-09 00:29:56.474 [INFO][4936] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dc2d36004e5f5a123829046dd60eb3b3c7f628719b87a984e0b9e912fbb74086" May 9 00:29:56.558680 containerd[1475]: 2025-05-09 00:29:56.474 [INFO][4936] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dc2d36004e5f5a123829046dd60eb3b3c7f628719b87a984e0b9e912fbb74086" May 9 00:29:56.558680 containerd[1475]: 2025-05-09 00:29:56.521 [INFO][4944] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dc2d36004e5f5a123829046dd60eb3b3c7f628719b87a984e0b9e912fbb74086" HandleID="k8s-pod-network.dc2d36004e5f5a123829046dd60eb3b3c7f628719b87a984e0b9e912fbb74086" Workload="localhost-k8s-calico--apiserver--54f6f995b9--qzmd8-eth0" May 9 00:29:56.558680 containerd[1475]: 2025-05-09 00:29:56.521 [INFO][4944] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:29:56.558680 containerd[1475]: 2025-05-09 00:29:56.521 [INFO][4944] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 00:29:56.558680 containerd[1475]: 2025-05-09 00:29:56.545 [WARNING][4944] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="dc2d36004e5f5a123829046dd60eb3b3c7f628719b87a984e0b9e912fbb74086" HandleID="k8s-pod-network.dc2d36004e5f5a123829046dd60eb3b3c7f628719b87a984e0b9e912fbb74086" Workload="localhost-k8s-calico--apiserver--54f6f995b9--qzmd8-eth0" May 9 00:29:56.558680 containerd[1475]: 2025-05-09 00:29:56.545 [INFO][4944] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dc2d36004e5f5a123829046dd60eb3b3c7f628719b87a984e0b9e912fbb74086" HandleID="k8s-pod-network.dc2d36004e5f5a123829046dd60eb3b3c7f628719b87a984e0b9e912fbb74086" Workload="localhost-k8s-calico--apiserver--54f6f995b9--qzmd8-eth0" May 9 00:29:56.558680 containerd[1475]: 2025-05-09 00:29:56.550 [INFO][4944] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 00:29:56.558680 containerd[1475]: 2025-05-09 00:29:56.554 [INFO][4936] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dc2d36004e5f5a123829046dd60eb3b3c7f628719b87a984e0b9e912fbb74086" May 9 00:29:56.564272 systemd[1]: run-netns-cni\x2dad8f8fa7\x2d47b7\x2d2a7b\x2d083e\x2d43e6c1d7a3c7.mount: Deactivated successfully. 
May 9 00:29:56.565760 containerd[1475]: time="2025-05-09T00:29:56.564989178Z" level=info msg="TearDown network for sandbox \"dc2d36004e5f5a123829046dd60eb3b3c7f628719b87a984e0b9e912fbb74086\" successfully" May 9 00:29:56.565760 containerd[1475]: time="2025-05-09T00:29:56.565033731Z" level=info msg="StopPodSandbox for \"dc2d36004e5f5a123829046dd60eb3b3c7f628719b87a984e0b9e912fbb74086\" returns successfully" May 9 00:29:56.567124 containerd[1475]: time="2025-05-09T00:29:56.567013169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54f6f995b9-qzmd8,Uid:8e3a567d-acd2-4bb0-bcdc-d60762255110,Namespace:calico-apiserver,Attempt:1,}" May 9 00:29:56.786261 systemd-networkd[1412]: calid8f438611ef: Link UP May 9 00:29:56.786544 systemd-networkd[1412]: calid8f438611ef: Gained carrier May 9 00:29:56.807442 containerd[1475]: 2025-05-09 00:29:56.658 [INFO][4953] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--54f6f995b9--qzmd8-eth0 calico-apiserver-54f6f995b9- calico-apiserver 8e3a567d-acd2-4bb0-bcdc-d60762255110 911 0 2025-05-09 00:29:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:54f6f995b9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-54f6f995b9-qzmd8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid8f438611ef [] []}} ContainerID="d9990100586ffde5d90751338577ceeb52ed9d30203329ecc04d6a3f308997f7" Namespace="calico-apiserver" Pod="calico-apiserver-54f6f995b9-qzmd8" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f6f995b9--qzmd8-" May 9 00:29:56.807442 containerd[1475]: 2025-05-09 00:29:56.659 [INFO][4953] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d9990100586ffde5d90751338577ceeb52ed9d30203329ecc04d6a3f308997f7" Namespace="calico-apiserver" Pod="calico-apiserver-54f6f995b9-qzmd8" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f6f995b9--qzmd8-eth0" May 9 00:29:56.807442 containerd[1475]: 2025-05-09 00:29:56.707 [INFO][4970] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d9990100586ffde5d90751338577ceeb52ed9d30203329ecc04d6a3f308997f7" HandleID="k8s-pod-network.d9990100586ffde5d90751338577ceeb52ed9d30203329ecc04d6a3f308997f7" Workload="localhost-k8s-calico--apiserver--54f6f995b9--qzmd8-eth0" May 9 00:29:56.807442 containerd[1475]: 2025-05-09 00:29:56.719 [INFO][4970] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d9990100586ffde5d90751338577ceeb52ed9d30203329ecc04d6a3f308997f7" HandleID="k8s-pod-network.d9990100586ffde5d90751338577ceeb52ed9d30203329ecc04d6a3f308997f7" Workload="localhost-k8s-calico--apiserver--54f6f995b9--qzmd8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000364450), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-54f6f995b9-qzmd8", "timestamp":"2025-05-09 00:29:56.707875816 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 9 00:29:56.807442 containerd[1475]: 2025-05-09 00:29:56.720 [INFO][4970] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 9 00:29:56.807442 containerd[1475]: 2025-05-09 00:29:56.720 [INFO][4970] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 00:29:56.807442 containerd[1475]: 2025-05-09 00:29:56.720 [INFO][4970] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 9 00:29:56.807442 containerd[1475]: 2025-05-09 00:29:56.726 [INFO][4970] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d9990100586ffde5d90751338577ceeb52ed9d30203329ecc04d6a3f308997f7" host="localhost" May 9 00:29:56.807442 containerd[1475]: 2025-05-09 00:29:56.736 [INFO][4970] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 9 00:29:56.807442 containerd[1475]: 2025-05-09 00:29:56.746 [INFO][4970] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 9 00:29:56.807442 containerd[1475]: 2025-05-09 00:29:56.750 [INFO][4970] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 9 00:29:56.807442 containerd[1475]: 2025-05-09 00:29:56.755 [INFO][4970] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 9 00:29:56.807442 containerd[1475]: 2025-05-09 00:29:56.755 [INFO][4970] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d9990100586ffde5d90751338577ceeb52ed9d30203329ecc04d6a3f308997f7" host="localhost" May 9 00:29:56.807442 containerd[1475]: 2025-05-09 00:29:56.758 [INFO][4970] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d9990100586ffde5d90751338577ceeb52ed9d30203329ecc04d6a3f308997f7 May 9 00:29:56.807442 containerd[1475]: 2025-05-09 00:29:56.765 [INFO][4970] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d9990100586ffde5d90751338577ceeb52ed9d30203329ecc04d6a3f308997f7" host="localhost" May 9 00:29:56.807442 containerd[1475]: 2025-05-09 00:29:56.773 [INFO][4970] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.d9990100586ffde5d90751338577ceeb52ed9d30203329ecc04d6a3f308997f7" host="localhost" May 9 00:29:56.807442 containerd[1475]: 2025-05-09 00:29:56.773 [INFO][4970] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.d9990100586ffde5d90751338577ceeb52ed9d30203329ecc04d6a3f308997f7" host="localhost" May 9 00:29:56.807442 containerd[1475]: 2025-05-09 00:29:56.774 [INFO][4970] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 9 00:29:56.807442 containerd[1475]: 2025-05-09 00:29:56.774 [INFO][4970] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="d9990100586ffde5d90751338577ceeb52ed9d30203329ecc04d6a3f308997f7" HandleID="k8s-pod-network.d9990100586ffde5d90751338577ceeb52ed9d30203329ecc04d6a3f308997f7" Workload="localhost-k8s-calico--apiserver--54f6f995b9--qzmd8-eth0" May 9 00:29:56.808223 containerd[1475]: 2025-05-09 00:29:56.781 [INFO][4953] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d9990100586ffde5d90751338577ceeb52ed9d30203329ecc04d6a3f308997f7" Namespace="calico-apiserver" Pod="calico-apiserver-54f6f995b9-qzmd8" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f6f995b9--qzmd8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54f6f995b9--qzmd8-eth0", GenerateName:"calico-apiserver-54f6f995b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"8e3a567d-acd2-4bb0-bcdc-d60762255110", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 29, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54f6f995b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-54f6f995b9-qzmd8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid8f438611ef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:29:56.808223 containerd[1475]: 2025-05-09 00:29:56.783 [INFO][4953] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="d9990100586ffde5d90751338577ceeb52ed9d30203329ecc04d6a3f308997f7" Namespace="calico-apiserver" Pod="calico-apiserver-54f6f995b9-qzmd8" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f6f995b9--qzmd8-eth0" May 9 00:29:56.808223 containerd[1475]: 2025-05-09 00:29:56.784 [INFO][4953] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid8f438611ef ContainerID="d9990100586ffde5d90751338577ceeb52ed9d30203329ecc04d6a3f308997f7" Namespace="calico-apiserver" Pod="calico-apiserver-54f6f995b9-qzmd8" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f6f995b9--qzmd8-eth0" May 9 00:29:56.808223 containerd[1475]: 2025-05-09 00:29:56.786 [INFO][4953] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d9990100586ffde5d90751338577ceeb52ed9d30203329ecc04d6a3f308997f7" Namespace="calico-apiserver" Pod="calico-apiserver-54f6f995b9-qzmd8" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f6f995b9--qzmd8-eth0" May 9 00:29:56.808223 containerd[1475]: 2025-05-09 00:29:56.786 [INFO][4953] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d9990100586ffde5d90751338577ceeb52ed9d30203329ecc04d6a3f308997f7" 
Namespace="calico-apiserver" Pod="calico-apiserver-54f6f995b9-qzmd8" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f6f995b9--qzmd8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54f6f995b9--qzmd8-eth0", GenerateName:"calico-apiserver-54f6f995b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"8e3a567d-acd2-4bb0-bcdc-d60762255110", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 29, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54f6f995b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d9990100586ffde5d90751338577ceeb52ed9d30203329ecc04d6a3f308997f7", Pod:"calico-apiserver-54f6f995b9-qzmd8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid8f438611ef", MAC:"86:37:84:41:fb:79", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:29:56.808223 containerd[1475]: 2025-05-09 00:29:56.803 [INFO][4953] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d9990100586ffde5d90751338577ceeb52ed9d30203329ecc04d6a3f308997f7" Namespace="calico-apiserver" Pod="calico-apiserver-54f6f995b9-qzmd8" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f6f995b9--qzmd8-eth0" May 9 00:29:56.850308 containerd[1475]: time="2025-05-09T00:29:56.849944930Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:29:56.850308 containerd[1475]: time="2025-05-09T00:29:56.850061238Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:29:56.850308 containerd[1475]: time="2025-05-09T00:29:56.850083981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:29:56.850308 containerd[1475]: time="2025-05-09T00:29:56.850209267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:29:56.894236 systemd[1]: Started cri-containerd-d9990100586ffde5d90751338577ceeb52ed9d30203329ecc04d6a3f308997f7.scope - libcontainer container d9990100586ffde5d90751338577ceeb52ed9d30203329ecc04d6a3f308997f7. 
May 9 00:29:56.929866 systemd-resolved[1340]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 9 00:29:56.973168 containerd[1475]: time="2025-05-09T00:29:56.973077318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54f6f995b9-qzmd8,Uid:8e3a567d-acd2-4bb0-bcdc-d60762255110,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"d9990100586ffde5d90751338577ceeb52ed9d30203329ecc04d6a3f308997f7\"" May 9 00:29:57.271125 systemd-networkd[1412]: calibc0d8fb0843: Gained IPv6LL May 9 00:29:57.512940 kubelet[2506]: E0509 00:29:57.511675 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:29:57.515269 kubelet[2506]: E0509 00:29:57.514932 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:29:57.515269 kubelet[2506]: I0509 00:29:57.515056 2506 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 9 00:29:57.972157 systemd-networkd[1412]: calid8f438611ef: Gained IPv6LL May 9 00:29:58.128426 systemd[1]: Started sshd@10-10.0.0.48:22-10.0.0.1:53984.service - OpenSSH per-connection server daemon (10.0.0.1:53984). May 9 00:29:58.201220 sshd[5034]: Accepted publickey for core from 10.0.0.1 port 53984 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU May 9 00:29:58.204179 sshd[5034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:29:58.212183 systemd-logind[1463]: New session 11 of user core. May 9 00:29:58.223209 systemd[1]: Started session-11.scope - Session 11 of User core. 
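The recurring kubelet dns.go:153 warnings in this log mean the node's resolv.conf lists more nameservers than resolvers honor: glibc only consults the first three (MAXNS), so kubelet clips the list to "1.1.1.1 1.0.0.1 8.8.8.8" and warns. A sketch of the same clipping; the fourth server here is a made-up example of the kind of entry being dropped:

```python
MAX_NS = 3  # glibc's MAXNS: resolvers only honor the first three entries

def effective_nameservers(resolv_conf: str) -> list[str]:
    servers = []
    for line in resolv_conf.splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[0] == "nameserver":
            servers.append(parts[1])
    if len(servers) > MAX_NS:
        print(f"nameserver limits exceeded; applying first {MAX_NS} of {len(servers)}")
    return servers[:MAX_NS]

# The first three entries match the "applied nameserver line" in the log;
# the fourth is hypothetical, standing in for whatever kubelet omitted.
conf = "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
print(effective_nameservers(conf))  # ['1.1.1.1', '1.0.0.1', '8.8.8.8']
```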
May 9 00:29:58.304611 containerd[1475]: time="2025-05-09T00:29:58.304553666Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:29:58.307013 containerd[1475]: time="2025-05-09T00:29:58.306933224Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=43021437" May 9 00:29:58.307059 containerd[1475]: time="2025-05-09T00:29:58.307032592Z" level=info msg="ImageCreate event name:\"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:29:58.309304 containerd[1475]: time="2025-05-09T00:29:58.309264883Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:29:58.309766 containerd[1475]: time="2025-05-09T00:29:58.309742520Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 3.026817663s" May 9 00:29:58.309818 containerd[1475]: time="2025-05-09T00:29:58.309772586Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 9 00:29:58.311988 containerd[1475]: time="2025-05-09T00:29:58.311817036Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 9 00:29:58.313824 containerd[1475]: time="2025-05-09T00:29:58.313783548Z" level=info msg="CreateContainer within sandbox \"98bac03dcecb2bd5d2f44be12ccc6bde7773c7c333b4924cf2f1d98efda7257f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 9 00:29:58.334793 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1290890084.mount: Deactivated successfully. May 9 00:29:58.340499 containerd[1475]: time="2025-05-09T00:29:58.340369071Z" level=info msg="CreateContainer within sandbox \"98bac03dcecb2bd5d2f44be12ccc6bde7773c7c333b4924cf2f1d98efda7257f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6300c751768d97c58f822be2f461ed963bb9537ce876b6ae250efefbd83f1fd9\"" May 9 00:29:58.341123 containerd[1475]: time="2025-05-09T00:29:58.340866816Z" level=info msg="StartContainer for \"6300c751768d97c58f822be2f461ed963bb9537ce876b6ae250efefbd83f1fd9\"" May 9 00:29:58.380787 sshd[5034]: pam_unix(sshd:session): session closed for user core May 9 00:29:58.383121 systemd[1]: Started cri-containerd-6300c751768d97c58f822be2f461ed963bb9537ce876b6ae250efefbd83f1fd9.scope - libcontainer container 6300c751768d97c58f822be2f461ed963bb9537ce876b6ae250efefbd83f1fd9. May 9 00:29:58.387760 systemd[1]: sshd@10-10.0.0.48:22-10.0.0.1:53984.service: Deactivated successfully. May 9 00:29:58.389591 systemd[1]: session-11.scope: Deactivated successfully. May 9 00:29:58.391068 systemd-logind[1463]: Session 11 logged out. Waiting for processes to exit. May 9 00:29:58.399163 systemd[1]: Started sshd@11-10.0.0.48:22-10.0.0.1:53990.service - OpenSSH per-connection server daemon (10.0.0.1:53990). May 9 00:29:58.400420 systemd-logind[1463]: Removed session 11. 
May 9 00:29:58.427698 sshd[5076]: Accepted publickey for core from 10.0.0.1 port 53990 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU May 9 00:29:58.429800 sshd[5076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:29:58.430606 containerd[1475]: time="2025-05-09T00:29:58.430561087Z" level=info msg="StartContainer for \"6300c751768d97c58f822be2f461ed963bb9537ce876b6ae250efefbd83f1fd9\" returns successfully" May 9 00:29:58.438051 systemd-logind[1463]: New session 12 of user core. May 9 00:29:58.446042 systemd[1]: Started session-12.scope - Session 12 of User core. May 9 00:29:58.644004 sshd[5076]: pam_unix(sshd:session): session closed for user core May 9 00:29:58.653234 systemd[1]: sshd@11-10.0.0.48:22-10.0.0.1:53990.service: Deactivated successfully. May 9 00:29:58.656802 systemd[1]: session-12.scope: Deactivated successfully. May 9 00:29:58.665617 systemd-logind[1463]: Session 12 logged out. Waiting for processes to exit. May 9 00:29:58.670430 systemd[1]: Started sshd@12-10.0.0.48:22-10.0.0.1:53996.service - OpenSSH per-connection server daemon (10.0.0.1:53996). May 9 00:29:58.673847 systemd-logind[1463]: Removed session 12. May 9 00:29:58.711350 sshd[5106]: Accepted publickey for core from 10.0.0.1 port 53996 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU May 9 00:29:58.713191 sshd[5106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:29:58.718220 systemd-logind[1463]: New session 13 of user core. May 9 00:29:58.732131 systemd[1]: Started session-13.scope - Session 13 of User core. May 9 00:29:58.858997 sshd[5106]: pam_unix(sshd:session): session closed for user core May 9 00:29:58.863158 systemd[1]: sshd@12-10.0.0.48:22-10.0.0.1:53996.service: Deactivated successfully. May 9 00:29:58.865362 systemd[1]: session-13.scope: Deactivated successfully. May 9 00:29:58.866117 systemd-logind[1463]: Session 13 logged out. Waiting for processes to exit. May 9 00:29:58.867937 systemd-logind[1463]: Removed session 13. 
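The sshd "Accepted publickey" lines identify the client key only by SHA256:YkFjw5..., which is the base64 encoding of the SHA-256 digest of the key's wire-format blob, with the '=' padding dropped. A sketch of that computation; the ed25519 blob here is synthesized with a made-up 32-byte public key purely to exercise the hashing, while real blobs come from the authorized_keys line:

```python
import base64, hashlib, struct

def ssh_string(b: bytes) -> bytes:
    """Wire-format SSH string: 4-byte big-endian length, then the bytes."""
    return struct.pack(">I", len(b)) + b

def fingerprint(blob: bytes) -> str:
    """SHA256 fingerprint as sshd logs it: base64 of the SHA-256 of the
    key blob, with the trailing '=' padding stripped."""
    return "SHA256:" + base64.b64encode(hashlib.sha256(blob).digest()).decode().rstrip("=")

# Synthetic ed25519 blob: algorithm name plus a made-up 32-byte key.
blob = ssh_string(b"ssh-ed25519") + ssh_string(bytes(range(32)))
print(fingerprint(blob))
```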
May 9 00:29:59.520235 kubelet[2506]: I0509 00:29:59.520190 2506 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 9 00:30:00.368589 containerd[1475]: time="2025-05-09T00:30:00.368517436Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:30:00.373119 containerd[1475]: time="2025-05-09T00:30:00.373008628Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898" May 9 00:30:00.375075 containerd[1475]: time="2025-05-09T00:30:00.375005568Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:30:00.379463 containerd[1475]: time="2025-05-09T00:30:00.379089045Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:30:00.393329 containerd[1475]: time="2025-05-09T00:30:00.392506697Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 2.080657321s" May 9 00:30:00.394972 containerd[1475]: time="2025-05-09T00:30:00.393968021Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" May 9 00:30:00.397444 containerd[1475]: time="2025-05-09T00:30:00.396045281Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 9 00:30:00.408792 containerd[1475]: time="2025-05-09T00:30:00.408374300Z" level=info msg="CreateContainer within sandbox \"eb9c6934b3042c3f0b0b60ca128de23f87546dc3fa3896c789913f20bdc3a4b7\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 9 00:30:00.484689 containerd[1475]: time="2025-05-09T00:30:00.484614056Z" level=info msg="CreateContainer within sandbox \"eb9c6934b3042c3f0b0b60ca128de23f87546dc3fa3896c789913f20bdc3a4b7\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"026a93ff988417a1961e859ac171e95ba04baa7a026460f160882a78eae0dbe1\"" May 9 00:30:00.485856 containerd[1475]: time="2025-05-09T00:30:00.485596561Z" level=info msg="StartContainer for \"026a93ff988417a1961e859ac171e95ba04baa7a026460f160882a78eae0dbe1\"" May 9 00:30:00.571286 systemd[1]: Started cri-containerd-026a93ff988417a1961e859ac171e95ba04baa7a026460f160882a78eae0dbe1.scope - libcontainer container 026a93ff988417a1961e859ac171e95ba04baa7a026460f160882a78eae0dbe1. 
May 9 00:30:00.651729 containerd[1475]: time="2025-05-09T00:30:00.651327954Z" level=info msg="StartContainer for \"026a93ff988417a1961e859ac171e95ba04baa7a026460f160882a78eae0dbe1\" returns successfully" May 9 00:30:00.809067 containerd[1475]: time="2025-05-09T00:30:00.807942660Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:30:00.810375 containerd[1475]: time="2025-05-09T00:30:00.810287322Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" May 9 00:30:00.812785 containerd[1475]: time="2025-05-09T00:30:00.812500468Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 416.401445ms" May 9 00:30:00.812785 containerd[1475]: time="2025-05-09T00:30:00.812547977Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 9 00:30:00.815490 containerd[1475]: time="2025-05-09T00:30:00.815224242Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 9 00:30:00.818535 containerd[1475]: time="2025-05-09T00:30:00.818469345Z" level=info msg="CreateContainer within sandbox \"d9990100586ffde5d90751338577ceeb52ed9d30203329ecc04d6a3f308997f7\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 9 00:30:00.846670 containerd[1475]: time="2025-05-09T00:30:00.846579674Z" level=info msg="CreateContainer within sandbox \"d9990100586ffde5d90751338577ceeb52ed9d30203329ecc04d6a3f308997f7\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"915db4552a1e106d301b5ab5aa1f512911cae69b426657b81e81f1c52b085050\"" May 9 00:30:00.847315 containerd[1475]: time="2025-05-09T00:30:00.847282252Z" level=info msg="StartContainer for \"915db4552a1e106d301b5ab5aa1f512911cae69b426657b81e81f1c52b085050\"" May 9 00:30:00.885286 systemd[1]: Started cri-containerd-915db4552a1e106d301b5ab5aa1f512911cae69b426657b81e81f1c52b085050.scope - libcontainer container 915db4552a1e106d301b5ab5aa1f512911cae69b426657b81e81f1c52b085050. 
May 9 00:30:00.939406 containerd[1475]: time="2025-05-09T00:30:00.939194515Z" level=info msg="StartContainer for \"915db4552a1e106d301b5ab5aa1f512911cae69b426657b81e81f1c52b085050\" returns successfully" May 9 00:30:01.414322 kubelet[2506]: I0509 00:30:01.414145 2506 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 9 00:30:01.470034 kubelet[2506]: I0509 00:30:01.469317 2506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-54f6f995b9-66cpx" podStartSLOduration=30.287142488 podStartE2EDuration="35.469289664s" podCreationTimestamp="2025-05-09 00:29:26 +0000 UTC" firstStartedPulling="2025-05-09 00:29:53.128727641 +0000 UTC m=+38.884390907" lastFinishedPulling="2025-05-09 00:29:58.310874816 +0000 UTC m=+44.066538083" observedRunningTime="2025-05-09 00:29:58.543436869 +0000 UTC m=+44.299100135" watchObservedRunningTime="2025-05-09 00:30:01.469289664 +0000 UTC m=+47.224952931" May 9 00:30:01.591067 kubelet[2506]: I0509 00:30:01.590972 2506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-54f6f995b9-qzmd8" podStartSLOduration=31.751966404 podStartE2EDuration="35.590947141s" podCreationTimestamp="2025-05-09 00:29:26 +0000 UTC" firstStartedPulling="2025-05-09 00:29:56.975970602 +0000 UTC m=+42.731633868" lastFinishedPulling="2025-05-09 00:30:00.814951339 +0000 UTC m=+46.570614605" observedRunningTime="2025-05-09 00:30:01.589868415 +0000 UTC m=+47.345531701" watchObservedRunningTime="2025-05-09 00:30:01.590947141 +0000 UTC m=+47.346610407" May 9 00:30:03.468785 containerd[1475]: time="2025-05-09T00:30:03.468703754Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:30:03.524740 containerd[1475]: time="2025-05-09T00:30:03.524642174Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773" May 9 00:30:03.557876 containerd[1475]: time="2025-05-09T00:30:03.557779361Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:30:03.618197 containerd[1475]: time="2025-05-09T00:30:03.618124444Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:30:03.619154 containerd[1475]: time="2025-05-09T00:30:03.619112538Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 2.801657067s" May 9 00:30:03.619250 containerd[1475]: time="2025-05-09T00:30:03.619154888Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\"" May 9 00:30:03.621454 containerd[1475]: time="2025-05-09T00:30:03.621413308Z" level=info msg="CreateContainer within sandbox \"eb9c6934b3042c3f0b0b60ca128de23f87546dc3fa3896c789913f20bdc3a4b7\" for container 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 9 00:30:03.874195 systemd[1]: Started sshd@13-10.0.0.48:22-10.0.0.1:54012.service - OpenSSH per-connection server daemon (10.0.0.1:54012). May 9 00:30:03.895176 containerd[1475]: time="2025-05-09T00:30:03.894987013Z" level=info msg="CreateContainer within sandbox \"eb9c6934b3042c3f0b0b60ca128de23f87546dc3fa3896c789913f20bdc3a4b7\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"31d9bc268e2b3f2a2cc9e6f67a9fd393d164298c6be91fdeec65b00533e71797\"" May 9 00:30:03.896227 containerd[1475]: time="2025-05-09T00:30:03.896156338Z" level=info msg="StartContainer for \"31d9bc268e2b3f2a2cc9e6f67a9fd393d164298c6be91fdeec65b00533e71797\"" May 9 00:30:03.967509 systemd[1]: Started cri-containerd-31d9bc268e2b3f2a2cc9e6f67a9fd393d164298c6be91fdeec65b00533e71797.scope - libcontainer container 31d9bc268e2b3f2a2cc9e6f67a9fd393d164298c6be91fdeec65b00533e71797. May 9 00:30:03.970592 sshd[5217]: Accepted publickey for core from 10.0.0.1 port 54012 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU May 9 00:30:03.972836 sshd[5217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:30:03.978769 systemd-logind[1463]: New session 14 of user core. May 9 00:30:03.988143 systemd[1]: Started session-14.scope - Session 14 of User core. May 9 00:30:04.222104 containerd[1475]: time="2025-05-09T00:30:04.222041815Z" level=info msg="StartContainer for \"31d9bc268e2b3f2a2cc9e6f67a9fd393d164298c6be91fdeec65b00533e71797\" returns successfully" May 9 00:30:04.305773 sshd[5217]: pam_unix(sshd:session): session closed for user core May 9 00:30:04.310513 systemd[1]: sshd@13-10.0.0.48:22-10.0.0.1:54012.service: Deactivated successfully. May 9 00:30:04.312733 systemd[1]: session-14.scope: Deactivated successfully. May 9 00:30:04.313402 systemd-logind[1463]: Session 14 logged out. Waiting for processes to exit. May 9 00:30:04.314627 systemd-logind[1463]: Removed session 14. May 9 00:30:04.407310 kubelet[2506]: I0509 00:30:04.407260 2506 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 9 00:30:04.407310 kubelet[2506]: I0509 00:30:04.407308 2506 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 9 00:30:04.582930 kubelet[2506]: I0509 00:30:04.582465 2506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-vj68l" podStartSLOduration=30.85464528 podStartE2EDuration="38.582443043s" podCreationTimestamp="2025-05-09 00:29:26 +0000 UTC" firstStartedPulling="2025-05-09 00:29:55.89211604 +0000 UTC m=+41.647779306" lastFinishedPulling="2025-05-09 00:30:03.619913803 +0000 UTC m=+49.375577069" observedRunningTime="2025-05-09 00:30:04.58231877 +0000 UTC m=+50.337982056" watchObservedRunningTime="2025-05-09 00:30:04.582443043 +0000 UTC m=+50.338106309" May 9 00:30:05.672137 kubelet[2506]: I0509 00:30:05.672068 2506 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 9 00:30:09.328092 systemd[1]: Started sshd@14-10.0.0.48:22-10.0.0.1:47942.service - OpenSSH per-connection server daemon (10.0.0.1:47942). 
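kubelet's pod_startup_latency_tracker lines above carry two numbers per pod: podStartSLOduration, which excludes image-pull time, and podStartE2EDuration. A sketch that extracts them with a regex, fed the csi-node-driver-vj68l observation from this log; the gap between the two is the pull time, here about 7.73s, matching the firstStartedPulling to lastFinishedPulling window reported alongside it:

```python
import re

OBS = re.compile(
    r'pod="(?P<pod>[^"]+)"\s+'
    r'podStartSLOduration=(?P<slo>[\d.]+)\s+'
    r'podStartE2EDuration="(?P<e2e>[\d.]+)s"'
)

line = ('Observed pod startup duration pod="calico-system/csi-node-driver-vj68l" '
        'podStartSLOduration=30.85464528 podStartE2EDuration="38.582443043s"')

m = OBS.search(line)
slo, e2e = float(m.group("slo")), float(m.group("e2e"))
print(f'{m.group("pod")}: SLO {slo:.2f}s, e2e {e2e:.2f}s, pulls {e2e - slo:.2f}s')
```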
May 9 00:30:09.382168 sshd[5326]: Accepted publickey for core from 10.0.0.1 port 47942 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU May 9 00:30:09.384442 sshd[5326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:30:09.389578 systemd-logind[1463]: New session 15 of user core. May 9 00:30:09.397221 systemd[1]: Started session-15.scope - Session 15 of User core. May 9 00:30:09.521794 sshd[5326]: pam_unix(sshd:session): session closed for user core May 9 00:30:09.526057 systemd[1]: sshd@14-10.0.0.48:22-10.0.0.1:47942.service: Deactivated successfully. May 9 00:30:09.528452 systemd[1]: session-15.scope: Deactivated successfully. May 9 00:30:09.529223 systemd-logind[1463]: Session 15 logged out. Waiting for processes to exit. May 9 00:30:09.530352 systemd-logind[1463]: Removed session 15. May 9 00:30:14.324288 containerd[1475]: time="2025-05-09T00:30:14.324220594Z" level=info msg="StopPodSandbox for \"a6dfa11b3cb348dbd5ddfda295256951d74f7e0d84f01ef320fc31acbe7826c4\"" May 9 00:30:14.409852 containerd[1475]: 2025-05-09 00:30:14.367 [WARNING][5360] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a6dfa11b3cb348dbd5ddfda295256951d74f7e0d84f01ef320fc31acbe7826c4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--78559bdf4b--jb9rh-eth0", GenerateName:"calico-kube-controllers-78559bdf4b-", Namespace:"calico-system", SelfLink:"", UID:"c912a440-377c-4394-b47f-ba521b174b03", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 29, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78559bdf4b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ad5545a85ae493cb54e72bf70c81e7048d9d0af74086ed81c8b04cf473a4bc7f", Pod:"calico-kube-controllers-78559bdf4b-jb9rh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6f74f2174f0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:30:14.409852 containerd[1475]: 2025-05-09 00:30:14.368 [INFO][5360] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a6dfa11b3cb348dbd5ddfda295256951d74f7e0d84f01ef320fc31acbe7826c4" May 9 00:30:14.409852 containerd[1475]: 2025-05-09 00:30:14.368 [INFO][5360] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a6dfa11b3cb348dbd5ddfda295256951d74f7e0d84f01ef320fc31acbe7826c4" iface="eth0" netns="" May 9 00:30:14.409852 containerd[1475]: 2025-05-09 00:30:14.368 [INFO][5360] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a6dfa11b3cb348dbd5ddfda295256951d74f7e0d84f01ef320fc31acbe7826c4" May 9 00:30:14.409852 containerd[1475]: 2025-05-09 00:30:14.368 [INFO][5360] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a6dfa11b3cb348dbd5ddfda295256951d74f7e0d84f01ef320fc31acbe7826c4" May 9 00:30:14.409852 containerd[1475]: 2025-05-09 00:30:14.396 [INFO][5371] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a6dfa11b3cb348dbd5ddfda295256951d74f7e0d84f01ef320fc31acbe7826c4" HandleID="k8s-pod-network.a6dfa11b3cb348dbd5ddfda295256951d74f7e0d84f01ef320fc31acbe7826c4" Workload="localhost-k8s-calico--kube--controllers--78559bdf4b--jb9rh-eth0" May 9 00:30:14.409852 containerd[1475]: 2025-05-09 00:30:14.396 [INFO][5371] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:30:14.409852 containerd[1475]: 2025-05-09 00:30:14.396 [INFO][5371] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 00:30:14.409852 containerd[1475]: 2025-05-09 00:30:14.403 [WARNING][5371] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a6dfa11b3cb348dbd5ddfda295256951d74f7e0d84f01ef320fc31acbe7826c4" HandleID="k8s-pod-network.a6dfa11b3cb348dbd5ddfda295256951d74f7e0d84f01ef320fc31acbe7826c4" Workload="localhost-k8s-calico--kube--controllers--78559bdf4b--jb9rh-eth0" May 9 00:30:14.409852 containerd[1475]: 2025-05-09 00:30:14.403 [INFO][5371] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a6dfa11b3cb348dbd5ddfda295256951d74f7e0d84f01ef320fc31acbe7826c4" HandleID="k8s-pod-network.a6dfa11b3cb348dbd5ddfda295256951d74f7e0d84f01ef320fc31acbe7826c4" Workload="localhost-k8s-calico--kube--controllers--78559bdf4b--jb9rh-eth0" May 9 00:30:14.409852 containerd[1475]: 2025-05-09 00:30:14.404 [INFO][5371] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 00:30:14.409852 containerd[1475]: 2025-05-09 00:30:14.406 [INFO][5360] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a6dfa11b3cb348dbd5ddfda295256951d74f7e0d84f01ef320fc31acbe7826c4" May 9 00:30:14.410738 containerd[1475]: time="2025-05-09T00:30:14.409908763Z" level=info msg="TearDown network for sandbox \"a6dfa11b3cb348dbd5ddfda295256951d74f7e0d84f01ef320fc31acbe7826c4\" successfully" May 9 00:30:14.410738 containerd[1475]: time="2025-05-09T00:30:14.409941205Z" level=info msg="StopPodSandbox for \"a6dfa11b3cb348dbd5ddfda295256951d74f7e0d84f01ef320fc31acbe7826c4\" returns successfully" May 9 00:30:14.410738 containerd[1475]: time="2025-05-09T00:30:14.410639260Z" level=info msg="RemovePodSandbox for \"a6dfa11b3cb348dbd5ddfda295256951d74f7e0d84f01ef320fc31acbe7826c4\"" May 9 00:30:14.412847 containerd[1475]: time="2025-05-09T00:30:14.412817238Z" level=info msg="Forcibly stopping sandbox \"a6dfa11b3cb348dbd5ddfda295256951d74f7e0d84f01ef320fc31acbe7826c4\"" May 9 00:30:14.489900 containerd[1475]: 2025-05-09 00:30:14.449 [WARNING][5393] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a6dfa11b3cb348dbd5ddfda295256951d74f7e0d84f01ef320fc31acbe7826c4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--78559bdf4b--jb9rh-eth0", GenerateName:"calico-kube-controllers-78559bdf4b-", Namespace:"calico-system", SelfLink:"", UID:"c912a440-377c-4394-b47f-ba521b174b03", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 29, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78559bdf4b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ad5545a85ae493cb54e72bf70c81e7048d9d0af74086ed81c8b04cf473a4bc7f", Pod:"calico-kube-controllers-78559bdf4b-jb9rh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6f74f2174f0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:30:14.489900 containerd[1475]: 2025-05-09 00:30:14.449 [INFO][5393] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a6dfa11b3cb348dbd5ddfda295256951d74f7e0d84f01ef320fc31acbe7826c4" May 9 00:30:14.489900 containerd[1475]: 2025-05-09 00:30:14.449 [INFO][5393] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a6dfa11b3cb348dbd5ddfda295256951d74f7e0d84f01ef320fc31acbe7826c4" iface="eth0" netns="" May 9 00:30:14.489900 containerd[1475]: 2025-05-09 00:30:14.449 [INFO][5393] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a6dfa11b3cb348dbd5ddfda295256951d74f7e0d84f01ef320fc31acbe7826c4" May 9 00:30:14.489900 containerd[1475]: 2025-05-09 00:30:14.449 [INFO][5393] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a6dfa11b3cb348dbd5ddfda295256951d74f7e0d84f01ef320fc31acbe7826c4" May 9 00:30:14.489900 containerd[1475]: 2025-05-09 00:30:14.474 [INFO][5401] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a6dfa11b3cb348dbd5ddfda295256951d74f7e0d84f01ef320fc31acbe7826c4" HandleID="k8s-pod-network.a6dfa11b3cb348dbd5ddfda295256951d74f7e0d84f01ef320fc31acbe7826c4" Workload="localhost-k8s-calico--kube--controllers--78559bdf4b--jb9rh-eth0" May 9 00:30:14.489900 containerd[1475]: 2025-05-09 00:30:14.474 [INFO][5401] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:30:14.489900 containerd[1475]: 2025-05-09 00:30:14.474 [INFO][5401] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 00:30:14.489900 containerd[1475]: 2025-05-09 00:30:14.481 [WARNING][5401] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a6dfa11b3cb348dbd5ddfda295256951d74f7e0d84f01ef320fc31acbe7826c4" HandleID="k8s-pod-network.a6dfa11b3cb348dbd5ddfda295256951d74f7e0d84f01ef320fc31acbe7826c4" Workload="localhost-k8s-calico--kube--controllers--78559bdf4b--jb9rh-eth0" May 9 00:30:14.489900 containerd[1475]: 2025-05-09 00:30:14.481 [INFO][5401] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a6dfa11b3cb348dbd5ddfda295256951d74f7e0d84f01ef320fc31acbe7826c4" HandleID="k8s-pod-network.a6dfa11b3cb348dbd5ddfda295256951d74f7e0d84f01ef320fc31acbe7826c4" Workload="localhost-k8s-calico--kube--controllers--78559bdf4b--jb9rh-eth0" May 9 00:30:14.489900 containerd[1475]: 2025-05-09 00:30:14.484 [INFO][5401] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 00:30:14.489900 containerd[1475]: 2025-05-09 00:30:14.486 [INFO][5393] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a6dfa11b3cb348dbd5ddfda295256951d74f7e0d84f01ef320fc31acbe7826c4" May 9 00:30:14.490515 containerd[1475]: time="2025-05-09T00:30:14.489931976Z" level=info msg="TearDown network for sandbox \"a6dfa11b3cb348dbd5ddfda295256951d74f7e0d84f01ef320fc31acbe7826c4\" successfully" May 9 00:30:14.503691 containerd[1475]: time="2025-05-09T00:30:14.503630313Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a6dfa11b3cb348dbd5ddfda295256951d74f7e0d84f01ef320fc31acbe7826c4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 9 00:30:14.503691 containerd[1475]: time="2025-05-09T00:30:14.503693475Z" level=info msg="RemovePodSandbox \"a6dfa11b3cb348dbd5ddfda295256951d74f7e0d84f01ef320fc31acbe7826c4\" returns successfully" May 9 00:30:14.504378 containerd[1475]: time="2025-05-09T00:30:14.504346642Z" level=info msg="StopPodSandbox for \"f36738657f9783f3743eacfb4f951703a700cdb73e45167d1a25aa1453a035fe\"" May 9 00:30:14.545691 systemd[1]: Started sshd@15-10.0.0.48:22-10.0.0.1:47954.service - OpenSSH per-connection server daemon (10.0.0.1:47954). May 9 00:30:14.572925 containerd[1475]: 2025-05-09 00:30:14.540 [WARNING][5424] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f36738657f9783f3743eacfb4f951703a700cdb73e45167d1a25aa1453a035fe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--vj68l-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7888b668-be66-4298-b746-119a722815e9", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 29, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"eb9c6934b3042c3f0b0b60ca128de23f87546dc3fa3896c789913f20bdc3a4b7", Pod:"csi-node-driver-vj68l", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibc0d8fb0843", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:30:14.572925 containerd[1475]: 2025-05-09 00:30:14.540 [INFO][5424] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f36738657f9783f3743eacfb4f951703a700cdb73e45167d1a25aa1453a035fe" May 9 00:30:14.572925 containerd[1475]: 2025-05-09 00:30:14.540 [INFO][5424] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f36738657f9783f3743eacfb4f951703a700cdb73e45167d1a25aa1453a035fe" iface="eth0" netns="" May 9 00:30:14.572925 containerd[1475]: 2025-05-09 00:30:14.540 [INFO][5424] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f36738657f9783f3743eacfb4f951703a700cdb73e45167d1a25aa1453a035fe" May 9 00:30:14.572925 containerd[1475]: 2025-05-09 00:30:14.540 [INFO][5424] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f36738657f9783f3743eacfb4f951703a700cdb73e45167d1a25aa1453a035fe" May 9 00:30:14.572925 containerd[1475]: 2025-05-09 00:30:14.560 [INFO][5434] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f36738657f9783f3743eacfb4f951703a700cdb73e45167d1a25aa1453a035fe" HandleID="k8s-pod-network.f36738657f9783f3743eacfb4f951703a700cdb73e45167d1a25aa1453a035fe" Workload="localhost-k8s-csi--node--driver--vj68l-eth0" May 9 00:30:14.572925 containerd[1475]: 2025-05-09 00:30:14.561 [INFO][5434] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:30:14.572925 containerd[1475]: 2025-05-09 00:30:14.561 [INFO][5434] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 00:30:14.572925 containerd[1475]: 2025-05-09 00:30:14.567 [WARNING][5434] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f36738657f9783f3743eacfb4f951703a700cdb73e45167d1a25aa1453a035fe" HandleID="k8s-pod-network.f36738657f9783f3743eacfb4f951703a700cdb73e45167d1a25aa1453a035fe" Workload="localhost-k8s-csi--node--driver--vj68l-eth0" May 9 00:30:14.572925 containerd[1475]: 2025-05-09 00:30:14.567 [INFO][5434] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f36738657f9783f3743eacfb4f951703a700cdb73e45167d1a25aa1453a035fe" HandleID="k8s-pod-network.f36738657f9783f3743eacfb4f951703a700cdb73e45167d1a25aa1453a035fe" Workload="localhost-k8s-csi--node--driver--vj68l-eth0" May 9 00:30:14.572925 containerd[1475]: 2025-05-09 00:30:14.568 [INFO][5434] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 00:30:14.572925 containerd[1475]: 2025-05-09 00:30:14.570 [INFO][5424] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f36738657f9783f3743eacfb4f951703a700cdb73e45167d1a25aa1453a035fe" May 9 00:30:14.573393 containerd[1475]: time="2025-05-09T00:30:14.572979229Z" level=info msg="TearDown network for sandbox \"f36738657f9783f3743eacfb4f951703a700cdb73e45167d1a25aa1453a035fe\" successfully" May 9 00:30:14.573393 containerd[1475]: time="2025-05-09T00:30:14.573012403Z" level=info msg="StopPodSandbox for \"f36738657f9783f3743eacfb4f951703a700cdb73e45167d1a25aa1453a035fe\" returns successfully" May 9 00:30:14.573661 containerd[1475]: time="2025-05-09T00:30:14.573604111Z" level=info msg="RemovePodSandbox for \"f36738657f9783f3743eacfb4f951703a700cdb73e45167d1a25aa1453a035fe\"" May 9 00:30:14.573661 containerd[1475]: time="2025-05-09T00:30:14.573656864Z" level=info msg="Forcibly stopping sandbox \"f36738657f9783f3743eacfb4f951703a700cdb73e45167d1a25aa1453a035fe\"" May 9 00:30:14.593248 sshd[5432]: Accepted publickey for core from 10.0.0.1 port 47954 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU May 9 00:30:14.595992 sshd[5432]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:30:14.601227 systemd-logind[1463]: New session 16 of user core. May 9 00:30:14.608077 systemd[1]: Started session-16.scope - Session 16 of User core. May 9 00:30:14.652616 containerd[1475]: 2025-05-09 00:30:14.613 [WARNING][5458] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f36738657f9783f3743eacfb4f951703a700cdb73e45167d1a25aa1453a035fe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--vj68l-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7888b668-be66-4298-b746-119a722815e9", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 29, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"eb9c6934b3042c3f0b0b60ca128de23f87546dc3fa3896c789913f20bdc3a4b7", Pod:"csi-node-driver-vj68l", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibc0d8fb0843", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:30:14.652616 containerd[1475]: 2025-05-09 00:30:14.613 [INFO][5458] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f36738657f9783f3743eacfb4f951703a700cdb73e45167d1a25aa1453a035fe" May 9 00:30:14.652616 containerd[1475]: 2025-05-09 00:30:14.613 [INFO][5458] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f36738657f9783f3743eacfb4f951703a700cdb73e45167d1a25aa1453a035fe" iface="eth0" netns="" May 9 00:30:14.652616 containerd[1475]: 2025-05-09 00:30:14.613 [INFO][5458] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f36738657f9783f3743eacfb4f951703a700cdb73e45167d1a25aa1453a035fe" May 9 00:30:14.652616 containerd[1475]: 2025-05-09 00:30:14.613 [INFO][5458] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f36738657f9783f3743eacfb4f951703a700cdb73e45167d1a25aa1453a035fe" May 9 00:30:14.652616 containerd[1475]: 2025-05-09 00:30:14.635 [INFO][5467] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f36738657f9783f3743eacfb4f951703a700cdb73e45167d1a25aa1453a035fe" HandleID="k8s-pod-network.f36738657f9783f3743eacfb4f951703a700cdb73e45167d1a25aa1453a035fe" Workload="localhost-k8s-csi--node--driver--vj68l-eth0" May 9 00:30:14.652616 containerd[1475]: 2025-05-09 00:30:14.635 [INFO][5467] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:30:14.652616 containerd[1475]: 2025-05-09 00:30:14.635 [INFO][5467] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 00:30:14.652616 containerd[1475]: 2025-05-09 00:30:14.642 [WARNING][5467] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f36738657f9783f3743eacfb4f951703a700cdb73e45167d1a25aa1453a035fe" HandleID="k8s-pod-network.f36738657f9783f3743eacfb4f951703a700cdb73e45167d1a25aa1453a035fe" Workload="localhost-k8s-csi--node--driver--vj68l-eth0" May 9 00:30:14.652616 containerd[1475]: 2025-05-09 00:30:14.642 [INFO][5467] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f36738657f9783f3743eacfb4f951703a700cdb73e45167d1a25aa1453a035fe" HandleID="k8s-pod-network.f36738657f9783f3743eacfb4f951703a700cdb73e45167d1a25aa1453a035fe" Workload="localhost-k8s-csi--node--driver--vj68l-eth0" May 9 00:30:14.652616 containerd[1475]: 2025-05-09 00:30:14.645 [INFO][5467] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 00:30:14.652616 containerd[1475]: 2025-05-09 00:30:14.648 [INFO][5458] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f36738657f9783f3743eacfb4f951703a700cdb73e45167d1a25aa1453a035fe" May 9 00:30:14.660549 containerd[1475]: time="2025-05-09T00:30:14.652674716Z" level=info msg="TearDown network for sandbox \"f36738657f9783f3743eacfb4f951703a700cdb73e45167d1a25aa1453a035fe\" successfully" May 9 00:30:14.667293 containerd[1475]: time="2025-05-09T00:30:14.667154630Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f36738657f9783f3743eacfb4f951703a700cdb73e45167d1a25aa1453a035fe\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 9 00:30:14.667293 containerd[1475]: time="2025-05-09T00:30:14.667267278Z" level=info msg="RemovePodSandbox \"f36738657f9783f3743eacfb4f951703a700cdb73e45167d1a25aa1453a035fe\" returns successfully" May 9 00:30:14.668375 containerd[1475]: time="2025-05-09T00:30:14.668012805Z" level=info msg="StopPodSandbox for \"d43be5324c6a254b98b87597587cd50ccb0abb0eb38ebf7e419b8973d5c9f9b3\"" May 9 00:30:14.858740 containerd[1475]: 2025-05-09 00:30:14.775 [WARNING][5496] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d43be5324c6a254b98b87597587cd50ccb0abb0eb38ebf7e419b8973d5c9f9b3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--7z6wr-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"035e105a-b89a-4204-9406-d96aaeb0e048", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 29, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8aa06c5434252b666e03c04f920a56da2ed1147c3e09c1ec706c89cc220ff983", Pod:"coredns-6f6b679f8f-7z6wr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliee5bf67afd8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:30:14.858740 containerd[1475]: 2025-05-09 00:30:14.775 [INFO][5496] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d43be5324c6a254b98b87597587cd50ccb0abb0eb38ebf7e419b8973d5c9f9b3" May 9 00:30:14.858740 containerd[1475]: 2025-05-09 00:30:14.776 [INFO][5496] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d43be5324c6a254b98b87597587cd50ccb0abb0eb38ebf7e419b8973d5c9f9b3" iface="eth0" netns="" May 9 00:30:14.858740 containerd[1475]: 2025-05-09 00:30:14.776 [INFO][5496] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d43be5324c6a254b98b87597587cd50ccb0abb0eb38ebf7e419b8973d5c9f9b3" May 9 00:30:14.858740 containerd[1475]: 2025-05-09 00:30:14.776 [INFO][5496] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d43be5324c6a254b98b87597587cd50ccb0abb0eb38ebf7e419b8973d5c9f9b3" May 9 00:30:14.858740 containerd[1475]: 2025-05-09 00:30:14.820 [INFO][5505] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d43be5324c6a254b98b87597587cd50ccb0abb0eb38ebf7e419b8973d5c9f9b3" HandleID="k8s-pod-network.d43be5324c6a254b98b87597587cd50ccb0abb0eb38ebf7e419b8973d5c9f9b3" Workload="localhost-k8s-coredns--6f6b679f8f--7z6wr-eth0" May 9 00:30:14.858740 containerd[1475]: 2025-05-09 00:30:14.820 [INFO][5505] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:30:14.858740 containerd[1475]: 2025-05-09 00:30:14.820 [INFO][5505] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 00:30:14.858740 containerd[1475]: 2025-05-09 00:30:14.845 [WARNING][5505] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d43be5324c6a254b98b87597587cd50ccb0abb0eb38ebf7e419b8973d5c9f9b3" HandleID="k8s-pod-network.d43be5324c6a254b98b87597587cd50ccb0abb0eb38ebf7e419b8973d5c9f9b3" Workload="localhost-k8s-coredns--6f6b679f8f--7z6wr-eth0" May 9 00:30:14.858740 containerd[1475]: 2025-05-09 00:30:14.846 [INFO][5505] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d43be5324c6a254b98b87597587cd50ccb0abb0eb38ebf7e419b8973d5c9f9b3" HandleID="k8s-pod-network.d43be5324c6a254b98b87597587cd50ccb0abb0eb38ebf7e419b8973d5c9f9b3" Workload="localhost-k8s-coredns--6f6b679f8f--7z6wr-eth0" May 9 00:30:14.858740 containerd[1475]: 2025-05-09 00:30:14.849 [INFO][5505] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 00:30:14.858740 containerd[1475]: 2025-05-09 00:30:14.853 [INFO][5496] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d43be5324c6a254b98b87597587cd50ccb0abb0eb38ebf7e419b8973d5c9f9b3" May 9 00:30:14.858740 containerd[1475]: time="2025-05-09T00:30:14.858625458Z" level=info msg="TearDown network for sandbox \"d43be5324c6a254b98b87597587cd50ccb0abb0eb38ebf7e419b8973d5c9f9b3\" successfully" May 9 00:30:14.858740 containerd[1475]: time="2025-05-09T00:30:14.858658712Z" level=info msg="StopPodSandbox for \"d43be5324c6a254b98b87597587cd50ccb0abb0eb38ebf7e419b8973d5c9f9b3\" returns successfully" May 9 00:30:14.859387 containerd[1475]: time="2025-05-09T00:30:14.859251242Z" level=info msg="RemovePodSandbox for \"d43be5324c6a254b98b87597587cd50ccb0abb0eb38ebf7e419b8973d5c9f9b3\"" May 9 00:30:14.859387 containerd[1475]: time="2025-05-09T00:30:14.859282142Z" level=info msg="Forcibly stopping sandbox \"d43be5324c6a254b98b87597587cd50ccb0abb0eb38ebf7e419b8973d5c9f9b3\"" May 9 00:30:14.866609 sshd[5432]: pam_unix(sshd:session): session closed for user core May 9 00:30:14.874915 systemd[1]: sshd@15-10.0.0.48:22-10.0.0.1:47954.service: Deactivated successfully. May 9 00:30:14.878520 systemd[1]: session-16.scope: Deactivated successfully. May 9 00:30:14.883194 systemd-logind[1463]: Session 16 logged out. Waiting for processes to exit. May 9 00:30:14.889833 systemd-logind[1463]: Removed session 16. May 9 00:30:14.999226 containerd[1475]: 2025-05-09 00:30:14.935 [WARNING][5528] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d43be5324c6a254b98b87597587cd50ccb0abb0eb38ebf7e419b8973d5c9f9b3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--7z6wr-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"035e105a-b89a-4204-9406-d96aaeb0e048", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 29, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8aa06c5434252b666e03c04f920a56da2ed1147c3e09c1ec706c89cc220ff983", Pod:"coredns-6f6b679f8f-7z6wr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliee5bf67afd8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:30:14.999226 containerd[1475]: 2025-05-09 00:30:14.936 [INFO][5528] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d43be5324c6a254b98b87597587cd50ccb0abb0eb38ebf7e419b8973d5c9f9b3" May 9 00:30:14.999226 containerd[1475]: 2025-05-09 00:30:14.936 [INFO][5528] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d43be5324c6a254b98b87597587cd50ccb0abb0eb38ebf7e419b8973d5c9f9b3" iface="eth0" netns="" May 9 00:30:14.999226 containerd[1475]: 2025-05-09 00:30:14.936 [INFO][5528] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d43be5324c6a254b98b87597587cd50ccb0abb0eb38ebf7e419b8973d5c9f9b3" May 9 00:30:14.999226 containerd[1475]: 2025-05-09 00:30:14.936 [INFO][5528] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d43be5324c6a254b98b87597587cd50ccb0abb0eb38ebf7e419b8973d5c9f9b3" May 9 00:30:14.999226 containerd[1475]: 2025-05-09 00:30:14.979 [INFO][5538] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d43be5324c6a254b98b87597587cd50ccb0abb0eb38ebf7e419b8973d5c9f9b3" HandleID="k8s-pod-network.d43be5324c6a254b98b87597587cd50ccb0abb0eb38ebf7e419b8973d5c9f9b3" Workload="localhost-k8s-coredns--6f6b679f8f--7z6wr-eth0" May 9 00:30:14.999226 containerd[1475]: 2025-05-09 00:30:14.979 [INFO][5538] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:30:14.999226 containerd[1475]: 2025-05-09 00:30:14.980 [INFO][5538] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 00:30:14.999226 containerd[1475]: 2025-05-09 00:30:14.988 [WARNING][5538] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d43be5324c6a254b98b87597587cd50ccb0abb0eb38ebf7e419b8973d5c9f9b3" HandleID="k8s-pod-network.d43be5324c6a254b98b87597587cd50ccb0abb0eb38ebf7e419b8973d5c9f9b3" Workload="localhost-k8s-coredns--6f6b679f8f--7z6wr-eth0" May 9 00:30:14.999226 containerd[1475]: 2025-05-09 00:30:14.988 [INFO][5538] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d43be5324c6a254b98b87597587cd50ccb0abb0eb38ebf7e419b8973d5c9f9b3" HandleID="k8s-pod-network.d43be5324c6a254b98b87597587cd50ccb0abb0eb38ebf7e419b8973d5c9f9b3" Workload="localhost-k8s-coredns--6f6b679f8f--7z6wr-eth0" May 9 00:30:14.999226 containerd[1475]: 2025-05-09 00:30:14.992 [INFO][5538] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 00:30:14.999226 containerd[1475]: 2025-05-09 00:30:14.995 [INFO][5528] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d43be5324c6a254b98b87597587cd50ccb0abb0eb38ebf7e419b8973d5c9f9b3" May 9 00:30:14.999967 containerd[1475]: time="2025-05-09T00:30:14.999286631Z" level=info msg="TearDown network for sandbox \"d43be5324c6a254b98b87597587cd50ccb0abb0eb38ebf7e419b8973d5c9f9b3\" successfully" May 9 00:30:15.009444 containerd[1475]: time="2025-05-09T00:30:15.009347954Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d43be5324c6a254b98b87597587cd50ccb0abb0eb38ebf7e419b8973d5c9f9b3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 9 00:30:15.009673 containerd[1475]: time="2025-05-09T00:30:15.009471854Z" level=info msg="RemovePodSandbox \"d43be5324c6a254b98b87597587cd50ccb0abb0eb38ebf7e419b8973d5c9f9b3\" returns successfully" May 9 00:30:15.010330 containerd[1475]: time="2025-05-09T00:30:15.010275122Z" level=info msg="StopPodSandbox for \"dc2d36004e5f5a123829046dd60eb3b3c7f628719b87a984e0b9e912fbb74086\"" May 9 00:30:15.126303 containerd[1475]: 2025-05-09 00:30:15.077 [WARNING][5560] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dc2d36004e5f5a123829046dd60eb3b3c7f628719b87a984e0b9e912fbb74086" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54f6f995b9--qzmd8-eth0", GenerateName:"calico-apiserver-54f6f995b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"8e3a567d-acd2-4bb0-bcdc-d60762255110", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 29, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54f6f995b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d9990100586ffde5d90751338577ceeb52ed9d30203329ecc04d6a3f308997f7", Pod:"calico-apiserver-54f6f995b9-qzmd8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid8f438611ef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:30:15.126303 containerd[1475]: 2025-05-09 00:30:15.077 [INFO][5560] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dc2d36004e5f5a123829046dd60eb3b3c7f628719b87a984e0b9e912fbb74086" May 9 00:30:15.126303 containerd[1475]: 2025-05-09 00:30:15.077 [INFO][5560] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dc2d36004e5f5a123829046dd60eb3b3c7f628719b87a984e0b9e912fbb74086" iface="eth0" netns="" May 9 00:30:15.126303 containerd[1475]: 2025-05-09 00:30:15.077 [INFO][5560] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dc2d36004e5f5a123829046dd60eb3b3c7f628719b87a984e0b9e912fbb74086" May 9 00:30:15.126303 containerd[1475]: 2025-05-09 00:30:15.077 [INFO][5560] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dc2d36004e5f5a123829046dd60eb3b3c7f628719b87a984e0b9e912fbb74086" May 9 00:30:15.126303 containerd[1475]: 2025-05-09 00:30:15.109 [INFO][5568] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dc2d36004e5f5a123829046dd60eb3b3c7f628719b87a984e0b9e912fbb74086" HandleID="k8s-pod-network.dc2d36004e5f5a123829046dd60eb3b3c7f628719b87a984e0b9e912fbb74086" Workload="localhost-k8s-calico--apiserver--54f6f995b9--qzmd8-eth0" May 9 00:30:15.126303 containerd[1475]: 2025-05-09 00:30:15.109 [INFO][5568] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:30:15.126303 containerd[1475]: 2025-05-09 00:30:15.109 [INFO][5568] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 00:30:15.126303 containerd[1475]: 2025-05-09 00:30:15.117 [WARNING][5568] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dc2d36004e5f5a123829046dd60eb3b3c7f628719b87a984e0b9e912fbb74086" HandleID="k8s-pod-network.dc2d36004e5f5a123829046dd60eb3b3c7f628719b87a984e0b9e912fbb74086" Workload="localhost-k8s-calico--apiserver--54f6f995b9--qzmd8-eth0" May 9 00:30:15.126303 containerd[1475]: 2025-05-09 00:30:15.117 [INFO][5568] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dc2d36004e5f5a123829046dd60eb3b3c7f628719b87a984e0b9e912fbb74086" HandleID="k8s-pod-network.dc2d36004e5f5a123829046dd60eb3b3c7f628719b87a984e0b9e912fbb74086" Workload="localhost-k8s-calico--apiserver--54f6f995b9--qzmd8-eth0" May 9 00:30:15.126303 containerd[1475]: 2025-05-09 00:30:15.120 [INFO][5568] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 00:30:15.126303 containerd[1475]: 2025-05-09 00:30:15.122 [INFO][5560] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dc2d36004e5f5a123829046dd60eb3b3c7f628719b87a984e0b9e912fbb74086" May 9 00:30:15.126303 containerd[1475]: time="2025-05-09T00:30:15.126280612Z" level=info msg="TearDown network for sandbox \"dc2d36004e5f5a123829046dd60eb3b3c7f628719b87a984e0b9e912fbb74086\" successfully" May 9 00:30:15.126792 containerd[1475]: time="2025-05-09T00:30:15.126315830Z" level=info msg="StopPodSandbox for \"dc2d36004e5f5a123829046dd60eb3b3c7f628719b87a984e0b9e912fbb74086\" returns successfully" May 9 00:30:15.128642 containerd[1475]: time="2025-05-09T00:30:15.128143103Z" level=info msg="RemovePodSandbox for \"dc2d36004e5f5a123829046dd60eb3b3c7f628719b87a984e0b9e912fbb74086\"" May 9 00:30:15.128642 containerd[1475]: time="2025-05-09T00:30:15.128205414Z" level=info msg="Forcibly stopping sandbox \"dc2d36004e5f5a123829046dd60eb3b3c7f628719b87a984e0b9e912fbb74086\"" May 9 00:30:15.212243 containerd[1475]: 2025-05-09 00:30:15.175 [WARNING][5590] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dc2d36004e5f5a123829046dd60eb3b3c7f628719b87a984e0b9e912fbb74086" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54f6f995b9--qzmd8-eth0", GenerateName:"calico-apiserver-54f6f995b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"8e3a567d-acd2-4bb0-bcdc-d60762255110", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 29, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54f6f995b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d9990100586ffde5d90751338577ceeb52ed9d30203329ecc04d6a3f308997f7", Pod:"calico-apiserver-54f6f995b9-qzmd8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid8f438611ef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:30:15.212243 containerd[1475]: 2025-05-09 00:30:15.176 [INFO][5590] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dc2d36004e5f5a123829046dd60eb3b3c7f628719b87a984e0b9e912fbb74086" May 9 00:30:15.212243 containerd[1475]: 2025-05-09 00:30:15.176 [INFO][5590] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dc2d36004e5f5a123829046dd60eb3b3c7f628719b87a984e0b9e912fbb74086" iface="eth0" netns="" May 9 00:30:15.212243 containerd[1475]: 2025-05-09 00:30:15.176 [INFO][5590] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dc2d36004e5f5a123829046dd60eb3b3c7f628719b87a984e0b9e912fbb74086" May 9 00:30:15.212243 containerd[1475]: 2025-05-09 00:30:15.176 [INFO][5590] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dc2d36004e5f5a123829046dd60eb3b3c7f628719b87a984e0b9e912fbb74086" May 9 00:30:15.212243 containerd[1475]: 2025-05-09 00:30:15.198 [INFO][5598] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dc2d36004e5f5a123829046dd60eb3b3c7f628719b87a984e0b9e912fbb74086" HandleID="k8s-pod-network.dc2d36004e5f5a123829046dd60eb3b3c7f628719b87a984e0b9e912fbb74086" Workload="localhost-k8s-calico--apiserver--54f6f995b9--qzmd8-eth0" May 9 00:30:15.212243 containerd[1475]: 2025-05-09 00:30:15.198 [INFO][5598] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:30:15.212243 containerd[1475]: 2025-05-09 00:30:15.198 [INFO][5598] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 00:30:15.212243 containerd[1475]: 2025-05-09 00:30:15.205 [WARNING][5598] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dc2d36004e5f5a123829046dd60eb3b3c7f628719b87a984e0b9e912fbb74086" HandleID="k8s-pod-network.dc2d36004e5f5a123829046dd60eb3b3c7f628719b87a984e0b9e912fbb74086" Workload="localhost-k8s-calico--apiserver--54f6f995b9--qzmd8-eth0" May 9 00:30:15.212243 containerd[1475]: 2025-05-09 00:30:15.205 [INFO][5598] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dc2d36004e5f5a123829046dd60eb3b3c7f628719b87a984e0b9e912fbb74086" HandleID="k8s-pod-network.dc2d36004e5f5a123829046dd60eb3b3c7f628719b87a984e0b9e912fbb74086" Workload="localhost-k8s-calico--apiserver--54f6f995b9--qzmd8-eth0" May 9 00:30:15.212243 containerd[1475]: 2025-05-09 00:30:15.206 [INFO][5598] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 00:30:15.212243 containerd[1475]: 2025-05-09 00:30:15.209 [INFO][5590] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dc2d36004e5f5a123829046dd60eb3b3c7f628719b87a984e0b9e912fbb74086" May 9 00:30:15.212824 containerd[1475]: time="2025-05-09T00:30:15.212291010Z" level=info msg="TearDown network for sandbox \"dc2d36004e5f5a123829046dd60eb3b3c7f628719b87a984e0b9e912fbb74086\" successfully" May 9 00:30:15.217054 containerd[1475]: time="2025-05-09T00:30:15.216973379Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dc2d36004e5f5a123829046dd60eb3b3c7f628719b87a984e0b9e912fbb74086\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 9 00:30:15.217241 containerd[1475]: time="2025-05-09T00:30:15.217078714Z" level=info msg="RemovePodSandbox \"dc2d36004e5f5a123829046dd60eb3b3c7f628719b87a984e0b9e912fbb74086\" returns successfully" May 9 00:30:15.217727 containerd[1475]: time="2025-05-09T00:30:15.217692293Z" level=info msg="StopPodSandbox for \"f6c5edeb788588ab9dba05ed96471478728efaf22402c4f4c944ca9830ba2fce\"" May 9 00:30:15.294713 containerd[1475]: 2025-05-09 00:30:15.258 [WARNING][5621] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f6c5edeb788588ab9dba05ed96471478728efaf22402c4f4c944ca9830ba2fce" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54f6f995b9--66cpx-eth0", GenerateName:"calico-apiserver-54f6f995b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"fcb70e53-0375-414e-8457-33f88270eeb6", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 29, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54f6f995b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"98bac03dcecb2bd5d2f44be12ccc6bde7773c7c333b4924cf2f1d98efda7257f", Pod:"calico-apiserver-54f6f995b9-66cpx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali07b00bf08c9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:30:15.294713 containerd[1475]: 2025-05-09 00:30:15.259 [INFO][5621] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f6c5edeb788588ab9dba05ed96471478728efaf22402c4f4c944ca9830ba2fce" May 9 00:30:15.294713 containerd[1475]: 2025-05-09 00:30:15.259 [INFO][5621] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f6c5edeb788588ab9dba05ed96471478728efaf22402c4f4c944ca9830ba2fce" iface="eth0" netns="" May 9 00:30:15.294713 containerd[1475]: 2025-05-09 00:30:15.259 [INFO][5621] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f6c5edeb788588ab9dba05ed96471478728efaf22402c4f4c944ca9830ba2fce" May 9 00:30:15.294713 containerd[1475]: 2025-05-09 00:30:15.259 [INFO][5621] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f6c5edeb788588ab9dba05ed96471478728efaf22402c4f4c944ca9830ba2fce" May 9 00:30:15.294713 containerd[1475]: 2025-05-09 00:30:15.281 [INFO][5629] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f6c5edeb788588ab9dba05ed96471478728efaf22402c4f4c944ca9830ba2fce" HandleID="k8s-pod-network.f6c5edeb788588ab9dba05ed96471478728efaf22402c4f4c944ca9830ba2fce" Workload="localhost-k8s-calico--apiserver--54f6f995b9--66cpx-eth0" May 9 00:30:15.294713 containerd[1475]: 2025-05-09 00:30:15.281 [INFO][5629] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:30:15.294713 containerd[1475]: 2025-05-09 00:30:15.281 [INFO][5629] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 00:30:15.294713 containerd[1475]: 2025-05-09 00:30:15.286 [WARNING][5629] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f6c5edeb788588ab9dba05ed96471478728efaf22402c4f4c944ca9830ba2fce" HandleID="k8s-pod-network.f6c5edeb788588ab9dba05ed96471478728efaf22402c4f4c944ca9830ba2fce" Workload="localhost-k8s-calico--apiserver--54f6f995b9--66cpx-eth0" May 9 00:30:15.294713 containerd[1475]: 2025-05-09 00:30:15.286 [INFO][5629] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f6c5edeb788588ab9dba05ed96471478728efaf22402c4f4c944ca9830ba2fce" HandleID="k8s-pod-network.f6c5edeb788588ab9dba05ed96471478728efaf22402c4f4c944ca9830ba2fce" Workload="localhost-k8s-calico--apiserver--54f6f995b9--66cpx-eth0" May 9 00:30:15.294713 containerd[1475]: 2025-05-09 00:30:15.287 [INFO][5629] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 00:30:15.294713 containerd[1475]: 2025-05-09 00:30:15.290 [INFO][5621] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f6c5edeb788588ab9dba05ed96471478728efaf22402c4f4c944ca9830ba2fce" May 9 00:30:15.295210 containerd[1475]: time="2025-05-09T00:30:15.294765211Z" level=info msg="TearDown network for sandbox \"f6c5edeb788588ab9dba05ed96471478728efaf22402c4f4c944ca9830ba2fce\" successfully" May 9 00:30:15.295210 containerd[1475]: time="2025-05-09T00:30:15.294793566Z" level=info msg="StopPodSandbox for \"f6c5edeb788588ab9dba05ed96471478728efaf22402c4f4c944ca9830ba2fce\" returns successfully" May 9 00:30:15.295341 containerd[1475]: time="2025-05-09T00:30:15.295318063Z" level=info msg="RemovePodSandbox for \"f6c5edeb788588ab9dba05ed96471478728efaf22402c4f4c944ca9830ba2fce\"" May 9 00:30:15.295421 containerd[1475]: time="2025-05-09T00:30:15.295346027Z" level=info msg="Forcibly stopping sandbox \"f6c5edeb788588ab9dba05ed96471478728efaf22402c4f4c944ca9830ba2fce\"" May 9 00:30:15.361229 containerd[1475]: 2025-05-09 00:30:15.330 [WARNING][5652] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f6c5edeb788588ab9dba05ed96471478728efaf22402c4f4c944ca9830ba2fce" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54f6f995b9--66cpx-eth0", GenerateName:"calico-apiserver-54f6f995b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"fcb70e53-0375-414e-8457-33f88270eeb6", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 29, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54f6f995b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"98bac03dcecb2bd5d2f44be12ccc6bde7773c7c333b4924cf2f1d98efda7257f", Pod:"calico-apiserver-54f6f995b9-66cpx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali07b00bf08c9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:30:15.361229 containerd[1475]: 2025-05-09 00:30:15.331 [INFO][5652] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f6c5edeb788588ab9dba05ed96471478728efaf22402c4f4c944ca9830ba2fce" May 9 00:30:15.361229 containerd[1475]: 2025-05-09 00:30:15.331 [INFO][5652] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f6c5edeb788588ab9dba05ed96471478728efaf22402c4f4c944ca9830ba2fce" iface="eth0" netns="" May 9 00:30:15.361229 containerd[1475]: 2025-05-09 00:30:15.331 [INFO][5652] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f6c5edeb788588ab9dba05ed96471478728efaf22402c4f4c944ca9830ba2fce" May 9 00:30:15.361229 containerd[1475]: 2025-05-09 00:30:15.331 [INFO][5652] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f6c5edeb788588ab9dba05ed96471478728efaf22402c4f4c944ca9830ba2fce" May 9 00:30:15.361229 containerd[1475]: 2025-05-09 00:30:15.350 [INFO][5660] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f6c5edeb788588ab9dba05ed96471478728efaf22402c4f4c944ca9830ba2fce" HandleID="k8s-pod-network.f6c5edeb788588ab9dba05ed96471478728efaf22402c4f4c944ca9830ba2fce" Workload="localhost-k8s-calico--apiserver--54f6f995b9--66cpx-eth0" May 9 00:30:15.361229 containerd[1475]: 2025-05-09 00:30:15.350 [INFO][5660] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:30:15.361229 containerd[1475]: 2025-05-09 00:30:15.350 [INFO][5660] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 00:30:15.361229 containerd[1475]: 2025-05-09 00:30:15.355 [WARNING][5660] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f6c5edeb788588ab9dba05ed96471478728efaf22402c4f4c944ca9830ba2fce" HandleID="k8s-pod-network.f6c5edeb788588ab9dba05ed96471478728efaf22402c4f4c944ca9830ba2fce" Workload="localhost-k8s-calico--apiserver--54f6f995b9--66cpx-eth0" May 9 00:30:15.361229 containerd[1475]: 2025-05-09 00:30:15.355 [INFO][5660] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f6c5edeb788588ab9dba05ed96471478728efaf22402c4f4c944ca9830ba2fce" HandleID="k8s-pod-network.f6c5edeb788588ab9dba05ed96471478728efaf22402c4f4c944ca9830ba2fce" Workload="localhost-k8s-calico--apiserver--54f6f995b9--66cpx-eth0" May 9 00:30:15.361229 containerd[1475]: 2025-05-09 00:30:15.356 [INFO][5660] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 00:30:15.361229 containerd[1475]: 2025-05-09 00:30:15.358 [INFO][5652] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f6c5edeb788588ab9dba05ed96471478728efaf22402c4f4c944ca9830ba2fce" May 9 00:30:15.361945 containerd[1475]: time="2025-05-09T00:30:15.361272523Z" level=info msg="TearDown network for sandbox \"f6c5edeb788588ab9dba05ed96471478728efaf22402c4f4c944ca9830ba2fce\" successfully" May 9 00:30:15.365367 containerd[1475]: time="2025-05-09T00:30:15.365339689Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f6c5edeb788588ab9dba05ed96471478728efaf22402c4f4c944ca9830ba2fce\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 9 00:30:15.365409 containerd[1475]: time="2025-05-09T00:30:15.365391840Z" level=info msg="RemovePodSandbox \"f6c5edeb788588ab9dba05ed96471478728efaf22402c4f4c944ca9830ba2fce\" returns successfully" May 9 00:30:15.365991 containerd[1475]: time="2025-05-09T00:30:15.365945433Z" level=info msg="StopPodSandbox for \"41762b40d8d344c60c8a644dcd6d495ff65c3ad2bd643d088969279b7d6bbec9\"" May 9 00:30:15.434844 containerd[1475]: 2025-05-09 00:30:15.404 [WARNING][5682] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="41762b40d8d344c60c8a644dcd6d495ff65c3ad2bd643d088969279b7d6bbec9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--7htnm-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"75434938-61cf-41bc-bc17-24399a2f1b29", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 29, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c60caf5e913e818d19c3bcf95dcd90aec084935d7d5bdab85deabeb169f3817e", Pod:"coredns-6f6b679f8f-7htnm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid140413bf93", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 00:30:15.434844 containerd[1475]: 2025-05-09 00:30:15.404 [INFO][5682] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="41762b40d8d344c60c8a644dcd6d495ff65c3ad2bd643d088969279b7d6bbec9" May 9 00:30:15.434844 containerd[1475]: 2025-05-09 00:30:15.404 [INFO][5682] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="41762b40d8d344c60c8a644dcd6d495ff65c3ad2bd643d088969279b7d6bbec9" iface="eth0" netns="" May 9 00:30:15.434844 containerd[1475]: 2025-05-09 00:30:15.404 [INFO][5682] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="41762b40d8d344c60c8a644dcd6d495ff65c3ad2bd643d088969279b7d6bbec9" May 9 00:30:15.434844 containerd[1475]: 2025-05-09 00:30:15.404 [INFO][5682] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="41762b40d8d344c60c8a644dcd6d495ff65c3ad2bd643d088969279b7d6bbec9" May 9 00:30:15.434844 containerd[1475]: 2025-05-09 00:30:15.423 [INFO][5690] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="41762b40d8d344c60c8a644dcd6d495ff65c3ad2bd643d088969279b7d6bbec9" HandleID="k8s-pod-network.41762b40d8d344c60c8a644dcd6d495ff65c3ad2bd643d088969279b7d6bbec9" Workload="localhost-k8s-coredns--6f6b679f8f--7htnm-eth0" May 9 00:30:15.434844 containerd[1475]: 2025-05-09 00:30:15.423 [INFO][5690] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 00:30:15.434844 containerd[1475]: 2025-05-09 00:30:15.423 [INFO][5690] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 00:30:15.434844 containerd[1475]: 2025-05-09 00:30:15.428 [WARNING][5690] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="41762b40d8d344c60c8a644dcd6d495ff65c3ad2bd643d088969279b7d6bbec9" HandleID="k8s-pod-network.41762b40d8d344c60c8a644dcd6d495ff65c3ad2bd643d088969279b7d6bbec9" Workload="localhost-k8s-coredns--6f6b679f8f--7htnm-eth0"
May 9 00:30:15.434844 containerd[1475]: 2025-05-09 00:30:15.428 [INFO][5690] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="41762b40d8d344c60c8a644dcd6d495ff65c3ad2bd643d088969279b7d6bbec9" HandleID="k8s-pod-network.41762b40d8d344c60c8a644dcd6d495ff65c3ad2bd643d088969279b7d6bbec9" Workload="localhost-k8s-coredns--6f6b679f8f--7htnm-eth0"
May 9 00:30:15.434844 containerd[1475]: 2025-05-09 00:30:15.430 [INFO][5690] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 9 00:30:15.434844 containerd[1475]: 2025-05-09 00:30:15.432 [INFO][5682] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="41762b40d8d344c60c8a644dcd6d495ff65c3ad2bd643d088969279b7d6bbec9"
May 9 00:30:15.435433 containerd[1475]: time="2025-05-09T00:30:15.434901692Z" level=info msg="TearDown network for sandbox \"41762b40d8d344c60c8a644dcd6d495ff65c3ad2bd643d088969279b7d6bbec9\" successfully"
May 9 00:30:15.435433 containerd[1475]: time="2025-05-09T00:30:15.434929587Z" level=info msg="StopPodSandbox for \"41762b40d8d344c60c8a644dcd6d495ff65c3ad2bd643d088969279b7d6bbec9\" returns successfully"
May 9 00:30:15.436100 containerd[1475]: time="2025-05-09T00:30:15.435580540Z" level=info msg="RemovePodSandbox for \"41762b40d8d344c60c8a644dcd6d495ff65c3ad2bd643d088969279b7d6bbec9\""
May 9 00:30:15.436100 containerd[1475]: time="2025-05-09T00:30:15.435631097Z" level=info msg="Forcibly stopping sandbox \"41762b40d8d344c60c8a644dcd6d495ff65c3ad2bd643d088969279b7d6bbec9\""
May 9 00:30:15.514235 containerd[1475]: 2025-05-09 00:30:15.477 [WARNING][5712] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="41762b40d8d344c60c8a644dcd6d495ff65c3ad2bd643d088969279b7d6bbec9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--7htnm-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"75434938-61cf-41bc-bc17-24399a2f1b29", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 0, 29, 21, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c60caf5e913e818d19c3bcf95dcd90aec084935d7d5bdab85deabeb169f3817e", Pod:"coredns-6f6b679f8f-7htnm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid140413bf93", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
May 9 00:30:15.514235 containerd[1475]: 2025-05-09 00:30:15.478 [INFO][5712] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="41762b40d8d344c60c8a644dcd6d495ff65c3ad2bd643d088969279b7d6bbec9"
May 9 00:30:15.514235 containerd[1475]: 2025-05-09 00:30:15.478 [INFO][5712] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="41762b40d8d344c60c8a644dcd6d495ff65c3ad2bd643d088969279b7d6bbec9" iface="eth0" netns=""
May 9 00:30:15.514235 containerd[1475]: 2025-05-09 00:30:15.478 [INFO][5712] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="41762b40d8d344c60c8a644dcd6d495ff65c3ad2bd643d088969279b7d6bbec9"
May 9 00:30:15.514235 containerd[1475]: 2025-05-09 00:30:15.478 [INFO][5712] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="41762b40d8d344c60c8a644dcd6d495ff65c3ad2bd643d088969279b7d6bbec9"
May 9 00:30:15.514235 containerd[1475]: 2025-05-09 00:30:15.502 [INFO][5721] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="41762b40d8d344c60c8a644dcd6d495ff65c3ad2bd643d088969279b7d6bbec9" HandleID="k8s-pod-network.41762b40d8d344c60c8a644dcd6d495ff65c3ad2bd643d088969279b7d6bbec9" Workload="localhost-k8s-coredns--6f6b679f8f--7htnm-eth0"
May 9 00:30:15.514235 containerd[1475]: 2025-05-09 00:30:15.502 [INFO][5721] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 9 00:30:15.514235 containerd[1475]: 2025-05-09 00:30:15.502 [INFO][5721] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 9 00:30:15.514235 containerd[1475]: 2025-05-09 00:30:15.508 [WARNING][5721] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="41762b40d8d344c60c8a644dcd6d495ff65c3ad2bd643d088969279b7d6bbec9" HandleID="k8s-pod-network.41762b40d8d344c60c8a644dcd6d495ff65c3ad2bd643d088969279b7d6bbec9" Workload="localhost-k8s-coredns--6f6b679f8f--7htnm-eth0"
May 9 00:30:15.514235 containerd[1475]: 2025-05-09 00:30:15.508 [INFO][5721] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="41762b40d8d344c60c8a644dcd6d495ff65c3ad2bd643d088969279b7d6bbec9" HandleID="k8s-pod-network.41762b40d8d344c60c8a644dcd6d495ff65c3ad2bd643d088969279b7d6bbec9" Workload="localhost-k8s-coredns--6f6b679f8f--7htnm-eth0"
May 9 00:30:15.514235 containerd[1475]: 2025-05-09 00:30:15.509 [INFO][5721] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 9 00:30:15.514235 containerd[1475]: 2025-05-09 00:30:15.511 [INFO][5712] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="41762b40d8d344c60c8a644dcd6d495ff65c3ad2bd643d088969279b7d6bbec9"
May 9 00:30:15.514826 containerd[1475]: time="2025-05-09T00:30:15.514282295Z" level=info msg="TearDown network for sandbox \"41762b40d8d344c60c8a644dcd6d495ff65c3ad2bd643d088969279b7d6bbec9\" successfully"
May 9 00:30:15.518584 containerd[1475]: time="2025-05-09T00:30:15.518525744Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"41762b40d8d344c60c8a644dcd6d495ff65c3ad2bd643d088969279b7d6bbec9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 9 00:30:15.518945 containerd[1475]: time="2025-05-09T00:30:15.518623824Z" level=info msg="RemovePodSandbox \"41762b40d8d344c60c8a644dcd6d495ff65c3ad2bd643d088969279b7d6bbec9\" returns successfully"
May 9 00:30:19.891511 systemd[1]: Started sshd@16-10.0.0.48:22-10.0.0.1:52870.service - OpenSSH per-connection server daemon (10.0.0.1:52870).
May 9 00:30:19.953498 sshd[5732]: Accepted publickey for core from 10.0.0.1 port 52870 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU
May 9 00:30:19.956387 sshd[5732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:30:19.968247 systemd-logind[1463]: New session 17 of user core.
May 9 00:30:19.977271 systemd[1]: Started session-17.scope - Session 17 of User core.
May 9 00:30:20.158137 sshd[5732]: pam_unix(sshd:session): session closed for user core
May 9 00:30:20.182738 systemd[1]: sshd@16-10.0.0.48:22-10.0.0.1:52870.service: Deactivated successfully.
May 9 00:30:20.191386 systemd[1]: session-17.scope: Deactivated successfully.
May 9 00:30:20.199712 systemd-logind[1463]: Session 17 logged out. Waiting for processes to exit.
May 9 00:30:20.219570 systemd[1]: Started sshd@17-10.0.0.48:22-10.0.0.1:52876.service - OpenSSH per-connection server daemon (10.0.0.1:52876).
May 9 00:30:20.221789 systemd-logind[1463]: Removed session 17.
May 9 00:30:20.278490 sshd[5746]: Accepted publickey for core from 10.0.0.1 port 52876 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU
May 9 00:30:20.287479 sshd[5746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:30:20.298231 systemd-logind[1463]: New session 18 of user core.
May 9 00:30:20.310291 systemd[1]: Started session-18.scope - Session 18 of User core.
May 9 00:30:20.705743 sshd[5746]: pam_unix(sshd:session): session closed for user core
May 9 00:30:20.718017 systemd[1]: sshd@17-10.0.0.48:22-10.0.0.1:52876.service: Deactivated successfully.
May 9 00:30:20.720415 systemd[1]: session-18.scope: Deactivated successfully.
May 9 00:30:20.722845 systemd-logind[1463]: Session 18 logged out. Waiting for processes to exit.
May 9 00:30:20.724616 systemd[1]: Started sshd@18-10.0.0.48:22-10.0.0.1:52890.service - OpenSSH per-connection server daemon (10.0.0.1:52890).
May 9 00:30:20.725677 systemd-logind[1463]: Removed session 18.
May 9 00:30:20.774970 sshd[5758]: Accepted publickey for core from 10.0.0.1 port 52890 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU
May 9 00:30:20.776917 sshd[5758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:30:20.782950 systemd-logind[1463]: New session 19 of user core.
May 9 00:30:20.788046 systemd[1]: Started session-19.scope - Session 19 of User core.
May 9 00:30:23.278077 sshd[5758]: pam_unix(sshd:session): session closed for user core
May 9 00:30:23.295251 systemd[1]: sshd@18-10.0.0.48:22-10.0.0.1:52890.service: Deactivated successfully.
May 9 00:30:23.298685 systemd[1]: session-19.scope: Deactivated successfully.
May 9 00:30:23.303859 systemd-logind[1463]: Session 19 logged out. Waiting for processes to exit.
May 9 00:30:23.312908 systemd[1]: Started sshd@19-10.0.0.48:22-10.0.0.1:52898.service - OpenSSH per-connection server daemon (10.0.0.1:52898).
May 9 00:30:23.315394 systemd-logind[1463]: Removed session 19.
May 9 00:30:23.368563 sshd[5798]: Accepted publickey for core from 10.0.0.1 port 52898 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU
May 9 00:30:23.371040 sshd[5798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:30:23.384363 systemd-logind[1463]: New session 20 of user core.
May 9 00:30:23.393326 systemd[1]: Started session-20.scope - Session 20 of User core.
May 9 00:30:23.737406 kubelet[2506]: E0509 00:30:23.737045 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:30:23.971170 sshd[5798]: pam_unix(sshd:session): session closed for user core
May 9 00:30:23.983555 systemd[1]: sshd@19-10.0.0.48:22-10.0.0.1:52898.service: Deactivated successfully.
May 9 00:30:23.987423 systemd[1]: session-20.scope: Deactivated successfully.
May 9 00:30:23.991553 systemd-logind[1463]: Session 20 logged out. Waiting for processes to exit.
May 9 00:30:23.997497 systemd[1]: Started sshd@20-10.0.0.48:22-10.0.0.1:52902.service - OpenSSH per-connection server daemon (10.0.0.1:52902).
May 9 00:30:24.000017 systemd-logind[1463]: Removed session 20.
May 9 00:30:24.045239 sshd[5833]: Accepted publickey for core from 10.0.0.1 port 52902 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU
May 9 00:30:24.047867 sshd[5833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:30:24.058002 systemd-logind[1463]: New session 21 of user core.
May 9 00:30:24.069255 systemd[1]: Started session-21.scope - Session 21 of User core.
May 9 00:30:24.229267 sshd[5833]: pam_unix(sshd:session): session closed for user core
May 9 00:30:24.233686 systemd[1]: sshd@20-10.0.0.48:22-10.0.0.1:52902.service: Deactivated successfully.
May 9 00:30:24.235796 systemd[1]: session-21.scope: Deactivated successfully.
May 9 00:30:24.236615 systemd-logind[1463]: Session 21 logged out. Waiting for processes to exit.
May 9 00:30:24.237563 systemd-logind[1463]: Removed session 21.
May 9 00:30:27.332164 kubelet[2506]: E0509 00:30:27.332103 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:30:29.240967 systemd[1]: Started sshd@21-10.0.0.48:22-10.0.0.1:57450.service - OpenSSH per-connection server daemon (10.0.0.1:57450).
May 9 00:30:29.272828 sshd[5847]: Accepted publickey for core from 10.0.0.1 port 57450 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU
May 9 00:30:29.274645 sshd[5847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:30:29.278635 systemd-logind[1463]: New session 22 of user core.
May 9 00:30:29.285006 systemd[1]: Started session-22.scope - Session 22 of User core.
May 9 00:30:29.401081 sshd[5847]: pam_unix(sshd:session): session closed for user core
May 9 00:30:29.406280 systemd[1]: sshd@21-10.0.0.48:22-10.0.0.1:57450.service: Deactivated successfully.
May 9 00:30:29.409744 systemd[1]: session-22.scope: Deactivated successfully.
May 9 00:30:29.410775 systemd-logind[1463]: Session 22 logged out. Waiting for processes to exit.
May 9 00:30:29.412107 systemd-logind[1463]: Removed session 22.
May 9 00:30:34.342490 kubelet[2506]: E0509 00:30:34.342274 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:30:34.445960 systemd[1]: Started sshd@22-10.0.0.48:22-10.0.0.1:57464.service - OpenSSH per-connection server daemon (10.0.0.1:57464).
May 9 00:30:34.550569 sshd[5871]: Accepted publickey for core from 10.0.0.1 port 57464 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU
May 9 00:30:34.553426 sshd[5871]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:30:34.571662 systemd-logind[1463]: New session 23 of user core.
May 9 00:30:34.598485 systemd[1]: Started session-23.scope - Session 23 of User core.
May 9 00:30:34.902564 sshd[5871]: pam_unix(sshd:session): session closed for user core
May 9 00:30:34.910474 systemd[1]: sshd@22-10.0.0.48:22-10.0.0.1:57464.service: Deactivated successfully.
May 9 00:30:34.914297 systemd[1]: session-23.scope: Deactivated successfully.
May 9 00:30:34.919425 systemd-logind[1463]: Session 23 logged out. Waiting for processes to exit.
May 9 00:30:34.924034 systemd-logind[1463]: Removed session 23.
May 9 00:30:39.917922 systemd[1]: Started sshd@23-10.0.0.48:22-10.0.0.1:57046.service - OpenSSH per-connection server daemon (10.0.0.1:57046).
May 9 00:30:39.955093 sshd[5908]: Accepted publickey for core from 10.0.0.1 port 57046 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU
May 9 00:30:39.957340 sshd[5908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:30:39.962658 systemd-logind[1463]: New session 24 of user core.
May 9 00:30:39.978193 systemd[1]: Started session-24.scope - Session 24 of User core.
May 9 00:30:40.121319 sshd[5908]: pam_unix(sshd:session): session closed for user core
May 9 00:30:40.126152 systemd[1]: sshd@23-10.0.0.48:22-10.0.0.1:57046.service: Deactivated successfully.
May 9 00:30:40.129165 systemd[1]: session-24.scope: Deactivated successfully.
May 9 00:30:40.130523 systemd-logind[1463]: Session 24 logged out. Waiting for processes to exit.
May 9 00:30:40.131537 systemd-logind[1463]: Removed session 24.
May 9 00:30:45.140945 systemd[1]: Started sshd@24-10.0.0.48:22-10.0.0.1:57050.service - OpenSSH per-connection server daemon (10.0.0.1:57050).
May 9 00:30:45.183488 sshd[5922]: Accepted publickey for core from 10.0.0.1 port 57050 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU
May 9 00:30:45.185822 sshd[5922]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:30:45.190927 systemd-logind[1463]: New session 25 of user core.
May 9 00:30:45.206251 systemd[1]: Started session-25.scope - Session 25 of User core.
May 9 00:30:45.348340 sshd[5922]: pam_unix(sshd:session): session closed for user core
May 9 00:30:45.354237 systemd[1]: sshd@24-10.0.0.48:22-10.0.0.1:57050.service: Deactivated successfully.
May 9 00:30:45.357296 systemd[1]: session-25.scope: Deactivated successfully.
May 9 00:30:45.360597 systemd-logind[1463]: Session 25 logged out. Waiting for processes to exit.
May 9 00:30:45.363734 systemd-logind[1463]: Removed session 25.