May 13 23:59:44.960028 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Tue May 13 22:19:41 -00 2025
May 13 23:59:44.960059 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=9290c9b76db63811f0d205969a93d9b54c3ea10aed4e7b51abfb58e812a25e51
May 13 23:59:44.960076 kernel: BIOS-provided physical RAM map:
May 13 23:59:44.960084 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
May 13 23:59:44.960092 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
May 13 23:59:44.960101 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
May 13 23:59:44.960112 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
May 13 23:59:44.960120 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
May 13 23:59:44.960129 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
May 13 23:59:44.960137 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
May 13 23:59:44.960145 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
May 13 23:59:44.960157 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
May 13 23:59:44.960165 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
May 13 23:59:44.960174 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
May 13 23:59:44.960195 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
May 13 23:59:44.960205 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
May 13 23:59:44.960218 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable
May 13 23:59:44.960227 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
May 13 23:59:44.960236 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
May 13 23:59:44.960245 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable
May 13 23:59:44.960254 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
May 13 23:59:44.960262 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
May 13 23:59:44.960271 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
May 13 23:59:44.960281 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 13 23:59:44.960289 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
May 13 23:59:44.960298 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 13 23:59:44.960308 kernel: NX (Execute Disable) protection: active
May 13 23:59:44.960321 kernel: APIC: Static calls initialized
May 13 23:59:44.960330 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
May 13 23:59:44.960339 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
May 13 23:59:44.960348 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
May 13 23:59:44.960357 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
May 13 23:59:44.960365 kernel: extended physical RAM map:
May 13 23:59:44.960375 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
May 13 23:59:44.960384 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
May 13 23:59:44.960394 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
May 13 23:59:44.960403 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
May 13 23:59:44.960412 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
May 13 23:59:44.960421 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
May 13 23:59:44.960434 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
May 13 23:59:44.960448 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable
May 13 23:59:44.960458 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable
May 13 23:59:44.960468 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable
May 13 23:59:44.960478 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable
May 13 23:59:44.960487 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable
May 13 23:59:44.960501 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
May 13 23:59:44.960511 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
May 13 23:59:44.960521 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
May 13 23:59:44.960530 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
May 13 23:59:44.960541 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
May 13 23:59:44.960551 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable
May 13 23:59:44.960562 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
May 13 23:59:44.960572 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
May 13 23:59:44.960582 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable
May 13 23:59:44.960596 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
May 13 23:59:44.960606 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
May 13 23:59:44.960616 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
May 13 23:59:44.960626 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 13 23:59:44.960636 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
May 13 23:59:44.960646 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 13 23:59:44.960655 kernel: efi: EFI v2.7 by EDK II
May 13 23:59:44.960665 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018
May 13 23:59:44.960717 kernel: random: crng init done
May 13 23:59:44.960728 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
May 13 23:59:44.960738 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
May 13 23:59:44.960747 kernel: secureboot: Secure boot disabled
May 13 23:59:44.960762 kernel: SMBIOS 2.8 present.
May 13 23:59:44.960772 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
May 13 23:59:44.960782 kernel: Hypervisor detected: KVM
May 13 23:59:44.960792 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 13 23:59:44.960802 kernel: kvm-clock: using sched offset of 2830667554 cycles
May 13 23:59:44.960812 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 13 23:59:44.960822 kernel: tsc: Detected 2794.746 MHz processor
May 13 23:59:44.960833 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 13 23:59:44.960843 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 13 23:59:44.960853 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
May 13 23:59:44.960867 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
May 13 23:59:44.960877 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 13 23:59:44.960888 kernel: Using GB pages for direct mapping
May 13 23:59:44.960897 kernel: ACPI: Early table checksum verification disabled
May 13 23:59:44.960908 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
May 13 23:59:44.960918 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
May 13 23:59:44.960929 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 13 23:59:44.960939 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 23:59:44.960949 kernel: ACPI: FACS 0x000000009CBDD000 000040
May 13 23:59:44.960962 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 23:59:44.960973 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 23:59:44.960983 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 23:59:44.960994 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 23:59:44.961004 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
May 13 23:59:44.961014 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
May 13 23:59:44.961025 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
May 13 23:59:44.961035 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
May 13 23:59:44.961045 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
May 13 23:59:44.961059 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
May 13 23:59:44.961069 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
May 13 23:59:44.961079 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
May 13 23:59:44.961089 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
May 13 23:59:44.961100 kernel: No NUMA configuration found
May 13 23:59:44.961111 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
May 13 23:59:44.961121 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff]
May 13 23:59:44.961132 kernel: Zone ranges:
May 13 23:59:44.961143 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 13 23:59:44.961156 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
May 13 23:59:44.961167 kernel: Normal empty
May 13 23:59:44.961177 kernel: Movable zone start for each node
May 13 23:59:44.961197 kernel: Early memory node ranges
May 13 23:59:44.961207 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
May 13 23:59:44.961217 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
May 13 23:59:44.961227 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
May 13 23:59:44.961237 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
May 13 23:59:44.961247 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
May 13 23:59:44.961257 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
May 13 23:59:44.961271 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff]
May 13 23:59:44.961281 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff]
May 13 23:59:44.961291 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
May 13 23:59:44.961302 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 13 23:59:44.961312 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
May 13 23:59:44.961332 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
May 13 23:59:44.961345 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 13 23:59:44.961356 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
May 13 23:59:44.961367 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
May 13 23:59:44.961377 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
May 13 23:59:44.961388 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
May 13 23:59:44.961399 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
May 13 23:59:44.961414 kernel: ACPI: PM-Timer IO Port: 0x608
May 13 23:59:44.961425 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 13 23:59:44.961436 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 13 23:59:44.961448 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 13 23:59:44.961459 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 13 23:59:44.961473 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 13 23:59:44.961483 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 13 23:59:44.961494 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 13 23:59:44.961504 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 13 23:59:44.961515 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 13 23:59:44.961525 kernel: TSC deadline timer available
May 13 23:59:44.961536 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
May 13 23:59:44.961547 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 13 23:59:44.961557 kernel: kvm-guest: KVM setup pv remote TLB flush
May 13 23:59:44.961571 kernel: kvm-guest: setup PV sched yield
May 13 23:59:44.961582 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
May 13 23:59:44.961592 kernel: Booting paravirtualized kernel on KVM
May 13 23:59:44.961603 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 13 23:59:44.961614 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
May 13 23:59:44.961625 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
May 13 23:59:44.961636 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
May 13 23:59:44.961647 kernel: pcpu-alloc: [0] 0 1 2 3
May 13 23:59:44.961658 kernel: kvm-guest: PV spinlocks enabled
May 13 23:59:44.961698 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 13 23:59:44.961712 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=9290c9b76db63811f0d205969a93d9b54c3ea10aed4e7b51abfb58e812a25e51
May 13 23:59:44.961723 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 13 23:59:44.961734 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 13 23:59:44.961746 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 13 23:59:44.961756 kernel: Fallback order for Node 0: 0
May 13 23:59:44.961767 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460
May 13 23:59:44.961777 kernel: Policy zone: DMA32
May 13 23:59:44.961792 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 13 23:59:44.961803 kernel: Memory: 2387720K/2565800K available (14336K kernel code, 2295K rwdata, 22864K rodata, 43480K init, 1596K bss, 177824K reserved, 0K cma-reserved)
May 13 23:59:44.961814 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 13 23:59:44.961824 kernel: ftrace: allocating 37918 entries in 149 pages
May 13 23:59:44.961835 kernel: ftrace: allocated 149 pages with 4 groups
May 13 23:59:44.961845 kernel: Dynamic Preempt: voluntary
May 13 23:59:44.961856 kernel: rcu: Preemptible hierarchical RCU implementation.
May 13 23:59:44.961867 kernel: rcu: RCU event tracing is enabled.
May 13 23:59:44.961879 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 13 23:59:44.961893 kernel: Trampoline variant of Tasks RCU enabled.
May 13 23:59:44.961904 kernel: Rude variant of Tasks RCU enabled.
May 13 23:59:44.961915 kernel: Tracing variant of Tasks RCU enabled.
May 13 23:59:44.961925 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 13 23:59:44.961936 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 13 23:59:44.961946 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 13 23:59:44.961956 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 13 23:59:44.961965 kernel: Console: colour dummy device 80x25
May 13 23:59:44.961975 kernel: printk: console [ttyS0] enabled
May 13 23:59:44.961988 kernel: ACPI: Core revision 20230628
May 13 23:59:44.961999 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 13 23:59:44.962009 kernel: APIC: Switch to symmetric I/O mode setup
May 13 23:59:44.962020 kernel: x2apic enabled
May 13 23:59:44.962030 kernel: APIC: Switched APIC routing to: physical x2apic
May 13 23:59:44.962040 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 13 23:59:44.962051 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 13 23:59:44.962061 kernel: kvm-guest: setup PV IPIs
May 13 23:59:44.962072 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 13 23:59:44.962085 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 13 23:59:44.962094 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794746)
May 13 23:59:44.962104 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 13 23:59:44.962114 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 13 23:59:44.962124 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 13 23:59:44.962134 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 13 23:59:44.962143 kernel: Spectre V2 : Mitigation: Retpolines
May 13 23:59:44.962153 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 13 23:59:44.962163 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 13 23:59:44.962175 kernel: RETBleed: Mitigation: untrained return thunk
May 13 23:59:44.962194 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 13 23:59:44.962205 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 13 23:59:44.962215 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 13 23:59:44.962226 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 13 23:59:44.962237 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 13 23:59:44.962247 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 13 23:59:44.962256 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 13 23:59:44.962271 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 13 23:59:44.962281 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 13 23:59:44.962292 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 13 23:59:44.962304 kernel: Freeing SMP alternatives memory: 32K
May 13 23:59:44.962315 kernel: pid_max: default: 32768 minimum: 301
May 13 23:59:44.962325 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 13 23:59:44.962336 kernel: landlock: Up and running.
May 13 23:59:44.962347 kernel: SELinux: Initializing.
May 13 23:59:44.962357 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 23:59:44.962371 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 23:59:44.962381 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 13 23:59:44.962392 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 23:59:44.962402 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 23:59:44.962413 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 23:59:44.962424 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 13 23:59:44.962434 kernel: ... version:                0
May 13 23:59:44.962445 kernel: ... bit width:              48
May 13 23:59:44.962455 kernel: ... generic registers:      6
May 13 23:59:44.962470 kernel: ... value mask:             0000ffffffffffff
May 13 23:59:44.962481 kernel: ... max period:             00007fffffffffff
May 13 23:59:44.962492 kernel: ... fixed-purpose events:   0
May 13 23:59:44.962501 kernel: ... event mask:             000000000000003f
May 13 23:59:44.962512 kernel: signal: max sigframe size: 1776
May 13 23:59:44.962522 kernel: rcu: Hierarchical SRCU implementation.
May 13 23:59:44.962532 kernel: rcu: Max phase no-delay instances is 400.
May 13 23:59:44.962542 kernel: smp: Bringing up secondary CPUs ...
May 13 23:59:44.962553 kernel: smpboot: x86: Booting SMP configuration:
May 13 23:59:44.962563 kernel: .... node #0, CPUs: #1 #2 #3
May 13 23:59:44.962577 kernel: smp: Brought up 1 node, 4 CPUs
May 13 23:59:44.962587 kernel: smpboot: Max logical packages: 1
May 13 23:59:44.962597 kernel: smpboot: Total of 4 processors activated (22357.96 BogoMIPS)
May 13 23:59:44.962608 kernel: devtmpfs: initialized
May 13 23:59:44.962618 kernel: x86/mm: Memory block size: 128MB
May 13 23:59:44.962628 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
May 13 23:59:44.962638 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
May 13 23:59:44.962647 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
May 13 23:59:44.962658 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
May 13 23:59:44.962672 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes)
May 13 23:59:44.962715 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
May 13 23:59:44.962728 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 13 23:59:44.962738 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 13 23:59:44.962749 kernel: pinctrl core: initialized pinctrl subsystem
May 13 23:59:44.962759 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 13 23:59:44.962770 kernel: audit: initializing netlink subsys (disabled)
May 13 23:59:44.962780 kernel: audit: type=2000 audit(1747180784.215:1): state=initialized audit_enabled=0 res=1
May 13 23:59:44.962795 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 13 23:59:44.962806 kernel: thermal_sys: Registered thermal governor 'user_space'
May 13 23:59:44.962816 kernel: cpuidle: using governor menu
May 13 23:59:44.962827 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 13 23:59:44.962837 kernel: dca service started, version 1.12.1
May 13 23:59:44.962847 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
May 13 23:59:44.962857 kernel: PCI: Using configuration type 1 for base access
May 13 23:59:44.962868 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 13 23:59:44.962879 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 13 23:59:44.962893 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 13 23:59:44.962903 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 13 23:59:44.962914 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 13 23:59:44.962924 kernel: ACPI: Added _OSI(Module Device)
May 13 23:59:44.962934 kernel: ACPI: Added _OSI(Processor Device)
May 13 23:59:44.962945 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 13 23:59:44.962955 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 13 23:59:44.962965 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 13 23:59:44.962976 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 13 23:59:44.962990 kernel: ACPI: Interpreter enabled
May 13 23:59:44.963001 kernel: ACPI: PM: (supports S0 S3 S5)
May 13 23:59:44.963011 kernel: ACPI: Using IOAPIC for interrupt routing
May 13 23:59:44.963022 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 13 23:59:44.963032 kernel: PCI: Using E820 reservations for host bridge windows
May 13 23:59:44.963043 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 13 23:59:44.963053 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 13 23:59:44.963336 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 13 23:59:44.963516 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 13 23:59:44.963705 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 13 23:59:44.963722 kernel: PCI host bridge to bus 0000:00
May 13 23:59:44.963895 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 13 23:59:44.964048 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 13 23:59:44.964204 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 13 23:59:44.964347 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
May 13 23:59:44.964497 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
May 13 23:59:44.964647 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
May 13 23:59:44.964825 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 13 23:59:44.965006 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
May 13 23:59:44.965193 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
May 13 23:59:44.965358 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
May 13 23:59:44.965526 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
May 13 23:59:44.965712 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
May 13 23:59:44.965873 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
May 13 23:59:44.966032 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 13 23:59:44.966215 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
May 13 23:59:44.966381 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
May 13 23:59:44.966541 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
May 13 23:59:44.966734 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref]
May 13 23:59:44.966910 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
May 13 23:59:44.967073 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
May 13 23:59:44.967248 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
May 13 23:59:44.967416 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref]
May 13 23:59:44.967597 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
May 13 23:59:44.967798 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
May 13 23:59:44.967975 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
May 13 23:59:44.968145 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref]
May 13 23:59:44.968320 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
May 13 23:59:44.968489 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
May 13 23:59:44.968660 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 13 23:59:44.968935 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
May 13 23:59:44.969099 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
May 13 23:59:44.969308 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
May 13 23:59:44.969477 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
May 13 23:59:44.969634 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
May 13 23:59:44.969649 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 13 23:59:44.969661 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 13 23:59:44.969671 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 13 23:59:44.969699 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 13 23:59:44.969716 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 13 23:59:44.969726 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 13 23:59:44.969737 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 13 23:59:44.969747 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 13 23:59:44.969758 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 13 23:59:44.969768 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 13 23:59:44.969779 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 13 23:59:44.969789 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 13 23:59:44.969800 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 13 23:59:44.969814 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 13 23:59:44.969825 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 13 23:59:44.969836 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 13 23:59:44.969846 kernel: iommu: Default domain type: Translated
May 13 23:59:44.969857 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 13 23:59:44.969867 kernel: efivars: Registered efivars operations
May 13 23:59:44.969878 kernel: PCI: Using ACPI for IRQ routing
May 13 23:59:44.969889 kernel: PCI: pci_cache_line_size set to 64 bytes
May 13 23:59:44.969900 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
May 13 23:59:44.969910 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
May 13 23:59:44.969924 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff]
May 13 23:59:44.969934 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff]
May 13 23:59:44.969945 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
May 13 23:59:44.969955 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
May 13 23:59:44.969966 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff]
May 13 23:59:44.969976 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
May 13 23:59:44.970138 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 13 23:59:44.970306 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 13 23:59:44.970469 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 13 23:59:44.970484 kernel: vgaarb: loaded
May 13 23:59:44.970495 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 13 23:59:44.970505 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 13 23:59:44.970516 kernel: clocksource: Switched to clocksource kvm-clock
May 13 23:59:44.970526 kernel: VFS: Disk quotas dquot_6.6.0
May 13 23:59:44.970537 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 13 23:59:44.970547 kernel: pnp: PnP ACPI init
May 13 23:59:44.970808 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
May 13 23:59:44.970830 kernel: pnp: PnP ACPI: found 6 devices
May 13 23:59:44.970841 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 13 23:59:44.970852 kernel: NET: Registered PF_INET protocol family
May 13 23:59:44.970862 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 13 23:59:44.970894 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 13 23:59:44.970908 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 13 23:59:44.970919 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 13 23:59:44.970930 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 13 23:59:44.970944 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 13 23:59:44.970954 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 23:59:44.970969 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 23:59:44.970980 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 13 23:59:44.970991 kernel: NET: Registered PF_XDP protocol family
May 13 23:59:44.971155 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
May 13 23:59:44.971325 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
May 13 23:59:44.973037 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 13 23:59:44.973218 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 13 23:59:44.973371 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 13 23:59:44.973526 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
May 13 23:59:44.973700 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
May 13 23:59:44.973855 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
May 13 23:59:44.973871 kernel: PCI: CLS 0 bytes, default 64
May 13 23:59:44.973883 kernel: Initialise system trusted keyrings
May 13 23:59:44.973895 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 13 23:59:44.973912 kernel: Key type asymmetric registered
May 13 23:59:44.973923 kernel: Asymmetric key parser 'x509' registered
May 13 23:59:44.973934 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 13 23:59:44.973945 kernel: io scheduler mq-deadline registered
May 13 23:59:44.973956 kernel: io scheduler kyber registered
May 13 23:59:44.973968 kernel: io scheduler bfq registered
May 13 23:59:44.973979 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 13 23:59:44.973992 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 13 23:59:44.974003 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 13 23:59:44.974018 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 13 23:59:44.974033 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 13 23:59:44.974045 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 13 23:59:44.974056 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 13 23:59:44.974067 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 13 23:59:44.974079 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 13 23:59:44.974094 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 13 23:59:44.974284 kernel: rtc_cmos 00:04: RTC can wake from S4
May 13 23:59:44.974439 kernel: rtc_cmos 00:04: registered as rtc0
May 13 23:59:44.974582 kernel: rtc_cmos 00:04: setting system clock to 2025-05-13T23:59:44 UTC (1747180784)
May 13 23:59:44.974723 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
May 13 23:59:44.974735 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 13 23:59:44.974744 kernel: efifb: probing for efifb
May 13 23:59:44.974752 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
May 13 23:59:44.974766 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
May 13 23:59:44.974774 kernel: efifb: scrolling: redraw
May 13 23:59:44.974782 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
May 13 23:59:44.974790 kernel: Console: switching to colour frame buffer device 160x50
May 13 23:59:44.974798 kernel: fb0: EFI VGA frame buffer device
May 13 23:59:44.974807 kernel: pstore: Using crash dump compression: deflate
May 13 23:59:44.974816 kernel: pstore: Registered efi_pstore as persistent store backend
May 13 23:59:44.974823 kernel: NET: Registered PF_INET6 protocol family
May 13 23:59:44.974832 kernel: Segment Routing with IPv6
May 13 23:59:44.974842 kernel: In-situ OAM (IOAM) with IPv6
May 13 23:59:44.974850 kernel: NET: Registered PF_PACKET protocol family
May 13 23:59:44.974859 kernel: Key type dns_resolver registered
May 13 23:59:44.974867 kernel: IPI shorthand broadcast: enabled
May 13 23:59:44.974875 kernel: sched_clock: Marking stable (588002543, 154339195)->(796897351, -54555613)
May 13 23:59:44.974883 kernel: registered taskstats version 1
May 13 23:59:44.974891 kernel: Loading compiled-in X.509 certificates
May 13 23:59:44.974900 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 50ddd1b04864f80ac4ca221f8647fbbda919e0fd'
May 13 23:59:44.974908 kernel: Key type .fscrypt registered
May 13 23:59:44.974918 kernel: Key type fscrypt-provisioning registered
May 13 23:59:44.974926 kernel: ima: No TPM chip found, activating TPM-bypass!
May 13 23:59:44.974934 kernel: ima: Allocated hash algorithm: sha1 May 13 23:59:44.974943 kernel: ima: No architecture policies found May 13 23:59:44.974953 kernel: clk: Disabling unused clocks May 13 23:59:44.974961 kernel: Freeing unused kernel image (initmem) memory: 43480K May 13 23:59:44.974969 kernel: Write protecting the kernel read-only data: 38912k May 13 23:59:44.974978 kernel: Freeing unused kernel image (rodata/data gap) memory: 1712K May 13 23:59:44.974986 kernel: Run /init as init process May 13 23:59:44.974997 kernel: with arguments: May 13 23:59:44.975005 kernel: /init May 13 23:59:44.975013 kernel: with environment: May 13 23:59:44.975021 kernel: HOME=/ May 13 23:59:44.975029 kernel: TERM=linux May 13 23:59:44.975037 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 13 23:59:44.975046 systemd[1]: Successfully made /usr/ read-only. May 13 23:59:44.975061 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 13 23:59:44.975077 systemd[1]: Detected virtualization kvm. May 13 23:59:44.975088 systemd[1]: Detected architecture x86-64. May 13 23:59:44.975099 systemd[1]: Running in initrd. May 13 23:59:44.975110 systemd[1]: No hostname configured, using default hostname. May 13 23:59:44.975120 systemd[1]: Hostname set to . May 13 23:59:44.975128 systemd[1]: Initializing machine ID from VM UUID. May 13 23:59:44.975137 systemd[1]: Queued start job for default target initrd.target. May 13 23:59:44.975145 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 23:59:44.975157 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
May 13 23:59:44.975167 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 13 23:59:44.975176 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 13 23:59:44.975194 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 13 23:59:44.975203 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 13 23:59:44.975214 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 13 23:59:44.975226 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 13 23:59:44.975235 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 23:59:44.975244 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 13 23:59:44.975253 systemd[1]: Reached target paths.target - Path Units. May 13 23:59:44.975261 systemd[1]: Reached target slices.target - Slice Units. May 13 23:59:44.975270 systemd[1]: Reached target swap.target - Swaps. May 13 23:59:44.975278 systemd[1]: Reached target timers.target - Timer Units. May 13 23:59:44.975287 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 13 23:59:44.975296 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 13 23:59:44.975307 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 13 23:59:44.975317 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 13 23:59:44.975325 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 13 23:59:44.975334 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 13 23:59:44.975343 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
May 13 23:59:44.975351 systemd[1]: Reached target sockets.target - Socket Units. May 13 23:59:44.975360 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 13 23:59:44.975368 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 13 23:59:44.975377 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 13 23:59:44.975388 systemd[1]: Starting systemd-fsck-usr.service... May 13 23:59:44.975397 systemd[1]: Starting systemd-journald.service - Journal Service... May 13 23:59:44.975406 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 13 23:59:44.975415 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 23:59:44.975423 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 13 23:59:44.975432 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 13 23:59:44.975447 systemd[1]: Finished systemd-fsck-usr.service. May 13 23:59:44.975459 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 13 23:59:44.975471 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 13 23:59:44.975483 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 13 23:59:44.975496 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:59:44.975507 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 13 23:59:44.975516 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 23:59:44.975561 systemd-journald[192]: Collecting audit messages is disabled. May 13 23:59:44.975583 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
May 13 23:59:44.975593 systemd-journald[192]: Journal started May 13 23:59:44.975614 systemd-journald[192]: Runtime Journal (/run/log/journal/5a1f639ab94144afbf139aa41b837442) is 6M, max 48.2M, 42.2M free. May 13 23:59:44.984849 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 13 23:59:44.946851 systemd-modules-load[195]: Inserted module 'overlay' May 13 23:59:44.990786 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 13 23:59:44.990811 systemd[1]: Started systemd-journald.service - Journal Service. May 13 23:59:44.991691 kernel: Bridge firewalling registered May 13 23:59:44.992585 systemd-modules-load[195]: Inserted module 'br_netfilter' May 13 23:59:44.994533 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 13 23:59:45.000354 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 13 23:59:45.002859 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 13 23:59:45.005848 dracut-cmdline[215]: dracut-dracut-053 May 13 23:59:45.009665 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=9290c9b76db63811f0d205969a93d9b54c3ea10aed4e7b51abfb58e812a25e51 May 13 23:59:45.016582 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 13 23:59:45.019070 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 23:59:45.029901 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 13 23:59:45.073763 systemd-resolved[253]: Positive Trust Anchors: May 13 23:59:45.073779 systemd-resolved[253]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 23:59:45.073822 systemd-resolved[253]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 13 23:59:45.077008 systemd-resolved[253]: Defaulting to hostname 'linux'. May 13 23:59:45.078296 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 13 23:59:45.083896 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 13 23:59:45.120706 kernel: SCSI subsystem initialized May 13 23:59:45.129704 kernel: Loading iSCSI transport class v2.0-870. May 13 23:59:45.140701 kernel: iscsi: registered transport (tcp) May 13 23:59:45.162706 kernel: iscsi: registered transport (qla4xxx) May 13 23:59:45.162732 kernel: QLogic iSCSI HBA Driver May 13 23:59:45.216002 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 13 23:59:45.224837 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 13 23:59:45.251452 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
May 13 23:59:45.251539 kernel: device-mapper: uevent: version 1.0.3 May 13 23:59:45.251552 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 13 23:59:45.293725 kernel: raid6: avx2x4 gen() 29222 MB/s May 13 23:59:45.310701 kernel: raid6: avx2x2 gen() 29843 MB/s May 13 23:59:45.327812 kernel: raid6: avx2x1 gen() 25397 MB/s May 13 23:59:45.327837 kernel: raid6: using algorithm avx2x2 gen() 29843 MB/s May 13 23:59:45.345810 kernel: raid6: .... xor() 19500 MB/s, rmw enabled May 13 23:59:45.345837 kernel: raid6: using avx2x2 recovery algorithm May 13 23:59:45.366701 kernel: xor: automatically using best checksumming function avx May 13 23:59:45.515707 kernel: Btrfs loaded, zoned=no, fsverity=no May 13 23:59:45.530439 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 13 23:59:45.550933 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 23:59:45.573135 systemd-udevd[416]: Using default interface naming scheme 'v255'. May 13 23:59:45.581750 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 23:59:45.587811 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 13 23:59:45.603150 dracut-pre-trigger[419]: rd.md=0: removing MD RAID activation May 13 23:59:45.641164 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 13 23:59:45.654986 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 13 23:59:45.723306 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 13 23:59:45.731819 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 13 23:59:45.746797 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 13 23:59:45.750327 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
May 13 23:59:45.750418 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 23:59:45.754604 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 13 23:59:45.768720 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues May 13 23:59:45.766532 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 13 23:59:45.772906 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 13 23:59:45.778160 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 13 23:59:45.778190 kernel: GPT:9289727 != 19775487 May 13 23:59:45.778201 kernel: GPT:Alternate GPT header not at the end of the disk. May 13 23:59:45.778211 kernel: GPT:9289727 != 19775487 May 13 23:59:45.778227 kernel: GPT: Use GNU Parted to correct GPT errors. May 13 23:59:45.778237 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 23:59:45.779418 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 13 23:59:45.788703 kernel: cryptd: max_cpu_qlen set to 1000 May 13 23:59:45.802697 kernel: libata version 3.00 loaded. May 13 23:59:45.810690 kernel: ahci 0000:00:1f.2: version 3.0 May 13 23:59:45.810710 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 13 23:59:45.812707 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 13 23:59:45.810973 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 23:59:45.816475 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode May 13 23:59:45.816713 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 13 23:59:45.818202 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 13 23:59:45.819432 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 23:59:45.819859 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
May 13 23:59:45.830013 kernel: AVX2 version of gcm_enc/dec engaged. May 13 23:59:45.830034 kernel: scsi host0: ahci May 13 23:59:45.830272 kernel: AES CTR mode by8 optimization enabled May 13 23:59:45.828388 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 13 23:59:45.836880 kernel: BTRFS: device fsid 87997324-54dc-4f74-bc1a-3f18f5f2e9f7 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (466) May 13 23:59:45.836899 kernel: scsi host1: ahci May 13 23:59:45.838695 kernel: scsi host2: ahci May 13 23:59:45.839694 kernel: scsi host3: ahci May 13 23:59:45.841726 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (476) May 13 23:59:45.841752 kernel: scsi host4: ahci May 13 23:59:45.842397 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 23:59:45.846524 kernel: scsi host5: ahci May 13 23:59:45.848955 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 May 13 23:59:45.848979 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 May 13 23:59:45.848990 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 May 13 23:59:45.850698 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 May 13 23:59:45.850719 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 May 13 23:59:45.850731 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 May 13 23:59:45.885039 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 13 23:59:45.899234 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 13 23:59:45.909694 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. 
May 13 23:59:45.911263 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 13 23:59:45.921749 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 13 23:59:45.936867 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 13 23:59:45.938348 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 23:59:45.938431 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:59:45.941865 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 13 23:59:45.948842 disk-uuid[559]: Primary Header is updated. May 13 23:59:45.948842 disk-uuid[559]: Secondary Entries is updated. May 13 23:59:45.948842 disk-uuid[559]: Secondary Header is updated. May 13 23:59:45.953006 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 23:59:45.945701 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 23:59:45.948137 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 13 23:59:45.957715 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 23:59:45.966693 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:59:45.979967 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 13 23:59:46.013547 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
May 13 23:59:46.156710 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 13 23:59:46.156798 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 13 23:59:46.157707 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) May 13 23:59:46.158788 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 May 13 23:59:46.158802 kernel: ata3.00: applying bridge limits May 13 23:59:46.159905 kernel: ata3.00: configured for UDMA/100 May 13 23:59:46.160698 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 May 13 23:59:46.165695 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 13 23:59:46.165713 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 13 23:59:46.166697 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 13 23:59:46.211715 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray May 13 23:59:46.212033 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 13 23:59:46.225704 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 13 23:59:46.957725 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 23:59:46.957966 disk-uuid[561]: The operation has completed successfully. May 13 23:59:46.984526 systemd[1]: disk-uuid.service: Deactivated successfully. May 13 23:59:46.984642 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 13 23:59:47.040842 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 13 23:59:47.044592 sh[600]: Success May 13 23:59:47.058814 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" May 13 23:59:47.092411 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 13 23:59:47.103861 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 13 23:59:47.106239 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
May 13 23:59:47.120226 kernel: BTRFS info (device dm-0): first mount of filesystem 87997324-54dc-4f74-bc1a-3f18f5f2e9f7 May 13 23:59:47.120261 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 13 23:59:47.120279 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 13 23:59:47.122454 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 13 23:59:47.122474 kernel: BTRFS info (device dm-0): using free space tree May 13 23:59:47.128329 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 13 23:59:47.131221 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 13 23:59:47.143870 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 13 23:59:47.147148 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 13 23:59:47.161379 kernel: BTRFS info (device vda6): first mount of filesystem 889b472b-dd66-499b-aa0d-db984ba9faf7 May 13 23:59:47.161407 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 13 23:59:47.161418 kernel: BTRFS info (device vda6): using free space tree May 13 23:59:47.164703 kernel: BTRFS info (device vda6): auto enabling async discard May 13 23:59:47.169701 kernel: BTRFS info (device vda6): last unmount of filesystem 889b472b-dd66-499b-aa0d-db984ba9faf7 May 13 23:59:47.176029 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 13 23:59:47.182856 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
May 13 23:59:47.244326 ignition[687]: Ignition 2.20.0 May 13 23:59:47.244338 ignition[687]: Stage: fetch-offline May 13 23:59:47.244382 ignition[687]: no configs at "/usr/lib/ignition/base.d" May 13 23:59:47.244391 ignition[687]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 23:59:47.244851 ignition[687]: parsed url from cmdline: "" May 13 23:59:47.244856 ignition[687]: no config URL provided May 13 23:59:47.244862 ignition[687]: reading system config file "/usr/lib/ignition/user.ign" May 13 23:59:47.244872 ignition[687]: no config at "/usr/lib/ignition/user.ign" May 13 23:59:47.244898 ignition[687]: op(1): [started] loading QEMU firmware config module May 13 23:59:47.244904 ignition[687]: op(1): executing: "modprobe" "qemu_fw_cfg" May 13 23:59:47.254542 ignition[687]: op(1): [finished] loading QEMU firmware config module May 13 23:59:47.275138 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 13 23:59:47.288803 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 13 23:59:47.299517 ignition[687]: parsing config with SHA512: bea944a741b19598824ccf174acfb2717490dbbe45421a588ed1bab2e09e27f36477d01dcfa90a0d67834ff6408c7ac818757ce47f4934236c2de89cb1ac7355 May 13 23:59:47.303933 unknown[687]: fetched base config from "system" May 13 23:59:47.304108 unknown[687]: fetched user config from "qemu" May 13 23:59:47.305352 ignition[687]: fetch-offline: fetch-offline passed May 13 23:59:47.305452 ignition[687]: Ignition finished successfully May 13 23:59:47.309895 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 13 23:59:47.316269 systemd-networkd[785]: lo: Link UP May 13 23:59:47.316281 systemd-networkd[785]: lo: Gained carrier May 13 23:59:47.317926 systemd-networkd[785]: Enumeration completed May 13 23:59:47.318029 systemd[1]: Started systemd-networkd.service - Network Configuration. 
May 13 23:59:47.318277 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 23:59:47.318282 systemd-networkd[785]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 23:59:47.319268 systemd-networkd[785]: eth0: Link UP May 13 23:59:47.319271 systemd-networkd[785]: eth0: Gained carrier May 13 23:59:47.319278 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 23:59:47.320373 systemd[1]: Reached target network.target - Network. May 13 23:59:47.322230 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 13 23:59:47.328800 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 13 23:59:47.332750 systemd-networkd[785]: eth0: DHCPv4 address 10.0.0.99/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 13 23:59:47.348035 ignition[790]: Ignition 2.20.0 May 13 23:59:47.348046 ignition[790]: Stage: kargs May 13 23:59:47.348218 ignition[790]: no configs at "/usr/lib/ignition/base.d" May 13 23:59:47.348230 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 23:59:47.349030 ignition[790]: kargs: kargs passed May 13 23:59:47.352521 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 13 23:59:47.349073 ignition[790]: Ignition finished successfully May 13 23:59:47.357930 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
May 13 23:59:47.370764 ignition[800]: Ignition 2.20.0 May 13 23:59:47.370779 ignition[800]: Stage: disks May 13 23:59:47.370985 ignition[800]: no configs at "/usr/lib/ignition/base.d" May 13 23:59:47.370999 ignition[800]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 23:59:47.372116 ignition[800]: disks: disks passed May 13 23:59:47.374415 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 13 23:59:47.372180 ignition[800]: Ignition finished successfully May 13 23:59:47.376306 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 13 23:59:47.378530 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 13 23:59:47.380801 systemd[1]: Reached target local-fs.target - Local File Systems. May 13 23:59:47.380869 systemd[1]: Reached target sysinit.target - System Initialization. May 13 23:59:47.381264 systemd[1]: Reached target basic.target - Basic System. May 13 23:59:47.390897 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 13 23:59:47.405084 systemd-fsck[810]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 13 23:59:47.413231 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 13 23:59:47.423915 systemd[1]: Mounting sysroot.mount - /sysroot... May 13 23:59:47.531710 kernel: EXT4-fs (vda9): mounted filesystem cf173df9-f79a-4e29-be52-c2936b0d4e57 r/w with ordered data mode. Quota mode: none. May 13 23:59:47.532881 systemd[1]: Mounted sysroot.mount - /sysroot. May 13 23:59:47.534495 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 13 23:59:47.549809 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 13 23:59:47.551082 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 13 23:59:47.552741 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
May 13 23:59:47.552798 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 13 23:59:47.552831 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 13 23:59:47.560148 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 13 23:59:47.563207 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 13 23:59:47.569709 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (818) May 13 23:59:47.569778 kernel: BTRFS info (device vda6): first mount of filesystem 889b472b-dd66-499b-aa0d-db984ba9faf7 May 13 23:59:47.572003 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 13 23:59:47.572026 kernel: BTRFS info (device vda6): using free space tree May 13 23:59:47.575701 kernel: BTRFS info (device vda6): auto enabling async discard May 13 23:59:47.577097 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 13 23:59:47.600622 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory May 13 23:59:47.605775 initrd-setup-root[849]: cut: /sysroot/etc/group: No such file or directory May 13 23:59:47.610999 initrd-setup-root[856]: cut: /sysroot/etc/shadow: No such file or directory May 13 23:59:47.615962 initrd-setup-root[863]: cut: /sysroot/etc/gshadow: No such file or directory May 13 23:59:47.709908 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 13 23:59:47.717854 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 13 23:59:47.718908 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 13 23:59:47.729696 kernel: BTRFS info (device vda6): last unmount of filesystem 889b472b-dd66-499b-aa0d-db984ba9faf7 May 13 23:59:47.743255 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
May 13 23:59:47.751368 ignition[932]: INFO : Ignition 2.20.0 May 13 23:59:47.751368 ignition[932]: INFO : Stage: mount May 13 23:59:47.753254 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 23:59:47.753254 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 23:59:47.753254 ignition[932]: INFO : mount: mount passed May 13 23:59:47.753254 ignition[932]: INFO : Ignition finished successfully May 13 23:59:47.755170 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 13 23:59:47.768781 systemd[1]: Starting ignition-files.service - Ignition (files)... May 13 23:59:48.119252 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 13 23:59:48.131844 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 13 23:59:48.139154 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (944) May 13 23:59:48.139184 kernel: BTRFS info (device vda6): first mount of filesystem 889b472b-dd66-499b-aa0d-db984ba9faf7 May 13 23:59:48.139196 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 13 23:59:48.140691 kernel: BTRFS info (device vda6): using free space tree May 13 23:59:48.143695 kernel: BTRFS info (device vda6): auto enabling async discard May 13 23:59:48.144692 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 13 23:59:48.173403 ignition[961]: INFO : Ignition 2.20.0
May 13 23:59:48.173403 ignition[961]: INFO : Stage: files
May 13 23:59:48.175429 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 23:59:48.175429 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 23:59:48.175429 ignition[961]: DEBUG : files: compiled without relabeling support, skipping
May 13 23:59:48.175429 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 13 23:59:48.175429 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 13 23:59:48.182153 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 13 23:59:48.182153 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 13 23:59:48.182153 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 13 23:59:48.182153 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 13 23:59:48.182153 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
May 13 23:59:48.177748 unknown[961]: wrote ssh authorized keys file for user: core
May 13 23:59:48.216284 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 13 23:59:48.328766 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 13 23:59:48.328766 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 13 23:59:48.332610 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 13 23:59:48.332610 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 13 23:59:48.332610 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 13 23:59:48.332610 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 23:59:48.332610 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 23:59:48.332610 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 23:59:48.332610 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 23:59:48.332610 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 13 23:59:48.346569 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 13 23:59:48.346569 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 13 23:59:48.346569 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 13 23:59:48.346569 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 13 23:59:48.346569 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
May 13 23:59:48.812436 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 13 23:59:49.196964 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 13 23:59:49.196964 ignition[961]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 13 23:59:49.201094 ignition[961]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 23:59:49.201094 ignition[961]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 23:59:49.201094 ignition[961]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 13 23:59:49.201094 ignition[961]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
May 13 23:59:49.201094 ignition[961]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 13 23:59:49.201094 ignition[961]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 13 23:59:49.201094 ignition[961]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
May 13 23:59:49.201094 ignition[961]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
May 13 23:59:49.226707 ignition[961]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 13 23:59:49.231586 ignition[961]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 13 23:59:49.233666 ignition[961]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
May 13 23:59:49.233666 ignition[961]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
May 13 23:59:49.233666 ignition[961]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
May 13 23:59:49.233666 ignition[961]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
May 13 23:59:49.233666 ignition[961]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 13 23:59:49.233666 ignition[961]: INFO : files: files passed
May 13 23:59:49.233666 ignition[961]: INFO : Ignition finished successfully
May 13 23:59:49.245329 systemd[1]: Finished ignition-files.service - Ignition (files).
May 13 23:59:49.252931 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 13 23:59:49.255049 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 13 23:59:49.257051 systemd[1]: ignition-quench.service: Deactivated successfully.
May 13 23:59:49.257178 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 13 23:59:49.265094 initrd-setup-root-after-ignition[990]: grep: /sysroot/oem/oem-release: No such file or directory
May 13 23:59:49.268845 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 23:59:49.268845 initrd-setup-root-after-ignition[992]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 13 23:59:49.273626 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 23:59:49.271539 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 23:59:49.273904 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
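The Ignition "files" stage above logs each operation as op(N) with paired [started]/[finished] markers. A minimal sketch of extracting the completed file writes from such entries; the helper name is mine, and the sample lines are trimmed copies of the journal entries above:

```python
import re

# Sample "files" stage entries copied from the journal above (timestamps trimmed).
LOG = """\
ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
"""

def finished_files(text):
    # Match only completed write-file operations (op ids may be hex, e.g. op(a))
    # and capture the quoted destination path.
    pat = re.compile(r'op\((\w+)\): \[finished\] writing file "([^"]+)"')
    return [m.group(2) for m in pat.finditer(text)]
```

Running `finished_files(LOG)` over the trimmed sample returns the three destination paths in log order.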
May 13 23:59:49.284826 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 13 23:59:49.308997 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 13 23:59:49.309132 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 13 23:59:49.311533 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 13 23:59:49.313832 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 13 23:59:49.315940 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 13 23:59:49.327925 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 13 23:59:49.341530 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 23:59:49.345499 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 13 23:59:49.360392 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 13 23:59:49.363041 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 23:59:49.365734 systemd[1]: Stopped target timers.target - Timer Units.
May 13 23:59:49.367739 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 13 23:59:49.368884 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 23:59:49.371838 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 13 23:59:49.374038 systemd[1]: Stopped target basic.target - Basic System.
May 13 23:59:49.375971 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 13 23:59:49.378172 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 23:59:49.380507 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 13 23:59:49.381847 systemd-networkd[785]: eth0: Gained IPv6LL
May 13 23:59:49.382937 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 13 23:59:49.384807 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 23:59:49.388266 systemd[1]: Stopped target sysinit.target - System Initialization.
May 13 23:59:49.390369 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 13 23:59:49.392481 systemd[1]: Stopped target swap.target - Swaps.
May 13 23:59:49.394174 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 13 23:59:49.395214 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 13 23:59:49.397527 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 13 23:59:49.399744 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 23:59:49.402130 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 13 23:59:49.403074 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 23:59:49.405700 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 13 23:59:49.406721 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 13 23:59:49.409007 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 13 23:59:49.410100 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 23:59:49.412545 systemd[1]: Stopped target paths.target - Path Units.
May 13 23:59:49.414352 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 13 23:59:49.418768 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 23:59:49.422212 systemd[1]: Stopped target slices.target - Slice Units.
May 13 23:59:49.424485 systemd[1]: Stopped target sockets.target - Socket Units.
May 13 23:59:49.426961 systemd[1]: iscsid.socket: Deactivated successfully.
May 13 23:59:49.428079 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 13 23:59:49.430553 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 13 23:59:49.431664 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 23:59:49.434264 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 13 23:59:49.435773 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 23:59:49.438968 systemd[1]: ignition-files.service: Deactivated successfully.
May 13 23:59:49.440226 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 13 23:59:49.454920 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 13 23:59:49.457586 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 13 23:59:49.457805 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 23:59:49.463131 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 13 23:59:49.465437 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 13 23:59:49.466817 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 23:59:49.469838 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 13 23:59:49.471190 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 13 23:59:49.473796 ignition[1016]: INFO : Ignition 2.20.0
May 13 23:59:49.473796 ignition[1016]: INFO : Stage: umount
May 13 23:59:49.473796 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 23:59:49.473796 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 23:59:49.473796 ignition[1016]: INFO : umount: umount passed
May 13 23:59:49.473796 ignition[1016]: INFO : Ignition finished successfully
May 13 23:59:49.482892 systemd[1]: ignition-mount.service: Deactivated successfully.
May 13 23:59:49.484014 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 13 23:59:49.489958 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 13 23:59:49.491522 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 13 23:59:49.492559 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 13 23:59:49.495621 systemd[1]: Stopped target network.target - Network.
May 13 23:59:49.497378 systemd[1]: ignition-disks.service: Deactivated successfully.
May 13 23:59:49.497448 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 13 23:59:49.500442 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 13 23:59:49.500528 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 13 23:59:49.501641 systemd[1]: ignition-setup.service: Deactivated successfully.
May 13 23:59:49.501711 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 13 23:59:49.502697 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 13 23:59:49.502745 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 13 23:59:49.505933 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 13 23:59:49.512736 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 13 23:59:49.516119 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 13 23:59:49.517300 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 13 23:59:49.521811 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 13 23:59:49.523271 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 13 23:59:49.524340 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 13 23:59:49.528113 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 13 23:59:49.530759 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 13 23:59:49.530838 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 13 23:59:49.548947 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 13 23:59:49.549117 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 13 23:59:49.549220 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 23:59:49.552289 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 13 23:59:49.552372 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 13 23:59:49.555737 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 13 23:59:49.555801 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 13 23:59:49.558688 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 13 23:59:49.558746 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 23:59:49.562931 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 23:59:49.568278 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 13 23:59:49.568367 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 13 23:59:49.580073 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 13 23:59:49.580318 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 23:59:49.582525 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 13 23:59:49.582607 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 13 23:59:49.585621 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 13 23:59:49.585666 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 23:59:49.588032 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 13 23:59:49.588094 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 13 23:59:49.592730 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 13 23:59:49.592795 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 13 23:59:49.597073 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 23:59:49.597158 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 23:59:49.602585 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 13 23:59:49.603745 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 13 23:59:49.603813 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 23:59:49.607120 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 23:59:49.607178 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:59:49.612883 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 13 23:59:49.612977 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 13 23:59:49.613448 systemd[1]: network-cleanup.service: Deactivated successfully.
May 13 23:59:49.613587 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 13 23:59:49.616211 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 13 23:59:49.616329 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 13 23:59:49.759530 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 13 23:59:49.759704 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 13 23:59:49.762228 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 13 23:59:49.764394 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 13 23:59:49.764452 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 13 23:59:49.778924 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 13 23:59:49.786620 systemd[1]: Switching root.
May 13 23:59:49.820137 systemd-journald[192]: Journal stopped
May 13 23:59:51.260758 systemd-journald[192]: Received SIGTERM from PID 1 (systemd).
May 13 23:59:51.260825 kernel: SELinux: policy capability network_peer_controls=1
May 13 23:59:51.260839 kernel: SELinux: policy capability open_perms=1
May 13 23:59:51.260854 kernel: SELinux: policy capability extended_socket_class=1
May 13 23:59:51.260866 kernel: SELinux: policy capability always_check_network=0
May 13 23:59:51.260877 kernel: SELinux: policy capability cgroup_seclabel=1
May 13 23:59:51.260889 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 13 23:59:51.260900 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 13 23:59:51.260911 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 13 23:59:51.260929 kernel: audit: type=1403 audit(1747180790.269:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 13 23:59:51.260941 systemd[1]: Successfully loaded SELinux policy in 39.644ms.
May 13 23:59:51.260968 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 14.025ms.
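Between "Journal stopped" and the journal's SIGTERM under the new root, the timestamps above imply roughly a 1.4 s switch-root window. A quick sanity check, assuming the journal's "May 13 HH:MM:SS.ffffff" prefix format (the helper name is mine; `strptime` defaults the year, which cancels out in a same-day difference):

```python
from datetime import datetime

def ts(stamp):
    # Parse a journal short-timestamp prefix, e.g. "May 13 23:59:49.820137".
    return datetime.strptime(stamp, "%b %d %H:%M:%S.%f")

stopped = ts("May 13 23:59:49.820137")  # "systemd-journald[192]: Journal stopped"
sigterm = ts("May 13 23:59:51.260758")  # "Received SIGTERM from PID 1 (systemd)"
gap_s = (sigterm - stopped).total_seconds()  # elapsed time across switch-root
```

This yields a gap of about 1.44 seconds, consistent with the burst of SELinux messages that follows only after the new PID 1 comes up.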
May 13 23:59:51.260983 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 13 23:59:51.260995 systemd[1]: Detected virtualization kvm.
May 13 23:59:51.261011 systemd[1]: Detected architecture x86-64.
May 13 23:59:51.261028 systemd[1]: Detected first boot.
May 13 23:59:51.261053 systemd[1]: Initializing machine ID from VM UUID.
May 13 23:59:51.261066 zram_generator::config[1063]: No configuration found.
May 13 23:59:51.261081 kernel: Guest personality initialized and is inactive
May 13 23:59:51.261093 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
May 13 23:59:51.261113 kernel: Initialized host personality
May 13 23:59:51.261130 kernel: NET: Registered PF_VSOCK protocol family
May 13 23:59:51.261142 systemd[1]: Populated /etc with preset unit settings.
May 13 23:59:51.261154 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 13 23:59:51.261166 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 13 23:59:51.261179 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 13 23:59:51.261191 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 13 23:59:51.261203 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 13 23:59:51.261215 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 13 23:59:51.261230 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 13 23:59:51.261243 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 13 23:59:51.261255 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 13 23:59:51.261267 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 13 23:59:51.261280 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 13 23:59:51.261292 systemd[1]: Created slice user.slice - User and Session Slice.
May 13 23:59:51.261305 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 23:59:51.261317 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 23:59:51.261332 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 13 23:59:51.261344 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 13 23:59:51.261358 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 13 23:59:51.261371 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 13 23:59:51.261383 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 13 23:59:51.261395 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 23:59:51.261407 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 13 23:59:51.261422 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 13 23:59:51.261437 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 13 23:59:51.261450 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 13 23:59:51.261462 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 23:59:51.261474 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 23:59:51.261487 systemd[1]: Reached target slices.target - Slice Units.
May 13 23:59:51.261499 systemd[1]: Reached target swap.target - Swaps.
May 13 23:59:51.261511 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 13 23:59:51.261523 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 13 23:59:51.261536 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 13 23:59:51.261550 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 13 23:59:51.261563 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 13 23:59:51.261575 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 23:59:51.261587 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 13 23:59:51.261600 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 13 23:59:51.261612 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 13 23:59:51.261625 systemd[1]: Mounting media.mount - External Media Directory...
May 13 23:59:51.261637 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 23:59:51.261649 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 13 23:59:51.261664 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 13 23:59:51.261695 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 13 23:59:51.261714 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 13 23:59:51.261730 systemd[1]: Reached target machines.target - Containers.
May 13 23:59:51.261744 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 13 23:59:51.261756 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 23:59:51.261769 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 13 23:59:51.261781 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 13 23:59:51.261797 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 23:59:51.261810 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 13 23:59:51.261822 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 23:59:51.261834 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 13 23:59:51.261846 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 23:59:51.261859 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 13 23:59:51.261871 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 13 23:59:51.261888 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 13 23:59:51.261900 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 13 23:59:51.261914 systemd[1]: Stopped systemd-fsck-usr.service.
May 13 23:59:51.261928 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 23:59:51.261941 systemd[1]: Starting systemd-journald.service - Journal Service...
May 13 23:59:51.261953 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 13 23:59:51.261967 kernel: loop: module loaded
May 13 23:59:51.261979 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 13 23:59:51.261991 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 13 23:59:51.262004 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 13 23:59:51.262016 kernel: fuse: init (API version 7.39)
May 13 23:59:51.262027 kernel: ACPI: bus type drm_connector registered
May 13 23:59:51.262049 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 23:59:51.262062 systemd[1]: verity-setup.service: Deactivated successfully.
May 13 23:59:51.262074 systemd[1]: Stopped verity-setup.service.
May 13 23:59:51.262090 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 23:59:51.262102 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 13 23:59:51.262114 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 13 23:59:51.262129 systemd[1]: Mounted media.mount - External Media Directory.
May 13 23:59:51.262161 systemd-journald[1138]: Collecting audit messages is disabled.
May 13 23:59:51.262184 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 13 23:59:51.262197 systemd-journald[1138]: Journal started
May 13 23:59:51.262222 systemd-journald[1138]: Runtime Journal (/run/log/journal/5a1f639ab94144afbf139aa41b837442) is 6M, max 48.2M, 42.2M free.
May 13 23:59:50.987284 systemd[1]: Queued start job for default target multi-user.target.
May 13 23:59:51.002275 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 13 23:59:51.002873 systemd[1]: systemd-journald.service: Deactivated successfully.
May 13 23:59:51.264707 systemd[1]: Started systemd-journald.service - Journal Service.
May 13 23:59:51.265810 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 13 23:59:51.267344 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 13 23:59:51.268971 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
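systemd-journald reports its runtime journal usage above in the "is XM, max YM, ZM free" form. A small parser sketch for that line; the function name and returned field order are mine, and the sample line is copied from the journal above:

```python
import re

# The runtime-journal size report copied from the log above.
LINE = ("systemd-journald[1138]: Runtime Journal "
        "(/run/log/journal/5a1f639ab94144afbf139aa41b837442) "
        "is 6M, max 48.2M, 42.2M free.")

def journal_sizes(line):
    # Extract (used, max, free) in megabytes from journald's size report.
    m = re.search(r"is ([\d.]+)M, max ([\d.]+)M, ([\d.]+)M free", line)
    used, cap, free = m.groups()
    return float(used), float(cap), float(free)
```

For the line above this returns roughly (6.0, 48.2, 42.2), i.e. the 6 MB runtime journal is well under its 48.2 MB cap.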
May 13 23:59:51.270968 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 13 23:59:51.271260 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 13 23:59:51.273127 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 23:59:51.273401 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 23:59:51.275182 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 23:59:51.275453 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 13 23:59:51.277356 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 23:59:51.277620 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 23:59:51.279782 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 13 23:59:51.280058 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 13 23:59:51.282647 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 23:59:51.282932 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 23:59:51.284886 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 13 23:59:51.286906 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 13 23:59:51.289096 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 13 23:59:51.291210 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 13 23:59:51.293427 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 13 23:59:51.313052 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 13 23:59:51.324878 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 13 23:59:51.328173 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 13 23:59:51.329564 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 13 23:59:51.329600 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 23:59:51.332230 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 13 23:59:51.340582 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 13 23:59:51.343954 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 13 23:59:51.345292 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 23:59:51.349343 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 13 23:59:51.352139 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 13 23:59:51.353529 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 23:59:51.355010 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 13 23:59:51.356241 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 13 23:59:51.358627 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 23:59:51.362375 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 13 23:59:51.366062 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 13 23:59:51.375992 systemd-journald[1138]: Time spent on flushing to /var/log/journal/5a1f639ab94144afbf139aa41b837442 is 21.619ms for 1056 entries.
May 13 23:59:51.375992 systemd-journald[1138]: System Journal (/var/log/journal/5a1f639ab94144afbf139aa41b837442) is 8M, max 195.6M, 187.6M free.
May 13 23:59:51.417556 systemd-journald[1138]: Received client request to flush runtime journal.
May 13 23:59:51.417597 kernel: loop0: detected capacity change from 0 to 138176
May 13 23:59:51.371989 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 23:59:51.374442 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 13 23:59:51.378099 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 13 23:59:51.380897 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 13 23:59:51.404188 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 13 23:59:51.406169 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 13 23:59:51.409226 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 13 23:59:51.413345 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 13 23:59:51.420480 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 13 23:59:51.422610 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 23:59:51.426010 udevadm[1191]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 13 23:59:51.455953 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 13 23:59:51.463885 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 13 23:59:51.469699 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 13 23:59:51.485886 systemd-tmpfiles[1199]: ACLs are not supported, ignoring.
May 13 23:59:51.485907 systemd-tmpfiles[1199]: ACLs are not supported, ignoring.
May 13 23:59:51.492202 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 23:59:51.494358 kernel: loop1: detected capacity change from 0 to 147912
May 13 23:59:51.574227 kernel: loop2: detected capacity change from 0 to 210664
May 13 23:59:51.619702 kernel: loop3: detected capacity change from 0 to 138176
May 13 23:59:51.654713 kernel: loop4: detected capacity change from 0 to 147912
May 13 23:59:51.666706 kernel: loop5: detected capacity change from 0 to 210664
May 13 23:59:51.711391 (sd-merge)[1207]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 13 23:59:51.712135 (sd-merge)[1207]: Merged extensions into '/usr'.
May 13 23:59:51.719220 systemd[1]: Reload requested from client PID 1183 ('systemd-sysext') (unit systemd-sysext.service)...
May 13 23:59:51.719244 systemd[1]: Reloading...
May 13 23:59:51.788521 zram_generator::config[1236]: No configuration found.
May 13 23:59:51.861736 ldconfig[1178]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 13 23:59:51.941358 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 23:59:52.009306 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 13 23:59:52.009708 systemd[1]: Reloading finished in 289 ms.
May 13 23:59:52.031832 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 13 23:59:52.033840 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 13 23:59:52.035878 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 13 23:59:52.057035 systemd[1]: Starting ensure-sysext.service...
May 13 23:59:52.059766 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 13 23:59:52.075711 systemd[1]: Reload requested from client PID 1274 ('systemctl') (unit ensure-sysext.service)...
May 13 23:59:52.075734 systemd[1]: Reloading...
May 13 23:59:52.084808 systemd-tmpfiles[1275]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 13 23:59:52.085198 systemd-tmpfiles[1275]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 13 23:59:52.086202 systemd-tmpfiles[1275]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 13 23:59:52.086573 systemd-tmpfiles[1275]: ACLs are not supported, ignoring.
May 13 23:59:52.086773 systemd-tmpfiles[1275]: ACLs are not supported, ignoring.
May 13 23:59:52.091745 systemd-tmpfiles[1275]: Detected autofs mount point /boot during canonicalization of boot.
May 13 23:59:52.091848 systemd-tmpfiles[1275]: Skipping /boot
May 13 23:59:52.108575 systemd-tmpfiles[1275]: Detected autofs mount point /boot during canonicalization of boot.
May 13 23:59:52.108592 systemd-tmpfiles[1275]: Skipping /boot
May 13 23:59:52.150704 zram_generator::config[1307]: No configuration found.
May 13 23:59:52.290496 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 23:59:52.372881 systemd[1]: Reloading finished in 296 ms.
May 13 23:59:52.387873 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 13 23:59:52.407835 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 23:59:52.427099 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 13 23:59:52.430042 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 13 23:59:52.432587 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 13 23:59:52.437600 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 13 23:59:52.442924 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 23:59:52.445701 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 13 23:59:52.449962 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 23:59:52.450190 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 23:59:52.451921 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 23:59:52.456935 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 23:59:52.462081 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 23:59:52.463451 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 23:59:52.463561 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 23:59:52.467757 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 13 23:59:52.469067 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 23:59:52.470459 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 23:59:52.470746 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 23:59:52.472496 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 23:59:52.472727 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 23:59:52.475080 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 23:59:52.475278 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 23:59:52.480635 augenrules[1372]: No rules
May 13 23:59:52.484153 systemd[1]: audit-rules.service: Deactivated successfully.
May 13 23:59:52.484429 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 13 23:59:52.486151 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 13 23:59:52.489421 systemd-udevd[1353]: Using default interface naming scheme 'v255'.
May 13 23:59:52.498632 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 13 23:59:52.504760 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 23:59:52.511005 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 13 23:59:52.512833 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 23:59:52.516896 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 23:59:52.519233 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 13 23:59:52.523417 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 23:59:52.531885 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 23:59:52.533393 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 23:59:52.534849 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 23:59:52.536655 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 13 23:59:52.537765 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 23:59:52.539161 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 23:59:52.542985 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 13 23:59:52.550462 augenrules[1383]: /sbin/augenrules: No change
May 13 23:59:52.545336 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 23:59:52.546716 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 23:59:52.548592 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 23:59:52.548885 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 13 23:59:52.552436 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 23:59:52.552648 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 23:59:52.554590 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 23:59:52.554830 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 23:59:52.556315 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 13 23:59:52.558564 augenrules[1428]: No rules
May 13 23:59:52.562269 systemd[1]: Finished ensure-sysext.service.
May 13 23:59:52.563643 systemd[1]: audit-rules.service: Deactivated successfully.
May 13 23:59:52.564061 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 13 23:59:52.565563 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 13 23:59:52.598997 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 23:59:52.600168 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 23:59:52.600240 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 13 23:59:52.603551 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 13 23:59:52.604737 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 13 23:59:52.625703 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1410)
May 13 23:59:52.639500 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 13 23:59:52.674086 systemd-resolved[1350]: Positive Trust Anchors:
May 13 23:59:52.674113 systemd-resolved[1350]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 23:59:52.674144 systemd-resolved[1350]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 13 23:59:52.682246 systemd-resolved[1350]: Defaulting to hostname 'linux'.
May 13 23:59:52.684190 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 13 23:59:52.689961 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 13 23:59:52.691774 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 13 23:59:52.695262 systemd-networkd[1443]: lo: Link UP
May 13 23:59:52.695274 systemd-networkd[1443]: lo: Gained carrier
May 13 23:59:52.701698 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
May 13 23:59:52.697277 systemd-networkd[1443]: Enumeration completed
May 13 23:59:52.701984 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 13 23:59:52.703066 systemd-networkd[1443]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 23:59:52.703078 systemd-networkd[1443]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 23:59:52.703424 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 13 23:59:52.704315 systemd-networkd[1443]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 23:59:52.704346 systemd-networkd[1443]: eth0: Link UP
May 13 23:59:52.704350 systemd-networkd[1443]: eth0: Gained carrier
May 13 23:59:52.704361 systemd-networkd[1443]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 23:59:52.709355 systemd[1]: Reached target network.target - Network.
May 13 23:59:52.710200 kernel: ACPI: button: Power Button [PWRF]
May 13 23:59:52.713725 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 13 23:59:52.716090 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 13 23:59:52.721519 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
May 13 23:59:52.724615 systemd-networkd[1443]: eth0: DHCPv4 address 10.0.0.99/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 13 23:59:52.732600 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 13 23:59:52.736115 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
May 13 23:59:52.740098 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
May 13 23:59:52.740338 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
May 13 23:59:52.740593 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 13 23:59:52.741590 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 13 23:59:52.748086 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 13 23:59:52.749940 systemd[1]: Reached target time-set.target - System Time Set.
May 13 23:59:52.750074 systemd-timesyncd[1444]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 13 23:59:52.750385 systemd-timesyncd[1444]: Initial clock synchronization to Tue 2025-05-13 23:59:53.118279 UTC.
May 13 23:59:52.773761 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 23:59:52.785317 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 23:59:52.785630 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:59:52.844098 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 23:59:52.860782 kernel: mousedev: PS/2 mouse device common for all mice
May 13 23:59:52.874185 kernel: kvm_amd: TSC scaling supported
May 13 23:59:52.874231 kernel: kvm_amd: Nested Virtualization enabled
May 13 23:59:52.874279 kernel: kvm_amd: Nested Paging enabled
May 13 23:59:52.874299 kernel: kvm_amd: LBR virtualization supported
May 13 23:59:52.874913 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
May 13 23:59:52.875978 kernel: kvm_amd: Virtual GIF supported
May 13 23:59:52.897726 kernel: EDAC MC: Ver: 3.0.0
May 13 23:59:52.911507 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:59:52.929036 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 13 23:59:52.940122 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 13 23:59:52.949868 lvm[1476]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 13 23:59:52.981921 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 13 23:59:52.983507 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 13 23:59:52.984625 systemd[1]: Reached target sysinit.target - System Initialization.
May 13 23:59:52.985791 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 13 23:59:52.987053 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 13 23:59:52.988492 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 13 23:59:52.989763 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 13 23:59:52.991050 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 13 23:59:52.992304 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 13 23:59:52.992330 systemd[1]: Reached target paths.target - Path Units.
May 13 23:59:52.993244 systemd[1]: Reached target timers.target - Timer Units.
May 13 23:59:52.995020 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 13 23:59:52.997822 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 13 23:59:53.001416 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 13 23:59:53.002921 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 13 23:59:53.004277 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 13 23:59:53.008982 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 13 23:59:53.010787 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 13 23:59:53.013330 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 13 23:59:53.015115 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 13 23:59:53.016342 systemd[1]: Reached target sockets.target - Socket Units.
May 13 23:59:53.017353 systemd[1]: Reached target basic.target - Basic System.
May 13 23:59:53.018398 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 13 23:59:53.018433 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 13 23:59:53.019564 systemd[1]: Starting containerd.service - containerd container runtime...
May 13 23:59:53.021869 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 13 23:59:53.024827 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 13 23:59:53.025931 lvm[1480]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 13 23:59:53.030318 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 13 23:59:53.031902 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 13 23:59:53.034938 jq[1483]: false
May 13 23:59:53.035948 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 13 23:59:53.045111 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 13 23:59:53.049788 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 13 23:59:53.053363 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 13 23:59:53.059197 extend-filesystems[1484]: Found loop3
May 13 23:59:53.060570 extend-filesystems[1484]: Found loop4
May 13 23:59:53.060570 extend-filesystems[1484]: Found loop5
May 13 23:59:53.060570 extend-filesystems[1484]: Found sr0
May 13 23:59:53.060570 extend-filesystems[1484]: Found vda
May 13 23:59:53.060570 extend-filesystems[1484]: Found vda1
May 13 23:59:53.060570 extend-filesystems[1484]: Found vda2
May 13 23:59:53.060570 extend-filesystems[1484]: Found vda3
May 13 23:59:53.060570 extend-filesystems[1484]: Found usr
May 13 23:59:53.060570 extend-filesystems[1484]: Found vda4
May 13 23:59:53.060570 extend-filesystems[1484]: Found vda6
May 13 23:59:53.060570 extend-filesystems[1484]: Found vda7
May 13 23:59:53.060570 extend-filesystems[1484]: Found vda9
May 13 23:59:53.060570 extend-filesystems[1484]: Checking size of /dev/vda9
May 13 23:59:53.064970 systemd[1]: Starting systemd-logind.service - User Login Management...
May 13 23:59:53.060608 dbus-daemon[1482]: [system] SELinux support is enabled
May 13 23:59:53.066651 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 13 23:59:53.067448 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 13 23:59:53.078166 extend-filesystems[1484]: Resized partition /dev/vda9
May 13 23:59:53.078976 systemd[1]: Starting update-engine.service - Update Engine...
May 13 23:59:53.081507 extend-filesystems[1503]: resize2fs 1.47.1 (20-May-2024)
May 13 23:59:53.089790 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 13 23:59:53.088511 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 13 23:59:53.091144 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 13 23:59:53.093394 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1408)
May 13 23:59:53.096266 jq[1504]: true
May 13 23:59:53.098691 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 13 23:59:53.111115 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 13 23:59:53.111388 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 13 23:59:53.111751 systemd[1]: motdgen.service: Deactivated successfully.
May 13 23:59:53.112013 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 13 23:59:53.116212 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 13 23:59:53.116467 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 13 23:59:53.125673 update_engine[1498]: I20250513 23:59:53.125372 1498 main.cc:92] Flatcar Update Engine starting
May 13 23:59:53.125968 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 13 23:59:53.135706 (ntainerd)[1514]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 13 23:59:53.147859 update_engine[1498]: I20250513 23:59:53.137660 1498 update_check_scheduler.cc:74] Next update check in 4m24s
May 13 23:59:53.147923 jq[1509]: true
May 13 23:59:53.152961 extend-filesystems[1503]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 13 23:59:53.152961 extend-filesystems[1503]: old_desc_blocks = 1, new_desc_blocks = 1
May 13 23:59:53.152961 extend-filesystems[1503]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 13 23:59:53.157817 extend-filesystems[1484]: Resized filesystem in /dev/vda9
May 13 23:59:53.153423 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 13 23:59:53.153709 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 13 23:59:53.166010 systemd-logind[1492]: Watching system buttons on /dev/input/event1 (Power Button)
May 13 23:59:53.166045 systemd-logind[1492]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 13 23:59:53.166931 systemd-logind[1492]: New seat seat0.
May 13 23:59:53.170201 systemd[1]: Started systemd-logind.service - User Login Management.
May 13 23:59:53.172028 tar[1508]: linux-amd64/helm
May 13 23:59:53.178250 systemd[1]: Started update-engine.service - Update Engine.
May 13 23:59:53.186428 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 13 23:59:53.186598 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 13 23:59:53.188895 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 13 23:59:53.189013 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 13 23:59:53.201020 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 13 23:59:53.225749 bash[1537]: Updated "/home/core/.ssh/authorized_keys"
May 13 23:59:53.226307 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 13 23:59:53.229541 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 13 23:59:53.242934 locksmithd[1538]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 13 23:59:53.357191 containerd[1514]: time="2025-05-13T23:59:53.357002880Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
May 13 23:59:53.384009 containerd[1514]: time="2025-05-13T23:59:53.383935081Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 13 23:59:53.386552 containerd[1514]: time="2025-05-13T23:59:53.386480924Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 13 23:59:53.386552 containerd[1514]: time="2025-05-13T23:59:53.386537897Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 13 23:59:53.386657 containerd[1514]: time="2025-05-13T23:59:53.386561542Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 13 23:59:53.386851 containerd[1514]: time="2025-05-13T23:59:53.386821127Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
May 13 23:59:53.386877 containerd[1514]: time="2025-05-13T23:59:53.386851102Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
May 13 23:59:53.386982 containerd[1514]: time="2025-05-13T23:59:53.386959798Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
May 13 23:59:53.387015 containerd[1514]: time="2025-05-13T23:59:53.386981618Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 13 23:59:53.387345 containerd[1514]: time="2025-05-13T23:59:53.387307956Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 13 23:59:53.387345 containerd[1514]: time="2025-05-13T23:59:53.387335006Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 13 23:59:53.387397 containerd[1514]: time="2025-05-13T23:59:53.387354742Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
May 13 23:59:53.387397 containerd[1514]: time="2025-05-13T23:59:53.387368450Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 13 23:59:53.387518 containerd[1514]: time="2025-05-13T23:59:53.387490771Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 13 23:59:53.387845 containerd[1514]: time="2025-05-13T23:59:53.387815295Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 13 23:59:53.388071 containerd[1514]: time="2025-05-13T23:59:53.388032717Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 13 23:59:53.388071 containerd[1514]: time="2025-05-13T23:59:53.388056392Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 13 23:59:53.388218 containerd[1514]: time="2025-05-13T23:59:53.388196299Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 13 23:59:53.388294 containerd[1514]: time="2025-05-13T23:59:53.388275617Z" level=info msg="metadata content store policy set" policy=shared
May 13 23:59:53.395725 containerd[1514]: time="2025-05-13T23:59:53.395174123Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 13 23:59:53.395725 containerd[1514]: time="2025-05-13T23:59:53.395243505Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 13 23:59:53.395725 containerd[1514]: time="2025-05-13T23:59:53.395260442Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
May 13 23:59:53.395725 containerd[1514]: time="2025-05-13T23:59:53.395276027Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
May 13 23:59:53.395725 containerd[1514]: time="2025-05-13T23:59:53.395290302Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 13 23:59:53.395725 containerd[1514]: time="2025-05-13T23:59:53.395481008Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 13 23:59:53.395959 containerd[1514]: time="2025-05-13T23:59:53.395940544Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 13 23:59:53.396113 containerd[1514]: time="2025-05-13T23:59:53.396097314Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
May 13 23:59:53.396169 containerd[1514]: time="2025-05-13T23:59:53.396157076Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
May 13 23:59:53.396229 containerd[1514]: time="2025-05-13T23:59:53.396216019Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
May 13 23:59:53.396278 containerd[1514]: time="2025-05-13T23:59:53.396266190Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 13 23:59:53.396326 containerd[1514]: time="2025-05-13T23:59:53.396314852Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 13 23:59:53.396372 containerd[1514]: time="2025-05-13T23:59:53.396361574Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 13 23:59:53.396420 containerd[1514]: time="2025-05-13T23:59:53.396409021Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 13 23:59:53.396485 containerd[1514]: time="2025-05-13T23:59:53.396471884Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..."
type=io.containerd.service.v1 May 13 23:59:53.396545 containerd[1514]: time="2025-05-13T23:59:53.396532064Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 13 23:59:53.396593 containerd[1514]: time="2025-05-13T23:59:53.396582298Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 13 23:59:53.396639 containerd[1514]: time="2025-05-13T23:59:53.396628937Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 13 23:59:53.396696 containerd[1514]: time="2025-05-13T23:59:53.396684851Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 13 23:59:53.396762 containerd[1514]: time="2025-05-13T23:59:53.396750639Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 13 23:59:53.396824 containerd[1514]: time="2025-05-13T23:59:53.396811951Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 13 23:59:53.396887 containerd[1514]: time="2025-05-13T23:59:53.396875055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 13 23:59:53.396936 containerd[1514]: time="2025-05-13T23:59:53.396924357Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 13 23:59:53.396984 containerd[1514]: time="2025-05-13T23:59:53.396972694Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 13 23:59:53.397036 containerd[1514]: time="2025-05-13T23:59:53.397025810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 13 23:59:53.397098 containerd[1514]: time="2025-05-13T23:59:53.397085025Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 May 13 23:59:53.397148 containerd[1514]: time="2025-05-13T23:59:53.397137188Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 13 23:59:53.397198 containerd[1514]: time="2025-05-13T23:59:53.397187590Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 13 23:59:53.397247 containerd[1514]: time="2025-05-13T23:59:53.397234795Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 13 23:59:53.397294 containerd[1514]: time="2025-05-13T23:59:53.397283551Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 13 23:59:53.397350 containerd[1514]: time="2025-05-13T23:59:53.397338197Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 13 23:59:53.397400 containerd[1514]: time="2025-05-13T23:59:53.397389249Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 13 23:59:53.397469 containerd[1514]: time="2025-05-13T23:59:53.397456367Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 13 23:59:53.397535 containerd[1514]: time="2025-05-13T23:59:53.397522941Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 13 23:59:53.397584 containerd[1514]: time="2025-05-13T23:59:53.397572902Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 13 23:59:53.397690 containerd[1514]: time="2025-05-13T23:59:53.397673778Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 13 23:59:53.397784 containerd[1514]: time="2025-05-13T23:59:53.397768199Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 13 23:59:53.397831 containerd[1514]: time="2025-05-13T23:59:53.397820015Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 13 23:59:53.399320 containerd[1514]: time="2025-05-13T23:59:53.397879850Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 13 23:59:53.399320 containerd[1514]: time="2025-05-13T23:59:53.397894397Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 13 23:59:53.399320 containerd[1514]: time="2025-05-13T23:59:53.397908357Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 13 23:59:53.399320 containerd[1514]: time="2025-05-13T23:59:53.397919425Z" level=info msg="NRI interface is disabled by configuration." May 13 23:59:53.399320 containerd[1514]: time="2025-05-13T23:59:53.397929528Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 13 23:59:53.399441 containerd[1514]: time="2025-05-13T23:59:53.398198211Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 13 23:59:53.399441 containerd[1514]: time="2025-05-13T23:59:53.398238603Z" level=info msg="Connect containerd service" May 13 23:59:53.399441 containerd[1514]: time="2025-05-13T23:59:53.398270057Z" level=info msg="using legacy CRI server" May 13 23:59:53.399441 containerd[1514]: time="2025-05-13T23:59:53.398276533Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 13 23:59:53.399441 containerd[1514]: time="2025-05-13T23:59:53.398393560Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 13 23:59:53.399441 containerd[1514]: time="2025-05-13T23:59:53.399020777Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 23:59:53.399441 containerd[1514]: time="2025-05-13T23:59:53.399212312Z" level=info msg="Start subscribing containerd event" May 13 23:59:53.399441 containerd[1514]: time="2025-05-13T23:59:53.399391385Z" level=info msg="Start recovering state" May 13 23:59:53.399738 containerd[1514]: time="2025-05-13T23:59:53.399468303Z" level=info msg="Start event monitor" May 13 23:59:53.399738 containerd[1514]: time="2025-05-13T23:59:53.399486550Z" level=info msg="Start 
snapshots syncer" May 13 23:59:53.399738 containerd[1514]: time="2025-05-13T23:59:53.399497324Z" level=info msg="Start cni network conf syncer for default" May 13 23:59:53.399738 containerd[1514]: time="2025-05-13T23:59:53.399504892Z" level=info msg="Start streaming server" May 13 23:59:53.399738 containerd[1514]: time="2025-05-13T23:59:53.399288811Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 13 23:59:53.399738 containerd[1514]: time="2025-05-13T23:59:53.399727900Z" level=info msg=serving... address=/run/containerd/containerd.sock May 13 23:59:53.400467 containerd[1514]: time="2025-05-13T23:59:53.400440492Z" level=info msg="containerd successfully booted in 0.045233s" May 13 23:59:53.400564 systemd[1]: Started containerd.service - containerd container runtime. May 13 23:59:53.402999 sshd_keygen[1505]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 13 23:59:53.434934 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 13 23:59:53.444068 systemd[1]: Starting issuegen.service - Generate /run/issue... May 13 23:59:53.454919 systemd[1]: issuegen.service: Deactivated successfully. May 13 23:59:53.455415 systemd[1]: Finished issuegen.service - Generate /run/issue. May 13 23:59:53.459001 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 13 23:59:53.486206 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 13 23:59:53.497116 systemd[1]: Started getty@tty1.service - Getty on tty1. May 13 23:59:53.499795 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 13 23:59:53.501728 systemd[1]: Reached target getty.target - Login Prompts. May 13 23:59:53.562031 tar[1508]: linux-amd64/LICENSE May 13 23:59:53.562129 tar[1508]: linux-amd64/README.md May 13 23:59:53.577045 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
May 13 23:59:54.122525 systemd-networkd[1443]: eth0: Gained IPv6LL May 13 23:59:54.125932 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 13 23:59:54.128093 systemd[1]: Reached target network-online.target - Network is Online. May 13 23:59:54.139070 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 13 23:59:54.141992 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:59:54.144460 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 13 23:59:54.164902 systemd[1]: coreos-metadata.service: Deactivated successfully. May 13 23:59:54.165187 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 13 23:59:54.167647 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 13 23:59:54.172416 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 13 23:59:54.829803 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:59:54.831587 systemd[1]: Reached target multi-user.target - Multi-User System. May 13 23:59:54.832905 systemd[1]: Startup finished in 738ms (kernel) + 5.545s (initrd) + 4.602s (userspace) = 10.886s. 
May 13 23:59:54.835649 (kubelet)[1596]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 23:59:55.303696 kubelet[1596]: E0513 23:59:55.303550 1596 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 23:59:55.308092 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 23:59:55.308330 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 23:59:55.308871 systemd[1]: kubelet.service: Consumed 980ms CPU time, 243.5M memory peak. May 13 23:59:58.232160 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 13 23:59:58.233451 systemd[1]: Started sshd@0-10.0.0.99:22-10.0.0.1:44892.service - OpenSSH per-connection server daemon (10.0.0.1:44892). May 13 23:59:58.281619 sshd[1610]: Accepted publickey for core from 10.0.0.1 port 44892 ssh2: RSA SHA256:2Vys6akM3bwlRlykLnopippME/f1tLQVgpTw56u59EA May 13 23:59:58.283567 sshd-session[1610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:59:58.295499 systemd-logind[1492]: New session 1 of user core. May 13 23:59:58.297058 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 13 23:59:58.307008 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 13 23:59:58.318549 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 13 23:59:58.336977 systemd[1]: Starting user@500.service - User Manager for UID 500... 
May 13 23:59:58.340145 (systemd)[1614]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 13 23:59:58.342718 systemd-logind[1492]: New session c1 of user core. May 13 23:59:58.492045 systemd[1614]: Queued start job for default target default.target. May 13 23:59:58.508330 systemd[1614]: Created slice app.slice - User Application Slice. May 13 23:59:58.508361 systemd[1614]: Reached target paths.target - Paths. May 13 23:59:58.508416 systemd[1614]: Reached target timers.target - Timers. May 13 23:59:58.510144 systemd[1614]: Starting dbus.socket - D-Bus User Message Bus Socket... May 13 23:59:58.521058 systemd[1614]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 13 23:59:58.521193 systemd[1614]: Reached target sockets.target - Sockets. May 13 23:59:58.521237 systemd[1614]: Reached target basic.target - Basic System. May 13 23:59:58.521279 systemd[1614]: Reached target default.target - Main User Target. May 13 23:59:58.521325 systemd[1614]: Startup finished in 171ms. May 13 23:59:58.521671 systemd[1]: Started user@500.service - User Manager for UID 500. May 13 23:59:58.535577 systemd[1]: Started session-1.scope - Session 1 of User core. May 13 23:59:58.607933 systemd[1]: Started sshd@1-10.0.0.99:22-10.0.0.1:44898.service - OpenSSH per-connection server daemon (10.0.0.1:44898). May 13 23:59:58.648054 sshd[1625]: Accepted publickey for core from 10.0.0.1 port 44898 ssh2: RSA SHA256:2Vys6akM3bwlRlykLnopippME/f1tLQVgpTw56u59EA May 13 23:59:58.649754 sshd-session[1625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:59:58.654401 systemd-logind[1492]: New session 2 of user core. May 13 23:59:58.670928 systemd[1]: Started session-2.scope - Session 2 of User core. 
May 13 23:59:58.735774 sshd[1627]: Connection closed by 10.0.0.1 port 44898 May 13 23:59:58.736226 sshd-session[1625]: pam_unix(sshd:session): session closed for user core May 13 23:59:58.752280 systemd[1]: sshd@1-10.0.0.99:22-10.0.0.1:44898.service: Deactivated successfully. May 13 23:59:58.754499 systemd[1]: session-2.scope: Deactivated successfully. May 13 23:59:58.756266 systemd-logind[1492]: Session 2 logged out. Waiting for processes to exit. May 13 23:59:58.766014 systemd[1]: Started sshd@2-10.0.0.99:22-10.0.0.1:44902.service - OpenSSH per-connection server daemon (10.0.0.1:44902). May 13 23:59:58.767086 systemd-logind[1492]: Removed session 2. May 13 23:59:58.801278 sshd[1632]: Accepted publickey for core from 10.0.0.1 port 44902 ssh2: RSA SHA256:2Vys6akM3bwlRlykLnopippME/f1tLQVgpTw56u59EA May 13 23:59:58.802787 sshd-session[1632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:59:58.807964 systemd-logind[1492]: New session 3 of user core. May 13 23:59:58.817966 systemd[1]: Started session-3.scope - Session 3 of User core. May 13 23:59:58.869052 sshd[1635]: Connection closed by 10.0.0.1 port 44902 May 13 23:59:58.869425 sshd-session[1632]: pam_unix(sshd:session): session closed for user core May 13 23:59:58.881593 systemd[1]: sshd@2-10.0.0.99:22-10.0.0.1:44902.service: Deactivated successfully. May 13 23:59:58.883328 systemd[1]: session-3.scope: Deactivated successfully. May 13 23:59:58.885355 systemd-logind[1492]: Session 3 logged out. Waiting for processes to exit. May 13 23:59:58.895931 systemd[1]: Started sshd@3-10.0.0.99:22-10.0.0.1:44910.service - OpenSSH per-connection server daemon (10.0.0.1:44910). May 13 23:59:58.896848 systemd-logind[1492]: Removed session 3. 
May 13 23:59:58.929279 sshd[1640]: Accepted publickey for core from 10.0.0.1 port 44910 ssh2: RSA SHA256:2Vys6akM3bwlRlykLnopippME/f1tLQVgpTw56u59EA May 13 23:59:58.930868 sshd-session[1640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:59:58.936094 systemd-logind[1492]: New session 4 of user core. May 13 23:59:58.951990 systemd[1]: Started session-4.scope - Session 4 of User core. May 13 23:59:59.007579 sshd[1643]: Connection closed by 10.0.0.1 port 44910 May 13 23:59:59.008187 sshd-session[1640]: pam_unix(sshd:session): session closed for user core May 13 23:59:59.026866 systemd[1]: sshd@3-10.0.0.99:22-10.0.0.1:44910.service: Deactivated successfully. May 13 23:59:59.028610 systemd[1]: session-4.scope: Deactivated successfully. May 13 23:59:59.030285 systemd-logind[1492]: Session 4 logged out. Waiting for processes to exit. May 13 23:59:59.036001 systemd[1]: Started sshd@4-10.0.0.99:22-10.0.0.1:44924.service - OpenSSH per-connection server daemon (10.0.0.1:44924). May 13 23:59:59.037055 systemd-logind[1492]: Removed session 4. May 13 23:59:59.070425 sshd[1648]: Accepted publickey for core from 10.0.0.1 port 44924 ssh2: RSA SHA256:2Vys6akM3bwlRlykLnopippME/f1tLQVgpTw56u59EA May 13 23:59:59.072095 sshd-session[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:59:59.076362 systemd-logind[1492]: New session 5 of user core. May 13 23:59:59.086815 systemd[1]: Started session-5.scope - Session 5 of User core. 
May 13 23:59:59.147569 sudo[1652]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 13 23:59:59.147939 sudo[1652]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 23:59:59.164892 sudo[1652]: pam_unix(sudo:session): session closed for user root May 13 23:59:59.166983 sshd[1651]: Connection closed by 10.0.0.1 port 44924 May 13 23:59:59.167522 sshd-session[1648]: pam_unix(sshd:session): session closed for user core May 13 23:59:59.179901 systemd[1]: sshd@4-10.0.0.99:22-10.0.0.1:44924.service: Deactivated successfully. May 13 23:59:59.182048 systemd[1]: session-5.scope: Deactivated successfully. May 13 23:59:59.183138 systemd-logind[1492]: Session 5 logged out. Waiting for processes to exit. May 13 23:59:59.198120 systemd[1]: Started sshd@5-10.0.0.99:22-10.0.0.1:44932.service - OpenSSH per-connection server daemon (10.0.0.1:44932). May 13 23:59:59.198960 systemd-logind[1492]: Removed session 5. May 13 23:59:59.235911 sshd[1657]: Accepted publickey for core from 10.0.0.1 port 44932 ssh2: RSA SHA256:2Vys6akM3bwlRlykLnopippME/f1tLQVgpTw56u59EA May 13 23:59:59.237963 sshd-session[1657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:59:59.242528 systemd-logind[1492]: New session 6 of user core. May 13 23:59:59.252879 systemd[1]: Started session-6.scope - Session 6 of User core. 
May 13 23:59:59.310911 sudo[1662]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 13 23:59:59.311242 sudo[1662]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 23:59:59.316310 sudo[1662]: pam_unix(sudo:session): session closed for user root May 13 23:59:59.324337 sudo[1661]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 13 23:59:59.324682 sudo[1661]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 23:59:59.344138 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 13 23:59:59.379371 augenrules[1684]: No rules May 13 23:59:59.381115 systemd[1]: audit-rules.service: Deactivated successfully. May 13 23:59:59.381390 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 13 23:59:59.382640 sudo[1661]: pam_unix(sudo:session): session closed for user root May 13 23:59:59.384165 sshd[1660]: Connection closed by 10.0.0.1 port 44932 May 13 23:59:59.384461 sshd-session[1657]: pam_unix(sshd:session): session closed for user core May 13 23:59:59.399452 systemd[1]: sshd@5-10.0.0.99:22-10.0.0.1:44932.service: Deactivated successfully. May 13 23:59:59.401180 systemd[1]: session-6.scope: Deactivated successfully. May 13 23:59:59.402763 systemd-logind[1492]: Session 6 logged out. Waiting for processes to exit. May 13 23:59:59.411922 systemd[1]: Started sshd@6-10.0.0.99:22-10.0.0.1:44934.service - OpenSSH per-connection server daemon (10.0.0.1:44934). May 13 23:59:59.413062 systemd-logind[1492]: Removed session 6. May 13 23:59:59.445146 sshd[1692]: Accepted publickey for core from 10.0.0.1 port 44934 ssh2: RSA SHA256:2Vys6akM3bwlRlykLnopippME/f1tLQVgpTw56u59EA May 13 23:59:59.446653 sshd-session[1692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:59:59.450959 systemd-logind[1492]: New session 7 of user core. 
May 13 23:59:59.468814 systemd[1]: Started session-7.scope - Session 7 of User core. May 13 23:59:59.523489 sudo[1696]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 13 23:59:59.523828 sudo[1696]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 23:59:59.881029 systemd[1]: Starting docker.service - Docker Application Container Engine... May 13 23:59:59.881442 (dockerd)[1715]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 14 00:00:00.171203 dockerd[1715]: time="2025-05-14T00:00:00.171070328Z" level=info msg="Starting up" May 14 00:00:00.189167 systemd[1]: Started logrotate.service - Rotate and Compress System Logs. May 14 00:00:00.677186 systemd[1]: logrotate.service: Deactivated successfully. May 14 00:00:00.758723 dockerd[1715]: time="2025-05-14T00:00:00.758650917Z" level=info msg="Loading containers: start." May 14 00:00:00.974723 kernel: Initializing XFRM netlink socket May 14 00:00:01.069256 systemd-networkd[1443]: docker0: Link UP May 14 00:00:01.112929 dockerd[1715]: time="2025-05-14T00:00:01.112882452Z" level=info msg="Loading containers: done." May 14 00:00:01.128390 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2358300045-merged.mount: Deactivated successfully. 
May 14 00:00:01.130983 dockerd[1715]: time="2025-05-14T00:00:01.130929089Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 14 00:00:01.131062 dockerd[1715]: time="2025-05-14T00:00:01.131042701Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 May 14 00:00:01.131225 dockerd[1715]: time="2025-05-14T00:00:01.131191221Z" level=info msg="Daemon has completed initialization" May 14 00:00:01.172407 dockerd[1715]: time="2025-05-14T00:00:01.172335798Z" level=info msg="API listen on /run/docker.sock" May 14 00:00:01.172562 systemd[1]: Started docker.service - Docker Application Container Engine. May 14 00:00:01.937822 containerd[1514]: time="2025-05-14T00:00:01.937765957Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 14 00:00:03.162628 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount539629984.mount: Deactivated successfully. 
May 14 00:00:04.742968 containerd[1514]: time="2025-05-14T00:00:04.742884472Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:00:04.744108 containerd[1514]: time="2025-05-14T00:00:04.744044163Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=32674873" May 14 00:00:04.746324 containerd[1514]: time="2025-05-14T00:00:04.746249951Z" level=info msg="ImageCreate event name:\"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:00:04.750119 containerd[1514]: time="2025-05-14T00:00:04.750053342Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:00:04.751364 containerd[1514]: time="2025-05-14T00:00:04.751314022Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"32671673\" in 2.813501365s" May 14 00:00:04.751364 containerd[1514]: time="2025-05-14T00:00:04.751366601Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\"" May 14 00:00:04.777983 containerd[1514]: time="2025-05-14T00:00:04.777942022Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" May 14 00:00:05.320730 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
May 14 00:00:05.334911 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:00:05.497432 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:00:05.502480 (kubelet)[1988]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 00:00:05.554814 kubelet[1988]: E0514 00:00:05.554752 1988 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 00:00:05.562624 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 00:00:05.562880 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 00:00:05.563247 systemd[1]: kubelet.service: Consumed 223ms CPU time, 99.1M memory peak. 
May 14 00:00:07.480437 containerd[1514]: time="2025-05-14T00:00:07.480334637Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:00:07.481961 containerd[1514]: time="2025-05-14T00:00:07.481900468Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=29617534" May 14 00:00:07.483972 containerd[1514]: time="2025-05-14T00:00:07.483899403Z" level=info msg="ImageCreate event name:\"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:00:07.487376 containerd[1514]: time="2025-05-14T00:00:07.487332435Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:00:07.488618 containerd[1514]: time="2025-05-14T00:00:07.488569371Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"31105907\" in 2.710582742s" May 14 00:00:07.488703 containerd[1514]: time="2025-05-14T00:00:07.488617985Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\"" May 14 00:00:07.515071 containerd[1514]: time="2025-05-14T00:00:07.515025757Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" May 14 00:00:08.675998 containerd[1514]: time="2025-05-14T00:00:08.675911780Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:00:08.676795 containerd[1514]: time="2025-05-14T00:00:08.676728209Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=17903682" May 14 00:00:08.677980 containerd[1514]: time="2025-05-14T00:00:08.677928448Z" level=info msg="ImageCreate event name:\"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:00:08.680721 containerd[1514]: time="2025-05-14T00:00:08.680653608Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:00:08.681783 containerd[1514]: time="2025-05-14T00:00:08.681735675Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"19392073\" in 1.166665671s" May 14 00:00:08.681783 containerd[1514]: time="2025-05-14T00:00:08.681768913Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\"" May 14 00:00:08.705890 containerd[1514]: time="2025-05-14T00:00:08.705845491Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 14 00:00:10.087650 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1302620713.mount: Deactivated successfully. 
May 14 00:00:10.823590 containerd[1514]: time="2025-05-14T00:00:10.823510122Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:00:10.824559 containerd[1514]: time="2025-05-14T00:00:10.824527494Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=29185817" May 14 00:00:10.828522 containerd[1514]: time="2025-05-14T00:00:10.828485115Z" level=info msg="ImageCreate event name:\"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:00:10.848205 containerd[1514]: time="2025-05-14T00:00:10.848124789Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:00:10.849061 containerd[1514]: time="2025-05-14T00:00:10.849004311Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"29184836\" in 2.142972053s" May 14 00:00:10.849122 containerd[1514]: time="2025-05-14T00:00:10.849065787Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\"" May 14 00:00:10.879258 containerd[1514]: time="2025-05-14T00:00:10.879185903Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 14 00:00:11.463127 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2574642085.mount: Deactivated successfully. 
May 14 00:00:12.215200 containerd[1514]: time="2025-05-14T00:00:12.215098485Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" May 14 00:00:12.215639 containerd[1514]: time="2025-05-14T00:00:12.215123141Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:00:12.216867 containerd[1514]: time="2025-05-14T00:00:12.216823156Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:00:12.224398 containerd[1514]: time="2025-05-14T00:00:12.224327209Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:00:12.225656 containerd[1514]: time="2025-05-14T00:00:12.225613307Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.346371534s" May 14 00:00:12.225656 containerd[1514]: time="2025-05-14T00:00:12.225644649Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 14 00:00:12.249463 containerd[1514]: time="2025-05-14T00:00:12.249421098Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 14 00:00:12.867475 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1598341176.mount: Deactivated successfully. 
May 14 00:00:12.875402 containerd[1514]: time="2025-05-14T00:00:12.875333794Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:00:12.876354 containerd[1514]: time="2025-05-14T00:00:12.876296341Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" May 14 00:00:12.877577 containerd[1514]: time="2025-05-14T00:00:12.877528663Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:00:12.880045 containerd[1514]: time="2025-05-14T00:00:12.880000274Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:00:12.880625 containerd[1514]: time="2025-05-14T00:00:12.880581643Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 631.119589ms" May 14 00:00:12.880625 containerd[1514]: time="2025-05-14T00:00:12.880608943Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" May 14 00:00:12.903901 containerd[1514]: time="2025-05-14T00:00:12.903842051Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 14 00:00:13.533155 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3296052087.mount: Deactivated successfully. May 14 00:00:15.570558 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
May 14 00:00:15.594004 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:00:16.222461 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:00:16.227476 (kubelet)[2149]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 00:00:16.511173 kubelet[2149]: E0514 00:00:16.511009 2149 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 00:00:16.515664 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 00:00:16.515939 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 00:00:16.516350 systemd[1]: kubelet.service: Consumed 253ms CPU time, 96.3M memory peak. 
May 14 00:00:16.830688 containerd[1514]: time="2025-05-14T00:00:16.830604915Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:00:16.831571 containerd[1514]: time="2025-05-14T00:00:16.831472489Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" May 14 00:00:16.832747 containerd[1514]: time="2025-05-14T00:00:16.832711944Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:00:16.836729 containerd[1514]: time="2025-05-14T00:00:16.836653451Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:00:16.838292 containerd[1514]: time="2025-05-14T00:00:16.838247747Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.934359567s" May 14 00:00:16.838387 containerd[1514]: time="2025-05-14T00:00:16.838289475Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" May 14 00:00:19.497290 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:00:19.497507 systemd[1]: kubelet.service: Consumed 253ms CPU time, 96.3M memory peak. May 14 00:00:19.508895 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:00:19.528104 systemd[1]: Reload requested from client PID 2242 ('systemctl') (unit session-7.scope)... 
May 14 00:00:19.528122 systemd[1]: Reloading... May 14 00:00:19.625706 zram_generator::config[2286]: No configuration found. May 14 00:00:19.978541 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 00:00:20.088167 systemd[1]: Reloading finished in 559 ms. May 14 00:00:20.146223 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:00:20.151165 systemd[1]: kubelet.service: Deactivated successfully. May 14 00:00:20.151488 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:00:20.151545 systemd[1]: kubelet.service: Consumed 155ms CPU time, 83.5M memory peak. May 14 00:00:20.161939 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:00:20.311398 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:00:20.316402 (kubelet)[2336]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 00:00:20.368067 kubelet[2336]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 00:00:20.368067 kubelet[2336]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 14 00:00:20.368067 kubelet[2336]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 14 00:00:20.368476 kubelet[2336]: I0514 00:00:20.368101 2336 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 00:00:20.602532 kubelet[2336]: I0514 00:00:20.602436 2336 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 14 00:00:20.602532 kubelet[2336]: I0514 00:00:20.602513 2336 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 00:00:20.602784 kubelet[2336]: I0514 00:00:20.602760 2336 server.go:927] "Client rotation is on, will bootstrap in background" May 14 00:00:20.669579 kubelet[2336]: I0514 00:00:20.669514 2336 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 00:00:20.674908 kubelet[2336]: E0514 00:00:20.674878 2336 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.99:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.99:6443: connect: connection refused May 14 00:00:20.725121 kubelet[2336]: I0514 00:00:20.724782 2336 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 14 00:00:20.728792 kubelet[2336]: I0514 00:00:20.728470 2336 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 00:00:20.729023 kubelet[2336]: I0514 00:00:20.728768 2336 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 14 00:00:20.729999 kubelet[2336]: I0514 00:00:20.729958 2336 topology_manager.go:138] "Creating topology manager with none policy" May 14 
00:00:20.729999 kubelet[2336]: I0514 00:00:20.729984 2336 container_manager_linux.go:301] "Creating device plugin manager" May 14 00:00:20.730173 kubelet[2336]: I0514 00:00:20.730139 2336 state_mem.go:36] "Initialized new in-memory state store" May 14 00:00:20.731274 kubelet[2336]: I0514 00:00:20.731225 2336 kubelet.go:400] "Attempting to sync node with API server" May 14 00:00:20.731274 kubelet[2336]: I0514 00:00:20.731249 2336 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 00:00:20.731274 kubelet[2336]: I0514 00:00:20.731272 2336 kubelet.go:312] "Adding apiserver pod source" May 14 00:00:20.731375 kubelet[2336]: I0514 00:00:20.731283 2336 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 00:00:20.731826 kubelet[2336]: W0514 00:00:20.731740 2336 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused May 14 00:00:20.731826 kubelet[2336]: E0514 00:00:20.731792 2336 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused May 14 00:00:20.731994 kubelet[2336]: W0514 00:00:20.731865 2336 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.99:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused May 14 00:00:20.731994 kubelet[2336]: E0514 00:00:20.731892 2336 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.99:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection 
refused May 14 00:00:20.735905 kubelet[2336]: I0514 00:00:20.735861 2336 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 14 00:00:20.737736 kubelet[2336]: I0514 00:00:20.737664 2336 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 00:00:20.737880 kubelet[2336]: W0514 00:00:20.737796 2336 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 14 00:00:20.738989 kubelet[2336]: I0514 00:00:20.738589 2336 server.go:1264] "Started kubelet" May 14 00:00:20.740033 kubelet[2336]: I0514 00:00:20.739786 2336 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 00:00:20.740713 kubelet[2336]: I0514 00:00:20.740186 2336 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 00:00:20.740713 kubelet[2336]: I0514 00:00:20.740378 2336 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 14 00:00:20.740713 kubelet[2336]: I0514 00:00:20.740425 2336 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 00:00:20.741855 kubelet[2336]: I0514 00:00:20.741826 2336 server.go:455] "Adding debug handlers to kubelet server" May 14 00:00:20.743991 kubelet[2336]: E0514 00:00:20.743958 2336 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 00:00:20.744038 kubelet[2336]: I0514 00:00:20.744006 2336 volume_manager.go:291] "Starting Kubelet Volume Manager" May 14 00:00:20.744125 kubelet[2336]: I0514 00:00:20.744097 2336 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 14 00:00:20.744176 kubelet[2336]: I0514 00:00:20.744160 2336 reconciler.go:26] "Reconciler: start to sync state" May 14 00:00:20.745085 kubelet[2336]: W0514 00:00:20.744524 2336 
reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused May 14 00:00:20.745085 kubelet[2336]: E0514 00:00:20.744579 2336 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused May 14 00:00:20.745085 kubelet[2336]: E0514 00:00:20.744823 2336 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.99:6443: connect: connection refused" interval="200ms" May 14 00:00:20.745405 kubelet[2336]: E0514 00:00:20.745258 2336 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.99:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.99:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f3bb3127f43e9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-14 00:00:20.738556905 +0000 UTC m=+0.418410075,LastTimestamp:2025-05-14 00:00:20.738556905 +0000 UTC m=+0.418410075,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 14 00:00:20.746448 kubelet[2336]: E0514 00:00:20.746410 2336 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 00:00:20.746504 kubelet[2336]: I0514 00:00:20.746470 2336 factory.go:221] Registration of the containerd container factory successfully May 14 00:00:20.746504 kubelet[2336]: I0514 00:00:20.746487 2336 factory.go:221] Registration of the systemd container factory successfully May 14 00:00:20.746607 kubelet[2336]: I0514 00:00:20.746583 2336 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 00:00:20.758377 kubelet[2336]: I0514 00:00:20.758220 2336 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 00:00:20.759946 kubelet[2336]: I0514 00:00:20.759928 2336 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 14 00:00:20.760093 kubelet[2336]: I0514 00:00:20.760080 2336 status_manager.go:217] "Starting to sync pod status with apiserver" May 14 00:00:20.760610 kubelet[2336]: I0514 00:00:20.760596 2336 kubelet.go:2337] "Starting kubelet main sync loop" May 14 00:00:20.761968 kubelet[2336]: W0514 00:00:20.761911 2336 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused May 14 00:00:20.762031 kubelet[2336]: E0514 00:00:20.761978 2336 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused May 14 00:00:20.762323 kubelet[2336]: E0514 00:00:20.762108 2336 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not 
healthy: pleg has yet to be successful]" May 14 00:00:20.763506 kubelet[2336]: I0514 00:00:20.763491 2336 cpu_manager.go:214] "Starting CPU manager" policy="none" May 14 00:00:20.763602 kubelet[2336]: I0514 00:00:20.763589 2336 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 14 00:00:20.763702 kubelet[2336]: I0514 00:00:20.763692 2336 state_mem.go:36] "Initialized new in-memory state store" May 14 00:00:20.847463 kubelet[2336]: I0514 00:00:20.847422 2336 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 14 00:00:20.848070 kubelet[2336]: E0514 00:00:20.848043 2336 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.99:6443/api/v1/nodes\": dial tcp 10.0.0.99:6443: connect: connection refused" node="localhost" May 14 00:00:20.862882 kubelet[2336]: E0514 00:00:20.862787 2336 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 14 00:00:20.945260 kubelet[2336]: E0514 00:00:20.945205 2336 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.99:6443: connect: connection refused" interval="400ms" May 14 00:00:21.050135 kubelet[2336]: I0514 00:00:21.050078 2336 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 14 00:00:21.050414 kubelet[2336]: E0514 00:00:21.050373 2336 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.99:6443/api/v1/nodes\": dial tcp 10.0.0.99:6443: connect: connection refused" node="localhost" May 14 00:00:21.063482 kubelet[2336]: E0514 00:00:21.063461 2336 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 14 00:00:21.346053 kubelet[2336]: E0514 00:00:21.345989 2336 controller.go:145] "Failed to ensure lease exists, will retry" 
err="Get \"https://10.0.0.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.99:6443: connect: connection refused" interval="800ms" May 14 00:00:21.452009 kubelet[2336]: I0514 00:00:21.451954 2336 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 14 00:00:21.452495 kubelet[2336]: E0514 00:00:21.452297 2336 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.99:6443/api/v1/nodes\": dial tcp 10.0.0.99:6443: connect: connection refused" node="localhost" May 14 00:00:21.464478 kubelet[2336]: E0514 00:00:21.464441 2336 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 14 00:00:21.547357 kubelet[2336]: W0514 00:00:21.547268 2336 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused May 14 00:00:21.547357 kubelet[2336]: E0514 00:00:21.547334 2336 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused May 14 00:00:21.597003 kubelet[2336]: W0514 00:00:21.596812 2336 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused May 14 00:00:21.597003 kubelet[2336]: E0514 00:00:21.596884 2336 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": 
dial tcp 10.0.0.99:6443: connect: connection refused May 14 00:00:21.830434 kubelet[2336]: W0514 00:00:21.830323 2336 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused May 14 00:00:21.830434 kubelet[2336]: E0514 00:00:21.830389 2336 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused May 14 00:00:21.831469 kubelet[2336]: I0514 00:00:21.831431 2336 policy_none.go:49] "None policy: Start" May 14 00:00:21.832202 kubelet[2336]: I0514 00:00:21.832165 2336 memory_manager.go:170] "Starting memorymanager" policy="None" May 14 00:00:21.832202 kubelet[2336]: I0514 00:00:21.832199 2336 state_mem.go:35] "Initializing new in-memory state store" May 14 00:00:21.892874 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 14 00:00:21.906750 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 14 00:00:21.910211 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
May 14 00:00:21.922870 kubelet[2336]: I0514 00:00:21.922768 2336 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 00:00:21.923117 kubelet[2336]: I0514 00:00:21.923060 2336 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 00:00:21.923468 kubelet[2336]: I0514 00:00:21.923195 2336 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 00:00:21.924278 kubelet[2336]: E0514 00:00:21.924246 2336 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 14 00:00:22.049359 kubelet[2336]: W0514 00:00:22.049256 2336 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.99:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused May 14 00:00:22.049359 kubelet[2336]: E0514 00:00:22.049341 2336 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.99:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused May 14 00:00:22.147437 kubelet[2336]: E0514 00:00:22.147282 2336 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.99:6443: connect: connection refused" interval="1.6s" May 14 00:00:22.254608 kubelet[2336]: I0514 00:00:22.254563 2336 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 14 00:00:22.255059 kubelet[2336]: E0514 00:00:22.255008 2336 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.99:6443/api/v1/nodes\": dial tcp 10.0.0.99:6443: connect: connection refused" node="localhost" May 14 
00:00:22.265207 kubelet[2336]: I0514 00:00:22.265133 2336 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 14 00:00:22.266493 kubelet[2336]: I0514 00:00:22.266469 2336 topology_manager.go:215] "Topology Admit Handler" podUID="53ee06350b31eda269b09d404e2ef464" podNamespace="kube-system" podName="kube-apiserver-localhost" May 14 00:00:22.267315 kubelet[2336]: I0514 00:00:22.267276 2336 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 14 00:00:22.273393 systemd[1]: Created slice kubepods-burstable-pod6ece95f10dbffa04b25ec3439a115512.slice - libcontainer container kubepods-burstable-pod6ece95f10dbffa04b25ec3439a115512.slice. May 14 00:00:22.286904 systemd[1]: Created slice kubepods-burstable-pod53ee06350b31eda269b09d404e2ef464.slice - libcontainer container kubepods-burstable-pod53ee06350b31eda269b09d404e2ef464.slice. May 14 00:00:22.297512 systemd[1]: Created slice kubepods-burstable-podb20b39a8540dba87b5883a6f0f602dba.slice - libcontainer container kubepods-burstable-podb20b39a8540dba87b5883a6f0f602dba.slice. 
May 14 00:00:22.355805 kubelet[2336]: I0514 00:00:22.355734 2336 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/53ee06350b31eda269b09d404e2ef464-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"53ee06350b31eda269b09d404e2ef464\") " pod="kube-system/kube-apiserver-localhost" May 14 00:00:22.355805 kubelet[2336]: I0514 00:00:22.355789 2336 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:00:22.355805 kubelet[2336]: I0514 00:00:22.355815 2336 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:00:22.356001 kubelet[2336]: I0514 00:00:22.355835 2336 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:00:22.356001 kubelet[2336]: I0514 00:00:22.355858 2336 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " 
pod="kube-system/kube-controller-manager-localhost" May 14 00:00:22.356001 kubelet[2336]: I0514 00:00:22.355911 2336 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 14 00:00:22.356001 kubelet[2336]: I0514 00:00:22.355950 2336 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/53ee06350b31eda269b09d404e2ef464-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"53ee06350b31eda269b09d404e2ef464\") " pod="kube-system/kube-apiserver-localhost" May 14 00:00:22.356001 kubelet[2336]: I0514 00:00:22.355972 2336 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 00:00:22.356144 kubelet[2336]: I0514 00:00:22.355989 2336 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/53ee06350b31eda269b09d404e2ef464-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"53ee06350b31eda269b09d404e2ef464\") " pod="kube-system/kube-apiserver-localhost" May 14 00:00:22.584909 kubelet[2336]: E0514 00:00:22.584857 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:00:22.585492 containerd[1514]: time="2025-05-14T00:00:22.585462982Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}" May 14 00:00:22.595851 kubelet[2336]: E0514 00:00:22.595801 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:00:22.596353 containerd[1514]: time="2025-05-14T00:00:22.596309170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:53ee06350b31eda269b09d404e2ef464,Namespace:kube-system,Attempt:0,}" May 14 00:00:22.600588 kubelet[2336]: E0514 00:00:22.600554 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:00:22.600963 containerd[1514]: time="2025-05-14T00:00:22.600926330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}" May 14 00:00:22.689703 kubelet[2336]: E0514 00:00:22.689591 2336 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.99:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.99:6443: connect: connection refused May 14 00:00:22.750129 kubelet[2336]: E0514 00:00:22.749994 2336 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.99:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.99:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f3bb3127f43e9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-14 00:00:20.738556905 +0000 UTC m=+0.418410075,LastTimestamp:2025-05-14 00:00:20.738556905 +0000 UTC m=+0.418410075,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 14 00:00:23.175915 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount462006102.mount: Deactivated successfully. May 14 00:00:23.183489 containerd[1514]: time="2025-05-14T00:00:23.183444894Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 00:00:23.187057 containerd[1514]: time="2025-05-14T00:00:23.187021230Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" May 14 00:00:23.188431 containerd[1514]: time="2025-05-14T00:00:23.188404703Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 00:00:23.190466 containerd[1514]: time="2025-05-14T00:00:23.190426570Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 00:00:23.191444 containerd[1514]: time="2025-05-14T00:00:23.191422402Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 14 00:00:23.192226 containerd[1514]: time="2025-05-14T00:00:23.192182394Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 00:00:23.193142 containerd[1514]: 
time="2025-05-14T00:00:23.193112819Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 14 00:00:23.193995 containerd[1514]: time="2025-05-14T00:00:23.193956248Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 00:00:23.194786 containerd[1514]: time="2025-05-14T00:00:23.194749932Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 609.194779ms" May 14 00:00:23.198149 containerd[1514]: time="2025-05-14T00:00:23.198102338Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 601.688974ms" May 14 00:00:23.198903 containerd[1514]: time="2025-05-14T00:00:23.198871194Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 597.857879ms" May 14 00:00:23.216377 kubelet[2336]: W0514 00:00:23.216332 2336 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection 
refused May 14 00:00:23.216377 kubelet[2336]: E0514 00:00:23.216369 2336 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused May 14 00:00:23.331493 containerd[1514]: time="2025-05-14T00:00:23.331065970Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 00:00:23.331493 containerd[1514]: time="2025-05-14T00:00:23.331205650Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 00:00:23.331493 containerd[1514]: time="2025-05-14T00:00:23.331223679Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:00:23.331493 containerd[1514]: time="2025-05-14T00:00:23.331401281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:00:23.331493 containerd[1514]: time="2025-05-14T00:00:23.330281543Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 00:00:23.331493 containerd[1514]: time="2025-05-14T00:00:23.330349035Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 00:00:23.331493 containerd[1514]: time="2025-05-14T00:00:23.330363334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:00:23.331493 containerd[1514]: time="2025-05-14T00:00:23.330446791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:00:23.332084 containerd[1514]: time="2025-05-14T00:00:23.330060532Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 00:00:23.332084 containerd[1514]: time="2025-05-14T00:00:23.331632217Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 00:00:23.332084 containerd[1514]: time="2025-05-14T00:00:23.331807583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:00:23.332894 containerd[1514]: time="2025-05-14T00:00:23.332859678Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:00:23.360863 systemd[1]: Started cri-containerd-1372b456ec39f71489fc5a3ed81a37ff0e3a9d8d854606ad45d5f42434a11d6f.scope - libcontainer container 1372b456ec39f71489fc5a3ed81a37ff0e3a9d8d854606ad45d5f42434a11d6f. May 14 00:00:23.362690 systemd[1]: Started cri-containerd-a27e986890039ab0590124cb3d2e67203891bf7deaa314fda8ddf8e11e51f9e6.scope - libcontainer container a27e986890039ab0590124cb3d2e67203891bf7deaa314fda8ddf8e11e51f9e6. May 14 00:00:23.365210 systemd[1]: Started cri-containerd-f5b3b3bdf7283867522cb4253518c3a4ab20d13d7219e9ad62d397fe10ded8d8.scope - libcontainer container f5b3b3bdf7283867522cb4253518c3a4ab20d13d7219e9ad62d397fe10ded8d8. 
May 14 00:00:23.405357 containerd[1514]: time="2025-05-14T00:00:23.405317155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"f5b3b3bdf7283867522cb4253518c3a4ab20d13d7219e9ad62d397fe10ded8d8\"" May 14 00:00:23.406213 containerd[1514]: time="2025-05-14T00:00:23.405825765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:53ee06350b31eda269b09d404e2ef464,Namespace:kube-system,Attempt:0,} returns sandbox id \"1372b456ec39f71489fc5a3ed81a37ff0e3a9d8d854606ad45d5f42434a11d6f\"" May 14 00:00:23.406447 kubelet[2336]: E0514 00:00:23.406427 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:00:23.408058 kubelet[2336]: E0514 00:00:23.408031 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:00:23.408945 containerd[1514]: time="2025-05-14T00:00:23.408833938Z" level=info msg="CreateContainer within sandbox \"f5b3b3bdf7283867522cb4253518c3a4ab20d13d7219e9ad62d397fe10ded8d8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 14 00:00:23.411245 containerd[1514]: time="2025-05-14T00:00:23.411193912Z" level=info msg="CreateContainer within sandbox \"1372b456ec39f71489fc5a3ed81a37ff0e3a9d8d854606ad45d5f42434a11d6f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 14 00:00:23.412654 containerd[1514]: time="2025-05-14T00:00:23.412626630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"a27e986890039ab0590124cb3d2e67203891bf7deaa314fda8ddf8e11e51f9e6\"" May 14 
00:00:23.413545 kubelet[2336]: E0514 00:00:23.413489 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:00:23.416413 containerd[1514]: time="2025-05-14T00:00:23.416378720Z" level=info msg="CreateContainer within sandbox \"a27e986890039ab0590124cb3d2e67203891bf7deaa314fda8ddf8e11e51f9e6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 14 00:00:23.440924 containerd[1514]: time="2025-05-14T00:00:23.440820595Z" level=info msg="CreateContainer within sandbox \"f5b3b3bdf7283867522cb4253518c3a4ab20d13d7219e9ad62d397fe10ded8d8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6a700276efb815416190dd4952cf5babd80dc06e3b775d3e03b1f2d9613891af\"" May 14 00:00:23.441421 containerd[1514]: time="2025-05-14T00:00:23.441362876Z" level=info msg="StartContainer for \"6a700276efb815416190dd4952cf5babd80dc06e3b775d3e03b1f2d9613891af\"" May 14 00:00:23.444556 containerd[1514]: time="2025-05-14T00:00:23.444532488Z" level=info msg="CreateContainer within sandbox \"1372b456ec39f71489fc5a3ed81a37ff0e3a9d8d854606ad45d5f42434a11d6f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a040bca3acb810a229395f75b5167f09303e5090c2f0fee9488412d745c1bc8c\"" May 14 00:00:23.444985 containerd[1514]: time="2025-05-14T00:00:23.444963176Z" level=info msg="StartContainer for \"a040bca3acb810a229395f75b5167f09303e5090c2f0fee9488412d745c1bc8c\"" May 14 00:00:23.449166 containerd[1514]: time="2025-05-14T00:00:23.449096181Z" level=info msg="CreateContainer within sandbox \"a27e986890039ab0590124cb3d2e67203891bf7deaa314fda8ddf8e11e51f9e6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c118dffeff408076098d0d5f65c415162358f4e8adcc6593ee37301b9fd00285\"" May 14 00:00:23.449586 containerd[1514]: time="2025-05-14T00:00:23.449569375Z" level=info 
msg="StartContainer for \"c118dffeff408076098d0d5f65c415162358f4e8adcc6593ee37301b9fd00285\"" May 14 00:00:23.470838 systemd[1]: Started cri-containerd-6a700276efb815416190dd4952cf5babd80dc06e3b775d3e03b1f2d9613891af.scope - libcontainer container 6a700276efb815416190dd4952cf5babd80dc06e3b775d3e03b1f2d9613891af. May 14 00:00:23.475423 systemd[1]: Started cri-containerd-a040bca3acb810a229395f75b5167f09303e5090c2f0fee9488412d745c1bc8c.scope - libcontainer container a040bca3acb810a229395f75b5167f09303e5090c2f0fee9488412d745c1bc8c. May 14 00:00:23.476946 systemd[1]: Started cri-containerd-c118dffeff408076098d0d5f65c415162358f4e8adcc6593ee37301b9fd00285.scope - libcontainer container c118dffeff408076098d0d5f65c415162358f4e8adcc6593ee37301b9fd00285. May 14 00:00:23.524166 containerd[1514]: time="2025-05-14T00:00:23.524104018Z" level=info msg="StartContainer for \"6a700276efb815416190dd4952cf5babd80dc06e3b775d3e03b1f2d9613891af\" returns successfully" May 14 00:00:23.524499 containerd[1514]: time="2025-05-14T00:00:23.524424899Z" level=info msg="StartContainer for \"a040bca3acb810a229395f75b5167f09303e5090c2f0fee9488412d745c1bc8c\" returns successfully" May 14 00:00:23.531114 containerd[1514]: time="2025-05-14T00:00:23.531062209Z" level=info msg="StartContainer for \"c118dffeff408076098d0d5f65c415162358f4e8adcc6593ee37301b9fd00285\" returns successfully" May 14 00:00:23.574222 kubelet[2336]: W0514 00:00:23.573371 2336 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused May 14 00:00:23.574222 kubelet[2336]: E0514 00:00:23.573424 2336 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: 
connection refused May 14 00:00:23.778656 kubelet[2336]: E0514 00:00:23.778501 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:00:23.784283 kubelet[2336]: E0514 00:00:23.783579 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:00:23.784283 kubelet[2336]: E0514 00:00:23.784042 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:00:23.856425 kubelet[2336]: I0514 00:00:23.856375 2336 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 14 00:00:24.784703 kubelet[2336]: E0514 00:00:24.784620 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:00:24.785522 kubelet[2336]: E0514 00:00:24.785496 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:00:25.112055 kubelet[2336]: E0514 00:00:25.111862 2336 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 14 00:00:25.129721 kubelet[2336]: I0514 00:00:25.129659 2336 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 14 00:00:25.737574 kubelet[2336]: I0514 00:00:25.737528 2336 apiserver.go:52] "Watching apiserver" May 14 00:00:25.744660 kubelet[2336]: I0514 00:00:25.744638 2336 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 14 00:00:25.796170 kubelet[2336]: E0514 
00:00:25.796119 2336 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 14 00:00:25.796870 kubelet[2336]: E0514 00:00:25.796578 2336 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:00:27.362318 systemd[1]: Reload requested from client PID 2618 ('systemctl') (unit session-7.scope)... May 14 00:00:27.362338 systemd[1]: Reloading... May 14 00:00:27.485721 zram_generator::config[2671]: No configuration found. May 14 00:00:27.672143 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 00:00:27.796301 systemd[1]: Reloading finished in 433 ms. May 14 00:00:27.827741 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:00:27.844419 systemd[1]: kubelet.service: Deactivated successfully. May 14 00:00:27.844785 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:00:27.844852 systemd[1]: kubelet.service: Consumed 803ms CPU time, 119.2M memory peak. May 14 00:00:27.857115 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:00:28.031787 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:00:28.047224 (kubelet)[2707]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 00:00:28.099006 kubelet[2707]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 00:00:28.099006 kubelet[2707]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 14 00:00:28.099006 kubelet[2707]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 00:00:28.099453 kubelet[2707]: I0514 00:00:28.099083 2707 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 00:00:28.104648 kubelet[2707]: I0514 00:00:28.104602 2707 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 14 00:00:28.104648 kubelet[2707]: I0514 00:00:28.104629 2707 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 00:00:28.104903 kubelet[2707]: I0514 00:00:28.104882 2707 server.go:927] "Client rotation is on, will bootstrap in background" May 14 00:00:28.106155 kubelet[2707]: I0514 00:00:28.106133 2707 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 14 00:00:28.107483 kubelet[2707]: I0514 00:00:28.107452 2707 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 00:00:28.117305 kubelet[2707]: I0514 00:00:28.117267 2707 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 14 00:00:28.117596 kubelet[2707]: I0514 00:00:28.117556 2707 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 00:00:28.117803 kubelet[2707]: I0514 00:00:28.117589 2707 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 14 00:00:28.117935 kubelet[2707]: I0514 00:00:28.117814 2707 topology_manager.go:138] "Creating topology manager with none policy" May 14 
00:00:28.117935 kubelet[2707]: I0514 00:00:28.117827 2707 container_manager_linux.go:301] "Creating device plugin manager" May 14 00:00:28.117935 kubelet[2707]: I0514 00:00:28.117874 2707 state_mem.go:36] "Initialized new in-memory state store" May 14 00:00:28.118028 kubelet[2707]: I0514 00:00:28.117991 2707 kubelet.go:400] "Attempting to sync node with API server" May 14 00:00:28.118028 kubelet[2707]: I0514 00:00:28.118005 2707 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 00:00:28.118091 kubelet[2707]: I0514 00:00:28.118037 2707 kubelet.go:312] "Adding apiserver pod source" May 14 00:00:28.118091 kubelet[2707]: I0514 00:00:28.118061 2707 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 00:00:28.123456 kubelet[2707]: I0514 00:00:28.122992 2707 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 14 00:00:28.123456 kubelet[2707]: I0514 00:00:28.123242 2707 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 00:00:28.123778 kubelet[2707]: I0514 00:00:28.123751 2707 server.go:1264] "Started kubelet" May 14 00:00:28.125896 kubelet[2707]: I0514 00:00:28.125699 2707 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 00:00:28.125896 kubelet[2707]: I0514 00:00:28.125740 2707 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 14 00:00:28.126443 kubelet[2707]: I0514 00:00:28.126422 2707 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 00:00:28.126827 kubelet[2707]: I0514 00:00:28.126804 2707 server.go:455] "Adding debug handlers to kubelet server" May 14 00:00:28.127196 kubelet[2707]: E0514 00:00:28.127176 2707 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 14 00:00:28.127432 kubelet[2707]: I0514 00:00:28.127385 2707 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 14 00:00:28.128540 kubelet[2707]: I0514 00:00:28.128224 2707 volume_manager.go:291] "Starting Kubelet Volume Manager"
May 14 00:00:28.128540 kubelet[2707]: I0514 00:00:28.128335 2707 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 14 00:00:28.128540 kubelet[2707]: I0514 00:00:28.128493 2707 reconciler.go:26] "Reconciler: start to sync state"
May 14 00:00:28.135294 kubelet[2707]: I0514 00:00:28.135264 2707 factory.go:221] Registration of the containerd container factory successfully
May 14 00:00:28.135852 kubelet[2707]: I0514 00:00:28.135817 2707 factory.go:221] Registration of the systemd container factory successfully
May 14 00:00:28.136275 kubelet[2707]: I0514 00:00:28.136242 2707 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 14 00:00:28.144203 kubelet[2707]: I0514 00:00:28.144149 2707 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 14 00:00:28.145972 kubelet[2707]: I0514 00:00:28.145938 2707 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 14 00:00:28.145972 kubelet[2707]: I0514 00:00:28.145971 2707 status_manager.go:217] "Starting to sync pod status with apiserver"
May 14 00:00:28.146048 kubelet[2707]: I0514 00:00:28.145999 2707 kubelet.go:2337] "Starting kubelet main sync loop"
May 14 00:00:28.146085 kubelet[2707]: E0514 00:00:28.146065 2707 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 14 00:00:28.209491 kubelet[2707]: I0514 00:00:28.209435 2707 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 14 00:00:28.209491 kubelet[2707]: I0514 00:00:28.209474 2707 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 14 00:00:28.209491 kubelet[2707]: I0514 00:00:28.209502 2707 state_mem.go:36] "Initialized new in-memory state store"
May 14 00:00:28.209847 kubelet[2707]: I0514 00:00:28.209826 2707 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 14 00:00:28.209886 kubelet[2707]: I0514 00:00:28.209843 2707 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 14 00:00:28.209886 kubelet[2707]: I0514 00:00:28.209866 2707 policy_none.go:49] "None policy: Start"
May 14 00:00:28.210713 kubelet[2707]: I0514 00:00:28.210661 2707 memory_manager.go:170] "Starting memorymanager" policy="None"
May 14 00:00:28.210713 kubelet[2707]: I0514 00:00:28.210724 2707 state_mem.go:35] "Initializing new in-memory state store"
May 14 00:00:28.210968 kubelet[2707]: I0514 00:00:28.210947 2707 state_mem.go:75] "Updated machine memory state"
May 14 00:00:28.216913 kubelet[2707]: I0514 00:00:28.216428 2707 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 14 00:00:28.216913 kubelet[2707]: I0514 00:00:28.216639 2707 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 14 00:00:28.216913 kubelet[2707]: I0514 00:00:28.216777 2707 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 14 00:00:28.233961 kubelet[2707]: I0514 00:00:28.233900 2707 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
May 14 00:00:28.246794 kubelet[2707]: I0514 00:00:28.246701 2707 topology_manager.go:215] "Topology Admit Handler" podUID="53ee06350b31eda269b09d404e2ef464" podNamespace="kube-system" podName="kube-apiserver-localhost"
May 14 00:00:28.246976 kubelet[2707]: I0514 00:00:28.246848 2707 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost"
May 14 00:00:28.246976 kubelet[2707]: I0514 00:00:28.246924 2707 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost"
May 14 00:00:28.329035 kubelet[2707]: I0514 00:00:28.328957 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/53ee06350b31eda269b09d404e2ef464-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"53ee06350b31eda269b09d404e2ef464\") " pod="kube-system/kube-apiserver-localhost"
May 14 00:00:28.429483 kubelet[2707]: I0514 00:00:28.429179 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/53ee06350b31eda269b09d404e2ef464-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"53ee06350b31eda269b09d404e2ef464\") " pod="kube-system/kube-apiserver-localhost"
May 14 00:00:28.429483 kubelet[2707]: I0514 00:00:28.429264 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 14 00:00:28.429483 kubelet[2707]: I0514 00:00:28.429292 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 14 00:00:28.429483 kubelet[2707]: I0514 00:00:28.429308 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 14 00:00:28.429483 kubelet[2707]: I0514 00:00:28.429327 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 14 00:00:28.429775 kubelet[2707]: I0514 00:00:28.429489 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 14 00:00:28.429775 kubelet[2707]: I0514 00:00:28.429567 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost"
May 14 00:00:28.429775 kubelet[2707]: I0514 00:00:28.429596 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/53ee06350b31eda269b09d404e2ef464-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"53ee06350b31eda269b09d404e2ef464\") " pod="kube-system/kube-apiserver-localhost"
May 14 00:00:28.506275 kubelet[2707]: I0514 00:00:28.506238 2707 kubelet_node_status.go:112] "Node was previously registered" node="localhost"
May 14 00:00:28.506411 kubelet[2707]: I0514 00:00:28.506324 2707 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
May 14 00:00:28.750161 kubelet[2707]: E0514 00:00:28.749932 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:00:28.750161 kubelet[2707]: E0514 00:00:28.750020 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:00:28.750161 kubelet[2707]: E0514 00:00:28.750031 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:00:29.118561 kubelet[2707]: I0514 00:00:29.118472 2707 apiserver.go:52] "Watching apiserver"
May 14 00:00:29.129328 kubelet[2707]: I0514 00:00:29.129276 2707 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
May 14 00:00:29.192579 kubelet[2707]: E0514 00:00:29.192103 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:00:29.271007 kubelet[2707]: E0514 00:00:29.270940 2707 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
May 14 00:00:29.271007 kubelet[2707]: E0514 00:00:29.270966 2707 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
May 14 00:00:29.271461 kubelet[2707]: E0514 00:00:29.271438 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:00:29.271516 kubelet[2707]: E0514 00:00:29.271463 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:00:29.280552 kubelet[2707]: I0514 00:00:29.279395 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.279369944 podStartE2EDuration="1.279369944s" podCreationTimestamp="2025-05-14 00:00:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:00:29.200878289 +0000 UTC m=+1.145109194" watchObservedRunningTime="2025-05-14 00:00:29.279369944 +0000 UTC m=+1.223600849"
May 14 00:00:29.280552 kubelet[2707]: I0514 00:00:29.279566 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.279560254 podStartE2EDuration="1.279560254s" podCreationTimestamp="2025-05-14 00:00:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:00:29.279272352 +0000 UTC m=+1.223503257" watchObservedRunningTime="2025-05-14 00:00:29.279560254 +0000 UTC m=+1.223791159"
May 14 00:00:29.291817 kubelet[2707]: I0514 00:00:29.291738 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.291717116 podStartE2EDuration="1.291717116s" podCreationTimestamp="2025-05-14 00:00:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:00:29.290767833 +0000 UTC m=+1.234998748" watchObservedRunningTime="2025-05-14 00:00:29.291717116 +0000 UTC m=+1.235948021"
May 14 00:00:30.197208 kubelet[2707]: E0514 00:00:30.197162 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:00:30.197615 kubelet[2707]: E0514 00:00:30.197255 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:00:30.262730 kubelet[2707]: E0514 00:00:30.262659 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:00:31.195497 kubelet[2707]: E0514 00:00:31.195467 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:00:35.690052 sudo[1696]: pam_unix(sudo:session): session closed for user root
May 14 00:00:35.692198 sshd[1695]: Connection closed by 10.0.0.1 port 44934
May 14 00:00:35.693083 sshd-session[1692]: pam_unix(sshd:session): session closed for user core
May 14 00:00:35.697974 systemd[1]: sshd@6-10.0.0.99:22-10.0.0.1:44934.service: Deactivated successfully.
May 14 00:00:35.700086 systemd[1]: session-7.scope: Deactivated successfully.
May 14 00:00:35.700299 systemd[1]: session-7.scope: Consumed 4.960s CPU time, 241.5M memory peak.
May 14 00:00:35.701634 systemd-logind[1492]: Session 7 logged out. Waiting for processes to exit.
May 14 00:00:35.702758 systemd-logind[1492]: Removed session 7.
May 14 00:00:37.949636 update_engine[1498]: I20250514 00:00:37.949538 1498 update_attempter.cc:509] Updating boot flags...
May 14 00:00:37.997717 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2804)
May 14 00:00:38.054091 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2804)
May 14 00:00:38.087700 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2804)
May 14 00:00:38.792332 kubelet[2707]: E0514 00:00:38.791883 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:00:39.909903 kubelet[2707]: E0514 00:00:39.909866 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:00:40.266739 kubelet[2707]: E0514 00:00:40.266365 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:00:44.011133 kubelet[2707]: I0514 00:00:44.011096 2707 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 14 00:00:44.011596 kubelet[2707]: I0514 00:00:44.011510 2707 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 14 00:00:44.011624 containerd[1514]: time="2025-05-14T00:00:44.011359755Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 14 00:00:44.662375 kubelet[2707]: I0514 00:00:44.662314 2707 topology_manager.go:215] "Topology Admit Handler" podUID="607c88f0-0a8d-44f7-a2c3-78c6d3023649" podNamespace="kube-system" podName="kube-proxy-wtdj8"
May 14 00:00:44.670927 systemd[1]: Created slice kubepods-besteffort-pod607c88f0_0a8d_44f7_a2c3_78c6d3023649.slice - libcontainer container kubepods-besteffort-pod607c88f0_0a8d_44f7_a2c3_78c6d3023649.slice.
May 14 00:00:44.727755 kubelet[2707]: I0514 00:00:44.727690 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/607c88f0-0a8d-44f7-a2c3-78c6d3023649-kube-proxy\") pod \"kube-proxy-wtdj8\" (UID: \"607c88f0-0a8d-44f7-a2c3-78c6d3023649\") " pod="kube-system/kube-proxy-wtdj8"
May 14 00:00:44.727922 kubelet[2707]: I0514 00:00:44.727768 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/607c88f0-0a8d-44f7-a2c3-78c6d3023649-xtables-lock\") pod \"kube-proxy-wtdj8\" (UID: \"607c88f0-0a8d-44f7-a2c3-78c6d3023649\") " pod="kube-system/kube-proxy-wtdj8"
May 14 00:00:44.727922 kubelet[2707]: I0514 00:00:44.727809 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/607c88f0-0a8d-44f7-a2c3-78c6d3023649-lib-modules\") pod \"kube-proxy-wtdj8\" (UID: \"607c88f0-0a8d-44f7-a2c3-78c6d3023649\") " pod="kube-system/kube-proxy-wtdj8"
May 14 00:00:44.727922 kubelet[2707]: I0514 00:00:44.727829 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkm99\" (UniqueName: \"kubernetes.io/projected/607c88f0-0a8d-44f7-a2c3-78c6d3023649-kube-api-access-hkm99\") pod \"kube-proxy-wtdj8\" (UID: \"607c88f0-0a8d-44f7-a2c3-78c6d3023649\") " pod="kube-system/kube-proxy-wtdj8"
May 14 00:00:44.985059 kubelet[2707]: E0514 00:00:44.984922 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:00:44.985637 containerd[1514]: time="2025-05-14T00:00:44.985583657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wtdj8,Uid:607c88f0-0a8d-44f7-a2c3-78c6d3023649,Namespace:kube-system,Attempt:0,}"
May 14 00:00:45.014489 containerd[1514]: time="2025-05-14T00:00:45.013858090Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 14 00:00:45.014489 containerd[1514]: time="2025-05-14T00:00:45.014442305Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 14 00:00:45.014489 containerd[1514]: time="2025-05-14T00:00:45.014460627Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 14 00:00:45.015029 containerd[1514]: time="2025-05-14T00:00:45.014547450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 14 00:00:45.040861 systemd[1]: Started cri-containerd-e9cd1d38c9b59af7a97b3868161361ef70e47575a96a55ae032171bf4ca67a88.scope - libcontainer container e9cd1d38c9b59af7a97b3868161361ef70e47575a96a55ae032171bf4ca67a88.
May 14 00:00:45.064878 containerd[1514]: time="2025-05-14T00:00:45.064827921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wtdj8,Uid:607c88f0-0a8d-44f7-a2c3-78c6d3023649,Namespace:kube-system,Attempt:0,} returns sandbox id \"e9cd1d38c9b59af7a97b3868161361ef70e47575a96a55ae032171bf4ca67a88\""
May 14 00:00:45.065536 kubelet[2707]: E0514 00:00:45.065512 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:00:45.069755 containerd[1514]: time="2025-05-14T00:00:45.067975033Z" level=info msg="CreateContainer within sandbox \"e9cd1d38c9b59af7a97b3868161361ef70e47575a96a55ae032171bf4ca67a88\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 14 00:00:45.091109 containerd[1514]: time="2025-05-14T00:00:45.091060608Z" level=info msg="CreateContainer within sandbox \"e9cd1d38c9b59af7a97b3868161361ef70e47575a96a55ae032171bf4ca67a88\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3b2a82494e770d28bc3d4d361bc903fc03d4e4c09bb9019ba6ca3de633228abe\""
May 14 00:00:45.091624 containerd[1514]: time="2025-05-14T00:00:45.091579488Z" level=info msg="StartContainer for \"3b2a82494e770d28bc3d4d361bc903fc03d4e4c09bb9019ba6ca3de633228abe\""
May 14 00:00:45.121860 systemd[1]: Started cri-containerd-3b2a82494e770d28bc3d4d361bc903fc03d4e4c09bb9019ba6ca3de633228abe.scope - libcontainer container 3b2a82494e770d28bc3d4d361bc903fc03d4e4c09bb9019ba6ca3de633228abe.
May 14 00:00:45.158341 containerd[1514]: time="2025-05-14T00:00:45.158283708Z" level=info msg="StartContainer for \"3b2a82494e770d28bc3d4d361bc903fc03d4e4c09bb9019ba6ca3de633228abe\" returns successfully"
May 14 00:00:45.187344 kubelet[2707]: I0514 00:00:45.187292 2707 topology_manager.go:215] "Topology Admit Handler" podUID="ccb9f97d-932f-4b19-8d3a-56774eb6810a" podNamespace="tigera-operator" podName="tigera-operator-797db67f8-gpt54"
May 14 00:00:45.190839 kubelet[2707]: W0514 00:00:45.189132 2707 reflector.go:547] object-"tigera-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'localhost' and this object
May 14 00:00:45.190839 kubelet[2707]: E0514 00:00:45.190287 2707 reflector.go:150] object-"tigera-operator"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'localhost' and this object
May 14 00:00:45.198184 systemd[1]: Created slice kubepods-besteffort-podccb9f97d_932f_4b19_8d3a_56774eb6810a.slice - libcontainer container kubepods-besteffort-podccb9f97d_932f_4b19_8d3a_56774eb6810a.slice.
May 14 00:00:45.216659 kubelet[2707]: E0514 00:00:45.216630 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:00:45.225142 kubelet[2707]: I0514 00:00:45.225075 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wtdj8" podStartSLOduration=1.2250560400000001 podStartE2EDuration="1.22505604s" podCreationTimestamp="2025-05-14 00:00:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:00:45.224660236 +0000 UTC m=+17.168891141" watchObservedRunningTime="2025-05-14 00:00:45.22505604 +0000 UTC m=+17.169286945"
May 14 00:00:45.230549 kubelet[2707]: I0514 00:00:45.230481 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ccb9f97d-932f-4b19-8d3a-56774eb6810a-var-lib-calico\") pod \"tigera-operator-797db67f8-gpt54\" (UID: \"ccb9f97d-932f-4b19-8d3a-56774eb6810a\") " pod="tigera-operator/tigera-operator-797db67f8-gpt54"
May 14 00:00:45.230549 kubelet[2707]: I0514 00:00:45.230536 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj87s\" (UniqueName: \"kubernetes.io/projected/ccb9f97d-932f-4b19-8d3a-56774eb6810a-kube-api-access-hj87s\") pod \"tigera-operator-797db67f8-gpt54\" (UID: \"ccb9f97d-932f-4b19-8d3a-56774eb6810a\") " pod="tigera-operator/tigera-operator-797db67f8-gpt54"
May 14 00:00:46.405905 containerd[1514]: time="2025-05-14T00:00:46.405245463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-gpt54,Uid:ccb9f97d-932f-4b19-8d3a-56774eb6810a,Namespace:tigera-operator,Attempt:0,}"
May 14 00:00:46.522867 containerd[1514]: time="2025-05-14T00:00:46.522562342Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 14 00:00:46.522867 containerd[1514]: time="2025-05-14T00:00:46.522795610Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 14 00:00:46.522867 containerd[1514]: time="2025-05-14T00:00:46.522818972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 14 00:00:46.523911 containerd[1514]: time="2025-05-14T00:00:46.523758066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 14 00:00:46.555207 systemd[1]: Started cri-containerd-944870ee7e118a5125460bad719ab862f8a3ecc4ca44aa50b1ee0c7b8067f78b.scope - libcontainer container 944870ee7e118a5125460bad719ab862f8a3ecc4ca44aa50b1ee0c7b8067f78b.
May 14 00:00:46.622709 containerd[1514]: time="2025-05-14T00:00:46.622624591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-gpt54,Uid:ccb9f97d-932f-4b19-8d3a-56774eb6810a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"944870ee7e118a5125460bad719ab862f8a3ecc4ca44aa50b1ee0c7b8067f78b\""
May 14 00:00:46.627296 containerd[1514]: time="2025-05-14T00:00:46.626545784Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\""
May 14 00:00:49.196382 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3771012446.mount: Deactivated successfully.
May 14 00:00:49.638310 containerd[1514]: time="2025-05-14T00:00:49.638235663Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:00:49.639226 containerd[1514]: time="2025-05-14T00:00:49.639184289Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662"
May 14 00:00:49.640330 containerd[1514]: time="2025-05-14T00:00:49.640281590Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:00:49.642807 containerd[1514]: time="2025-05-14T00:00:49.642769630Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:00:49.643483 containerd[1514]: time="2025-05-14T00:00:49.643454513Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 3.016864631s"
May 14 00:00:49.643522 containerd[1514]: time="2025-05-14T00:00:49.643482944Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\""
May 14 00:00:49.658730 containerd[1514]: time="2025-05-14T00:00:49.658685833Z" level=info msg="CreateContainer within sandbox \"944870ee7e118a5125460bad719ab862f8a3ecc4ca44aa50b1ee0c7b8067f78b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
May 14 00:00:49.675873 containerd[1514]: time="2025-05-14T00:00:49.675827387Z" level=info msg="CreateContainer within sandbox \"944870ee7e118a5125460bad719ab862f8a3ecc4ca44aa50b1ee0c7b8067f78b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"400c09301cc9203f18bdcd0b9d47d6239f75da14bf7ac921cb76ec12f6ce00f0\""
May 14 00:00:49.679427 containerd[1514]: time="2025-05-14T00:00:49.679393836Z" level=info msg="StartContainer for \"400c09301cc9203f18bdcd0b9d47d6239f75da14bf7ac921cb76ec12f6ce00f0\""
May 14 00:00:49.706845 systemd[1]: Started cri-containerd-400c09301cc9203f18bdcd0b9d47d6239f75da14bf7ac921cb76ec12f6ce00f0.scope - libcontainer container 400c09301cc9203f18bdcd0b9d47d6239f75da14bf7ac921cb76ec12f6ce00f0.
May 14 00:00:49.736984 containerd[1514]: time="2025-05-14T00:00:49.736931612Z" level=info msg="StartContainer for \"400c09301cc9203f18bdcd0b9d47d6239f75da14bf7ac921cb76ec12f6ce00f0\" returns successfully"
May 14 00:00:50.256065 kubelet[2707]: I0514 00:00:50.255995 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-797db67f8-gpt54" podStartSLOduration=2.23279191 podStartE2EDuration="5.255976203s" podCreationTimestamp="2025-05-14 00:00:45 +0000 UTC" firstStartedPulling="2025-05-14 00:00:46.625229623 +0000 UTC m=+18.569460528" lastFinishedPulling="2025-05-14 00:00:49.648413916 +0000 UTC m=+21.592644821" observedRunningTime="2025-05-14 00:00:50.255728718 +0000 UTC m=+22.199959633" watchObservedRunningTime="2025-05-14 00:00:50.255976203 +0000 UTC m=+22.200207119"
May 14 00:00:52.900427 kubelet[2707]: I0514 00:00:52.900367 2707 topology_manager.go:215] "Topology Admit Handler" podUID="0bf0ab37-3c1a-44d7-904e-2d6ebf80bd03" podNamespace="calico-system" podName="calico-typha-866fd5d844-zx2qs"
May 14 00:00:52.910320 systemd[1]: Created slice kubepods-besteffort-pod0bf0ab37_3c1a_44d7_904e_2d6ebf80bd03.slice - libcontainer container kubepods-besteffort-pod0bf0ab37_3c1a_44d7_904e_2d6ebf80bd03.slice.
May 14 00:00:52.946537 kubelet[2707]: I0514 00:00:52.946494 2707 topology_manager.go:215] "Topology Admit Handler" podUID="9d158af9-5ed0-4ac0-b32e-710584d14562" podNamespace="calico-system" podName="calico-node-9mg69"
May 14 00:00:52.955205 systemd[1]: Created slice kubepods-besteffort-pod9d158af9_5ed0_4ac0_b32e_710584d14562.slice - libcontainer container kubepods-besteffort-pod9d158af9_5ed0_4ac0_b32e_710584d14562.slice.
May 14 00:00:52.994470 kubelet[2707]: I0514 00:00:52.994411 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gw8js\" (UniqueName: \"kubernetes.io/projected/9d158af9-5ed0-4ac0-b32e-710584d14562-kube-api-access-gw8js\") pod \"calico-node-9mg69\" (UID: \"9d158af9-5ed0-4ac0-b32e-710584d14562\") " pod="calico-system/calico-node-9mg69"
May 14 00:00:52.994470 kubelet[2707]: I0514 00:00:52.994472 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d158af9-5ed0-4ac0-b32e-710584d14562-lib-modules\") pod \"calico-node-9mg69\" (UID: \"9d158af9-5ed0-4ac0-b32e-710584d14562\") " pod="calico-system/calico-node-9mg69"
May 14 00:00:52.994649 kubelet[2707]: I0514 00:00:52.994495 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9d158af9-5ed0-4ac0-b32e-710584d14562-flexvol-driver-host\") pod \"calico-node-9mg69\" (UID: \"9d158af9-5ed0-4ac0-b32e-710584d14562\") " pod="calico-system/calico-node-9mg69"
May 14 00:00:52.994649 kubelet[2707]: I0514 00:00:52.994512 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9d158af9-5ed0-4ac0-b32e-710584d14562-cni-bin-dir\") pod \"calico-node-9mg69\" (UID: \"9d158af9-5ed0-4ac0-b32e-710584d14562\") " pod="calico-system/calico-node-9mg69"
May 14 00:00:52.994649 kubelet[2707]: I0514 00:00:52.994528 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0bf0ab37-3c1a-44d7-904e-2d6ebf80bd03-tigera-ca-bundle\") pod \"calico-typha-866fd5d844-zx2qs\" (UID: \"0bf0ab37-3c1a-44d7-904e-2d6ebf80bd03\") " pod="calico-system/calico-typha-866fd5d844-zx2qs"
May 14 00:00:52.994649 kubelet[2707]: I0514 00:00:52.994542 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/0bf0ab37-3c1a-44d7-904e-2d6ebf80bd03-typha-certs\") pod \"calico-typha-866fd5d844-zx2qs\" (UID: \"0bf0ab37-3c1a-44d7-904e-2d6ebf80bd03\") " pod="calico-system/calico-typha-866fd5d844-zx2qs"
May 14 00:00:52.994649 kubelet[2707]: I0514 00:00:52.994557 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9d158af9-5ed0-4ac0-b32e-710584d14562-var-run-calico\") pod \"calico-node-9mg69\" (UID: \"9d158af9-5ed0-4ac0-b32e-710584d14562\") " pod="calico-system/calico-node-9mg69"
May 14 00:00:52.994808 kubelet[2707]: I0514 00:00:52.994570 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9d158af9-5ed0-4ac0-b32e-710584d14562-var-lib-calico\") pod \"calico-node-9mg69\" (UID: \"9d158af9-5ed0-4ac0-b32e-710584d14562\") " pod="calico-system/calico-node-9mg69"
May 14 00:00:52.994808 kubelet[2707]: I0514 00:00:52.994586 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vk2vm\" (UniqueName: \"kubernetes.io/projected/0bf0ab37-3c1a-44d7-904e-2d6ebf80bd03-kube-api-access-vk2vm\") pod \"calico-typha-866fd5d844-zx2qs\" (UID: \"0bf0ab37-3c1a-44d7-904e-2d6ebf80bd03\") " pod="calico-system/calico-typha-866fd5d844-zx2qs"
May 14 00:00:52.994808 kubelet[2707]: I0514 00:00:52.994602 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9d158af9-5ed0-4ac0-b32e-710584d14562-cni-net-dir\") pod \"calico-node-9mg69\" (UID: \"9d158af9-5ed0-4ac0-b32e-710584d14562\") " pod="calico-system/calico-node-9mg69"
May 14 00:00:52.994808 kubelet[2707]: I0514 00:00:52.994617 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9d158af9-5ed0-4ac0-b32e-710584d14562-cni-log-dir\") pod \"calico-node-9mg69\" (UID: \"9d158af9-5ed0-4ac0-b32e-710584d14562\") " pod="calico-system/calico-node-9mg69"
May 14 00:00:52.994808 kubelet[2707]: I0514 00:00:52.994632 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d158af9-5ed0-4ac0-b32e-710584d14562-tigera-ca-bundle\") pod \"calico-node-9mg69\" (UID: \"9d158af9-5ed0-4ac0-b32e-710584d14562\") " pod="calico-system/calico-node-9mg69"
May 14 00:00:52.994944 kubelet[2707]: I0514 00:00:52.994648 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9d158af9-5ed0-4ac0-b32e-710584d14562-xtables-lock\") pod \"calico-node-9mg69\" (UID: \"9d158af9-5ed0-4ac0-b32e-710584d14562\") " pod="calico-system/calico-node-9mg69"
May 14 00:00:52.994944 kubelet[2707]: I0514 00:00:52.994662 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9d158af9-5ed0-4ac0-b32e-710584d14562-policysync\") pod \"calico-node-9mg69\" (UID: \"9d158af9-5ed0-4ac0-b32e-710584d14562\") " pod="calico-system/calico-node-9mg69"
May 14 00:00:52.994944 kubelet[2707]: I0514 00:00:52.994692 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9d158af9-5ed0-4ac0-b32e-710584d14562-node-certs\") pod \"calico-node-9mg69\" (UID: \"9d158af9-5ed0-4ac0-b32e-710584d14562\") " pod="calico-system/calico-node-9mg69"
May 14 00:00:53.057381 kubelet[2707]: I0514 00:00:53.057334 2707 topology_manager.go:215] "Topology Admit Handler" podUID="fff7752d-bd43-4c8e-a187-4d071bd5cd0b" podNamespace="calico-system" podName="csi-node-driver-s8nw4"
May 14 00:00:53.057662 kubelet[2707]: E0514 00:00:53.057639 2707 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s8nw4" podUID="fff7752d-bd43-4c8e-a187-4d071bd5cd0b"
May 14 00:00:53.096416 kubelet[2707]: I0514 00:00:53.095501 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lb75h\" (UniqueName: \"kubernetes.io/projected/fff7752d-bd43-4c8e-a187-4d071bd5cd0b-kube-api-access-lb75h\") pod \"csi-node-driver-s8nw4\" (UID: \"fff7752d-bd43-4c8e-a187-4d071bd5cd0b\") " pod="calico-system/csi-node-driver-s8nw4"
May 14 00:00:53.096416 kubelet[2707]: I0514 00:00:53.095556 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fff7752d-bd43-4c8e-a187-4d071bd5cd0b-socket-dir\") pod \"csi-node-driver-s8nw4\" (UID: \"fff7752d-bd43-4c8e-a187-4d071bd5cd0b\") " pod="calico-system/csi-node-driver-s8nw4"
May 14 00:00:53.096416 kubelet[2707]: I0514 00:00:53.095646 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fff7752d-bd43-4c8e-a187-4d071bd5cd0b-kubelet-dir\") pod \"csi-node-driver-s8nw4\" (UID: \"fff7752d-bd43-4c8e-a187-4d071bd5cd0b\") " pod="calico-system/csi-node-driver-s8nw4"
May 14 00:00:53.096416 kubelet[2707]: I0514 00:00:53.095733 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/fff7752d-bd43-4c8e-a187-4d071bd5cd0b-varrun\") pod \"csi-node-driver-s8nw4\" (UID: \"fff7752d-bd43-4c8e-a187-4d071bd5cd0b\") " pod="calico-system/csi-node-driver-s8nw4"
May 14 00:00:53.096416 kubelet[2707]: I0514 00:00:53.095767 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fff7752d-bd43-4c8e-a187-4d071bd5cd0b-registration-dir\") pod \"csi-node-driver-s8nw4\" (UID: \"fff7752d-bd43-4c8e-a187-4d071bd5cd0b\") " pod="calico-system/csi-node-driver-s8nw4"
May 14 00:00:53.097751 kubelet[2707]: E0514 00:00:53.097710 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 00:00:53.097751 kubelet[2707]: W0514 00:00:53.097744 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 00:00:53.097848 kubelet[2707]: E0514 00:00:53.097765 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 00:00:53.098797 kubelet[2707]: E0514 00:00:53.098752 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 00:00:53.098797 kubelet[2707]: W0514 00:00:53.098771 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 00:00:53.098797 kubelet[2707]: E0514 00:00:53.098784 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 00:00:53.103605 kubelet[2707]: E0514 00:00:53.103581 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 00:00:53.103734 kubelet[2707]: W0514 00:00:53.103716 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 00:00:53.103821 kubelet[2707]: E0514 00:00:53.103804 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" May 14 00:00:53.106661 kubelet[2707]: E0514 00:00:53.106488 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:53.106661 kubelet[2707]: W0514 00:00:53.106506 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:53.106661 kubelet[2707]: E0514 00:00:53.106533 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:53.109106 kubelet[2707]: E0514 00:00:53.109088 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:53.109202 kubelet[2707]: W0514 00:00:53.109186 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:53.109279 kubelet[2707]: E0514 00:00:53.109264 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:53.113403 kubelet[2707]: E0514 00:00:53.113371 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:53.113403 kubelet[2707]: W0514 00:00:53.113394 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:53.113517 kubelet[2707]: E0514 00:00:53.113416 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:53.196518 kubelet[2707]: E0514 00:00:53.196396 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:53.196518 kubelet[2707]: W0514 00:00:53.196422 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:53.196518 kubelet[2707]: E0514 00:00:53.196448 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:53.196783 kubelet[2707]: E0514 00:00:53.196742 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:53.196783 kubelet[2707]: W0514 00:00:53.196750 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:53.196783 kubelet[2707]: E0514 00:00:53.196762 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:53.197171 kubelet[2707]: E0514 00:00:53.197147 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:53.197218 kubelet[2707]: W0514 00:00:53.197195 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:53.197256 kubelet[2707]: E0514 00:00:53.197221 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:53.197462 kubelet[2707]: E0514 00:00:53.197443 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:53.197462 kubelet[2707]: W0514 00:00:53.197453 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:53.197462 kubelet[2707]: E0514 00:00:53.197462 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:53.197742 kubelet[2707]: E0514 00:00:53.197727 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:53.197742 kubelet[2707]: W0514 00:00:53.197740 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:53.197837 kubelet[2707]: E0514 00:00:53.197752 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:53.198161 kubelet[2707]: E0514 00:00:53.198142 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:53.198161 kubelet[2707]: W0514 00:00:53.198157 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:53.198265 kubelet[2707]: E0514 00:00:53.198174 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:53.198512 kubelet[2707]: E0514 00:00:53.198497 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:53.198512 kubelet[2707]: W0514 00:00:53.198509 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:53.198605 kubelet[2707]: E0514 00:00:53.198526 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:53.198781 kubelet[2707]: E0514 00:00:53.198754 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:53.198781 kubelet[2707]: W0514 00:00:53.198766 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:53.198781 kubelet[2707]: E0514 00:00:53.198779 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:53.199002 kubelet[2707]: E0514 00:00:53.198987 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:53.199002 kubelet[2707]: W0514 00:00:53.199000 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:53.199084 kubelet[2707]: E0514 00:00:53.199014 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:53.199266 kubelet[2707]: E0514 00:00:53.199251 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:53.199266 kubelet[2707]: W0514 00:00:53.199262 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:53.199354 kubelet[2707]: E0514 00:00:53.199303 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:53.199521 kubelet[2707]: E0514 00:00:53.199506 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:53.199521 kubelet[2707]: W0514 00:00:53.199518 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:53.199597 kubelet[2707]: E0514 00:00:53.199538 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:53.199841 kubelet[2707]: E0514 00:00:53.199822 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:53.199841 kubelet[2707]: W0514 00:00:53.199835 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:53.200011 kubelet[2707]: E0514 00:00:53.199866 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:53.200405 kubelet[2707]: E0514 00:00:53.200373 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:53.200459 kubelet[2707]: W0514 00:00:53.200421 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:53.200459 kubelet[2707]: E0514 00:00:53.200448 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:53.200790 kubelet[2707]: E0514 00:00:53.200725 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:53.200790 kubelet[2707]: W0514 00:00:53.200741 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:53.200790 kubelet[2707]: E0514 00:00:53.200759 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:53.201104 kubelet[2707]: E0514 00:00:53.201069 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:53.201159 kubelet[2707]: W0514 00:00:53.201116 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:53.201159 kubelet[2707]: E0514 00:00:53.201138 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:53.201427 kubelet[2707]: E0514 00:00:53.201411 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:53.201427 kubelet[2707]: W0514 00:00:53.201423 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:53.201536 kubelet[2707]: E0514 00:00:53.201521 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:53.201708 kubelet[2707]: E0514 00:00:53.201684 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:53.201708 kubelet[2707]: W0514 00:00:53.201699 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:53.201853 kubelet[2707]: E0514 00:00:53.201827 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:53.201925 kubelet[2707]: E0514 00:00:53.201912 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:53.201925 kubelet[2707]: W0514 00:00:53.201923 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:53.201972 kubelet[2707]: E0514 00:00:53.201939 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:53.202203 kubelet[2707]: E0514 00:00:53.202188 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:53.202203 kubelet[2707]: W0514 00:00:53.202201 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:53.202271 kubelet[2707]: E0514 00:00:53.202217 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:53.202468 kubelet[2707]: E0514 00:00:53.202451 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:53.202468 kubelet[2707]: W0514 00:00:53.202464 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:53.202546 kubelet[2707]: E0514 00:00:53.202481 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:53.202760 kubelet[2707]: E0514 00:00:53.202744 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:53.202760 kubelet[2707]: W0514 00:00:53.202757 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:53.202847 kubelet[2707]: E0514 00:00:53.202774 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:53.203070 kubelet[2707]: E0514 00:00:53.203010 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:53.203070 kubelet[2707]: W0514 00:00:53.203020 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:53.203287 kubelet[2707]: E0514 00:00:53.203101 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:53.203287 kubelet[2707]: E0514 00:00:53.203269 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:53.203287 kubelet[2707]: W0514 00:00:53.203282 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:53.203381 kubelet[2707]: E0514 00:00:53.203295 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:53.203561 kubelet[2707]: E0514 00:00:53.203542 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:53.203561 kubelet[2707]: W0514 00:00:53.203554 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:53.203561 kubelet[2707]: E0514 00:00:53.203562 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:53.203877 kubelet[2707]: E0514 00:00:53.203848 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:53.203877 kubelet[2707]: W0514 00:00:53.203866 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:53.203877 kubelet[2707]: E0514 00:00:53.203877 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:53.210637 kubelet[2707]: E0514 00:00:53.210530 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:53.210637 kubelet[2707]: W0514 00:00:53.210555 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:53.210637 kubelet[2707]: E0514 00:00:53.210579 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:53.214708 kubelet[2707]: E0514 00:00:53.214662 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:00:53.215376 containerd[1514]: time="2025-05-14T00:00:53.215314076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-866fd5d844-zx2qs,Uid:0bf0ab37-3c1a-44d7-904e-2d6ebf80bd03,Namespace:calico-system,Attempt:0,}" May 14 00:00:53.240343 containerd[1514]: time="2025-05-14T00:00:53.240203021Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 00:00:53.240501 containerd[1514]: time="2025-05-14T00:00:53.240376942Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 00:00:53.240501 containerd[1514]: time="2025-05-14T00:00:53.240455890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:00:53.241363 containerd[1514]: time="2025-05-14T00:00:53.241285813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:00:53.259229 kubelet[2707]: E0514 00:00:53.258994 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:00:53.260554 containerd[1514]: time="2025-05-14T00:00:53.259345986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9mg69,Uid:9d158af9-5ed0-4ac0-b32e-710584d14562,Namespace:calico-system,Attempt:0,}" May 14 00:00:53.262861 systemd[1]: Started cri-containerd-5748de2bb9001e8b7b549310a535644c473445ac20f6f91ebdbc602c3824db24.scope - libcontainer container 5748de2bb9001e8b7b549310a535644c473445ac20f6f91ebdbc602c3824db24. May 14 00:00:53.290640 containerd[1514]: time="2025-05-14T00:00:53.290530220Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 00:00:53.290640 containerd[1514]: time="2025-05-14T00:00:53.290585259Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 00:00:53.290640 containerd[1514]: time="2025-05-14T00:00:53.290598376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:00:53.290876 containerd[1514]: time="2025-05-14T00:00:53.290697929Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:00:53.313911 systemd[1]: Started cri-containerd-bc4537e4ad0d97cc297accc4acc5b39ae8043689d3e15efd0ef76fc189c36e51.scope - libcontainer container bc4537e4ad0d97cc297accc4acc5b39ae8043689d3e15efd0ef76fc189c36e51. 
May 14 00:00:53.314400 containerd[1514]: time="2025-05-14T00:00:53.314357508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-866fd5d844-zx2qs,Uid:0bf0ab37-3c1a-44d7-904e-2d6ebf80bd03,Namespace:calico-system,Attempt:0,} returns sandbox id \"5748de2bb9001e8b7b549310a535644c473445ac20f6f91ebdbc602c3824db24\"" May 14 00:00:53.315328 kubelet[2707]: E0514 00:00:53.315301 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:00:53.317791 containerd[1514]: time="2025-05-14T00:00:53.317654238Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 14 00:00:53.343051 containerd[1514]: time="2025-05-14T00:00:53.342962316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9mg69,Uid:9d158af9-5ed0-4ac0-b32e-710584d14562,Namespace:calico-system,Attempt:0,} returns sandbox id \"bc4537e4ad0d97cc297accc4acc5b39ae8043689d3e15efd0ef76fc189c36e51\"" May 14 00:00:53.344250 kubelet[2707]: E0514 00:00:53.343714 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:00:54.147196 kubelet[2707]: E0514 00:00:54.147150 2707 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s8nw4" podUID="fff7752d-bd43-4c8e-a187-4d071bd5cd0b" May 14 00:00:56.146783 kubelet[2707]: E0514 00:00:56.146658 2707 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s8nw4" 
podUID="fff7752d-bd43-4c8e-a187-4d071bd5cd0b" May 14 00:00:56.828611 containerd[1514]: time="2025-05-14T00:00:56.828551995Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:00:56.829583 containerd[1514]: time="2025-05-14T00:00:56.829353486Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870" May 14 00:00:56.830707 containerd[1514]: time="2025-05-14T00:00:56.830651584Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:00:56.833309 containerd[1514]: time="2025-05-14T00:00:56.833261820Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:00:56.833969 containerd[1514]: time="2025-05-14T00:00:56.833897782Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 3.516179358s" May 14 00:00:56.833969 containerd[1514]: time="2025-05-14T00:00:56.833944611Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\"" May 14 00:00:56.836890 containerd[1514]: time="2025-05-14T00:00:56.836243008Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 14 00:00:56.844839 containerd[1514]: time="2025-05-14T00:00:56.844793976Z" level=info msg="CreateContainer within sandbox 
\"5748de2bb9001e8b7b549310a535644c473445ac20f6f91ebdbc602c3824db24\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 14 00:00:56.866859 containerd[1514]: time="2025-05-14T00:00:56.866800105Z" level=info msg="CreateContainer within sandbox \"5748de2bb9001e8b7b549310a535644c473445ac20f6f91ebdbc602c3824db24\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"0e3c1b39da2aeb5f28ea5cec8979884838a52373dd39c2b99c6cca0f60cbf420\"" May 14 00:00:56.867355 containerd[1514]: time="2025-05-14T00:00:56.867326926Z" level=info msg="StartContainer for \"0e3c1b39da2aeb5f28ea5cec8979884838a52373dd39c2b99c6cca0f60cbf420\"" May 14 00:00:56.895869 systemd[1]: Started cri-containerd-0e3c1b39da2aeb5f28ea5cec8979884838a52373dd39c2b99c6cca0f60cbf420.scope - libcontainer container 0e3c1b39da2aeb5f28ea5cec8979884838a52373dd39c2b99c6cca0f60cbf420. May 14 00:00:57.051207 containerd[1514]: time="2025-05-14T00:00:57.051151651Z" level=info msg="StartContainer for \"0e3c1b39da2aeb5f28ea5cec8979884838a52373dd39c2b99c6cca0f60cbf420\" returns successfully" May 14 00:00:57.249288 kubelet[2707]: E0514 00:00:57.248755 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:00:57.295582 kubelet[2707]: E0514 00:00:57.295539 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:57.295582 kubelet[2707]: W0514 00:00:57.295564 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:57.295582 kubelet[2707]: E0514 00:00:57.295587 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:57.296000 kubelet[2707]: E0514 00:00:57.295987 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:57.296054 kubelet[2707]: W0514 00:00:57.296000 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:57.296054 kubelet[2707]: E0514 00:00:57.296022 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:57.296412 kubelet[2707]: E0514 00:00:57.296393 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:57.296412 kubelet[2707]: W0514 00:00:57.296408 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:57.296533 kubelet[2707]: E0514 00:00:57.296419 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:57.296697 kubelet[2707]: E0514 00:00:57.296685 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:57.296697 kubelet[2707]: W0514 00:00:57.296695 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:57.296802 kubelet[2707]: E0514 00:00:57.296706 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:57.296998 kubelet[2707]: E0514 00:00:57.296975 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:57.296998 kubelet[2707]: W0514 00:00:57.296987 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:57.296998 kubelet[2707]: E0514 00:00:57.296996 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:57.297378 kubelet[2707]: E0514 00:00:57.297344 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:57.297378 kubelet[2707]: W0514 00:00:57.297378 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:57.297461 kubelet[2707]: E0514 00:00:57.297410 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:57.297744 kubelet[2707]: E0514 00:00:57.297723 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:57.297744 kubelet[2707]: W0514 00:00:57.297735 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:57.297832 kubelet[2707]: E0514 00:00:57.297748 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:57.298030 kubelet[2707]: E0514 00:00:57.297994 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:57.298030 kubelet[2707]: W0514 00:00:57.298008 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:57.298030 kubelet[2707]: E0514 00:00:57.298020 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:57.298260 kubelet[2707]: E0514 00:00:57.298244 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:57.298260 kubelet[2707]: W0514 00:00:57.298257 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:57.298403 kubelet[2707]: E0514 00:00:57.298269 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:57.298526 kubelet[2707]: E0514 00:00:57.298488 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:57.298526 kubelet[2707]: W0514 00:00:57.298499 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:57.298526 kubelet[2707]: E0514 00:00:57.298511 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:57.298863 kubelet[2707]: E0514 00:00:57.298826 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:57.298863 kubelet[2707]: W0514 00:00:57.298849 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:57.298954 kubelet[2707]: E0514 00:00:57.298867 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:57.299170 kubelet[2707]: E0514 00:00:57.299154 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:57.299170 kubelet[2707]: W0514 00:00:57.299167 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:57.299229 kubelet[2707]: E0514 00:00:57.299177 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:57.299371 kubelet[2707]: E0514 00:00:57.299359 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:57.299371 kubelet[2707]: W0514 00:00:57.299368 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:57.299492 kubelet[2707]: E0514 00:00:57.299376 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:57.299633 kubelet[2707]: E0514 00:00:57.299614 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:57.299633 kubelet[2707]: W0514 00:00:57.299627 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:57.299720 kubelet[2707]: E0514 00:00:57.299638 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:57.299997 kubelet[2707]: E0514 00:00:57.299967 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:57.299997 kubelet[2707]: W0514 00:00:57.299982 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:57.299997 kubelet[2707]: E0514 00:00:57.299995 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:57.330765 kubelet[2707]: E0514 00:00:57.330531 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:57.330765 kubelet[2707]: W0514 00:00:57.330555 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:57.330765 kubelet[2707]: E0514 00:00:57.330580 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:57.331107 kubelet[2707]: E0514 00:00:57.331053 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:57.331107 kubelet[2707]: W0514 00:00:57.331090 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:57.331213 kubelet[2707]: E0514 00:00:57.331126 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:57.331512 kubelet[2707]: E0514 00:00:57.331482 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:57.331512 kubelet[2707]: W0514 00:00:57.331508 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:57.331609 kubelet[2707]: E0514 00:00:57.331526 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:57.331745 kubelet[2707]: E0514 00:00:57.331731 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:57.331745 kubelet[2707]: W0514 00:00:57.331741 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:57.331851 kubelet[2707]: E0514 00:00:57.331754 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:57.332013 kubelet[2707]: E0514 00:00:57.331998 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:57.332013 kubelet[2707]: W0514 00:00:57.332010 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:57.332104 kubelet[2707]: E0514 00:00:57.332027 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:57.332270 kubelet[2707]: E0514 00:00:57.332254 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:57.332270 kubelet[2707]: W0514 00:00:57.332266 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:57.332382 kubelet[2707]: E0514 00:00:57.332281 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:57.332813 kubelet[2707]: E0514 00:00:57.332775 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:57.332871 kubelet[2707]: W0514 00:00:57.332811 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:57.332871 kubelet[2707]: E0514 00:00:57.332845 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:57.333240 kubelet[2707]: E0514 00:00:57.333222 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:57.333240 kubelet[2707]: W0514 00:00:57.333236 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:57.333337 kubelet[2707]: E0514 00:00:57.333309 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:57.333493 kubelet[2707]: E0514 00:00:57.333477 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:57.333493 kubelet[2707]: W0514 00:00:57.333490 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:57.333566 kubelet[2707]: E0514 00:00:57.333532 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:57.333764 kubelet[2707]: E0514 00:00:57.333749 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:57.333841 kubelet[2707]: W0514 00:00:57.333808 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:57.333841 kubelet[2707]: E0514 00:00:57.333833 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:57.334107 kubelet[2707]: E0514 00:00:57.334076 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:57.334107 kubelet[2707]: W0514 00:00:57.334089 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:57.334188 kubelet[2707]: E0514 00:00:57.334107 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:57.334327 kubelet[2707]: E0514 00:00:57.334310 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:57.334327 kubelet[2707]: W0514 00:00:57.334324 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:57.334397 kubelet[2707]: E0514 00:00:57.334349 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:57.334606 kubelet[2707]: E0514 00:00:57.334570 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:57.334606 kubelet[2707]: W0514 00:00:57.334601 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:57.334737 kubelet[2707]: E0514 00:00:57.334619 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:57.335014 kubelet[2707]: E0514 00:00:57.334989 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:57.335014 kubelet[2707]: W0514 00:00:57.335002 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:57.335014 kubelet[2707]: E0514 00:00:57.335017 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:57.335271 kubelet[2707]: E0514 00:00:57.335253 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:57.335271 kubelet[2707]: W0514 00:00:57.335265 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:57.335347 kubelet[2707]: E0514 00:00:57.335289 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:57.336001 kubelet[2707]: E0514 00:00:57.335982 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:57.336001 kubelet[2707]: W0514 00:00:57.335994 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:57.336093 kubelet[2707]: E0514 00:00:57.336009 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:57.336395 kubelet[2707]: E0514 00:00:57.336358 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:57.336395 kubelet[2707]: W0514 00:00:57.336389 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:57.336468 kubelet[2707]: E0514 00:00:57.336409 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:57.336710 kubelet[2707]: E0514 00:00:57.336665 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:57.336710 kubelet[2707]: W0514 00:00:57.336707 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:57.336793 kubelet[2707]: E0514 00:00:57.336717 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:58.146977 kubelet[2707]: E0514 00:00:58.146910 2707 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s8nw4" podUID="fff7752d-bd43-4c8e-a187-4d071bd5cd0b" May 14 00:00:58.249663 kubelet[2707]: I0514 00:00:58.249626 2707 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 00:00:58.250322 kubelet[2707]: E0514 00:00:58.250290 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:00:58.308000 kubelet[2707]: E0514 00:00:58.307961 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:58.308000 kubelet[2707]: W0514 00:00:58.307982 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:58.308000 kubelet[2707]: E0514 00:00:58.308003 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:58.308314 kubelet[2707]: E0514 00:00:58.308299 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:58.308353 kubelet[2707]: W0514 00:00:58.308315 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:58.308353 kubelet[2707]: E0514 00:00:58.308327 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:58.308530 kubelet[2707]: E0514 00:00:58.308518 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:58.308576 kubelet[2707]: W0514 00:00:58.308529 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:58.308576 kubelet[2707]: E0514 00:00:58.308539 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:58.308790 kubelet[2707]: E0514 00:00:58.308778 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:58.308790 kubelet[2707]: W0514 00:00:58.308788 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:58.308855 kubelet[2707]: E0514 00:00:58.308798 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:58.309004 kubelet[2707]: E0514 00:00:58.308989 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:58.309004 kubelet[2707]: W0514 00:00:58.309000 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:58.309078 kubelet[2707]: E0514 00:00:58.309010 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:58.309195 kubelet[2707]: E0514 00:00:58.309182 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:58.309195 kubelet[2707]: W0514 00:00:58.309193 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:58.309253 kubelet[2707]: E0514 00:00:58.309204 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:58.309386 kubelet[2707]: E0514 00:00:58.309374 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:58.309431 kubelet[2707]: W0514 00:00:58.309384 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:58.309431 kubelet[2707]: E0514 00:00:58.309403 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:58.309591 kubelet[2707]: E0514 00:00:58.309577 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:58.309591 kubelet[2707]: W0514 00:00:58.309588 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:58.309657 kubelet[2707]: E0514 00:00:58.309598 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:58.309817 kubelet[2707]: E0514 00:00:58.309792 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:58.309817 kubelet[2707]: W0514 00:00:58.309804 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:58.309817 kubelet[2707]: E0514 00:00:58.309813 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:58.310066 kubelet[2707]: E0514 00:00:58.309981 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:58.310066 kubelet[2707]: W0514 00:00:58.309988 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:58.310066 kubelet[2707]: E0514 00:00:58.309997 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:58.310205 kubelet[2707]: E0514 00:00:58.310189 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:58.310205 kubelet[2707]: W0514 00:00:58.310199 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:58.310263 kubelet[2707]: E0514 00:00:58.310209 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:58.310387 kubelet[2707]: E0514 00:00:58.310373 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:58.310387 kubelet[2707]: W0514 00:00:58.310383 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:58.310450 kubelet[2707]: E0514 00:00:58.310391 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:58.310591 kubelet[2707]: E0514 00:00:58.310567 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:58.310591 kubelet[2707]: W0514 00:00:58.310577 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:58.310591 kubelet[2707]: E0514 00:00:58.310584 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:58.310805 kubelet[2707]: E0514 00:00:58.310782 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:58.310805 kubelet[2707]: W0514 00:00:58.310795 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:58.310805 kubelet[2707]: E0514 00:00:58.310803 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:58.310978 kubelet[2707]: E0514 00:00:58.310964 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:58.310978 kubelet[2707]: W0514 00:00:58.310974 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:58.311046 kubelet[2707]: E0514 00:00:58.310981 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:58.340312 kubelet[2707]: E0514 00:00:58.340268 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:58.340312 kubelet[2707]: W0514 00:00:58.340298 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:58.340540 kubelet[2707]: E0514 00:00:58.340343 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:58.340689 kubelet[2707]: E0514 00:00:58.340634 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:58.340689 kubelet[2707]: W0514 00:00:58.340647 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:58.340689 kubelet[2707]: E0514 00:00:58.340686 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:58.341297 kubelet[2707]: E0514 00:00:58.341247 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:58.341297 kubelet[2707]: W0514 00:00:58.341278 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:58.341297 kubelet[2707]: E0514 00:00:58.341293 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:58.341786 kubelet[2707]: E0514 00:00:58.341645 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:58.341940 kubelet[2707]: W0514 00:00:58.341690 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:58.342280 kubelet[2707]: E0514 00:00:58.342127 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:58.342577 kubelet[2707]: E0514 00:00:58.342542 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:58.342577 kubelet[2707]: W0514 00:00:58.342559 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:58.343470 kubelet[2707]: E0514 00:00:58.342580 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:58.343526 kubelet[2707]: E0514 00:00:58.342938 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:58.343526 kubelet[2707]: W0514 00:00:58.343491 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:58.343615 kubelet[2707]: E0514 00:00:58.343529 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:58.343836 kubelet[2707]: E0514 00:00:58.343819 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:58.343836 kubelet[2707]: W0514 00:00:58.343832 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:58.343944 kubelet[2707]: E0514 00:00:58.343914 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:58.344106 kubelet[2707]: E0514 00:00:58.344078 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:58.344247 kubelet[2707]: W0514 00:00:58.344218 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:58.344375 kubelet[2707]: E0514 00:00:58.344343 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:58.344597 kubelet[2707]: E0514 00:00:58.344568 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:58.344597 kubelet[2707]: W0514 00:00:58.344580 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:58.344797 kubelet[2707]: E0514 00:00:58.344767 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:58.346018 kubelet[2707]: E0514 00:00:58.345988 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:58.346018 kubelet[2707]: W0514 00:00:58.346012 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:58.346482 kubelet[2707]: E0514 00:00:58.346404 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:58.347223 kubelet[2707]: E0514 00:00:58.347186 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:58.347287 kubelet[2707]: W0514 00:00:58.347225 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:58.347327 kubelet[2707]: E0514 00:00:58.347293 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:58.347521 kubelet[2707]: E0514 00:00:58.347483 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:58.347521 kubelet[2707]: W0514 00:00:58.347496 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:58.347753 kubelet[2707]: E0514 00:00:58.347554 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:58.347922 kubelet[2707]: E0514 00:00:58.347902 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:58.347979 kubelet[2707]: W0514 00:00:58.347920 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:58.348142 kubelet[2707]: E0514 00:00:58.348072 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:58.348454 kubelet[2707]: E0514 00:00:58.348438 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:58.348454 kubelet[2707]: W0514 00:00:58.348451 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:58.348546 kubelet[2707]: E0514 00:00:58.348477 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:58.348865 kubelet[2707]: E0514 00:00:58.348842 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:58.348865 kubelet[2707]: W0514 00:00:58.348860 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:58.348944 kubelet[2707]: E0514 00:00:58.348871 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:58.351609 kubelet[2707]: E0514 00:00:58.351097 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:58.351609 kubelet[2707]: W0514 00:00:58.351120 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:58.351609 kubelet[2707]: E0514 00:00:58.351143 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 00:00:58.351793 kubelet[2707]: E0514 00:00:58.351741 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:58.351793 kubelet[2707]: W0514 00:00:58.351752 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:58.351963 kubelet[2707]: E0514 00:00:58.351939 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:58.354041 kubelet[2707]: E0514 00:00:58.352853 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:00:58.354041 kubelet[2707]: W0514 00:00:58.352870 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:00:58.354041 kubelet[2707]: E0514 00:00:58.352883 2707 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:00:58.366883 systemd[1]: Started sshd@7-10.0.0.99:22-10.0.0.1:59398.service - OpenSSH per-connection server daemon (10.0.0.1:59398). May 14 00:00:58.409149 sshd[3372]: Accepted publickey for core from 10.0.0.1 port 59398 ssh2: RSA SHA256:2Vys6akM3bwlRlykLnopippME/f1tLQVgpTw56u59EA May 14 00:00:58.411116 sshd-session[3372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:00:58.419038 systemd-logind[1492]: New session 8 of user core. May 14 00:00:58.428987 systemd[1]: Started session-8.scope - Session 8 of User core. 
May 14 00:00:58.554476 sshd[3374]: Connection closed by 10.0.0.1 port 59398 May 14 00:00:58.554859 sshd-session[3372]: pam_unix(sshd:session): session closed for user core May 14 00:00:58.559887 systemd[1]: sshd@7-10.0.0.99:22-10.0.0.1:59398.service: Deactivated successfully. May 14 00:00:58.561905 systemd[1]: session-8.scope: Deactivated successfully. May 14 00:00:58.562865 systemd-logind[1492]: Session 8 logged out. Waiting for processes to exit. May 14 00:00:58.564127 systemd-logind[1492]: Removed session 8. May 14 00:00:59.634378 containerd[1514]: time="2025-05-14T00:00:59.634221819Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:00:59.635508 containerd[1514]: time="2025-05-14T00:00:59.635439501Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937" May 14 00:00:59.636889 containerd[1514]: time="2025-05-14T00:00:59.636859868Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:00:59.639957 containerd[1514]: time="2025-05-14T00:00:59.639828246Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:00:59.640752 containerd[1514]: time="2025-05-14T00:00:59.640654941Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 2.80437711s" May 14 
00:00:59.640752 containerd[1514]: time="2025-05-14T00:00:59.640709416Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" May 14 00:00:59.642968 containerd[1514]: time="2025-05-14T00:00:59.642921084Z" level=info msg="CreateContainer within sandbox \"bc4537e4ad0d97cc297accc4acc5b39ae8043689d3e15efd0ef76fc189c36e51\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 14 00:00:59.672148 containerd[1514]: time="2025-05-14T00:00:59.672081062Z" level=info msg="CreateContainer within sandbox \"bc4537e4ad0d97cc297accc4acc5b39ae8043689d3e15efd0ef76fc189c36e51\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"aa93472a88c37340834c2990d4ad399d66187d8f38c85f18e08c561660569cb4\"" May 14 00:00:59.672829 containerd[1514]: time="2025-05-14T00:00:59.672783499Z" level=info msg="StartContainer for \"aa93472a88c37340834c2990d4ad399d66187d8f38c85f18e08c561660569cb4\"" May 14 00:00:59.715968 systemd[1]: Started cri-containerd-aa93472a88c37340834c2990d4ad399d66187d8f38c85f18e08c561660569cb4.scope - libcontainer container aa93472a88c37340834c2990d4ad399d66187d8f38c85f18e08c561660569cb4. May 14 00:00:59.757123 containerd[1514]: time="2025-05-14T00:00:59.757071569Z" level=info msg="StartContainer for \"aa93472a88c37340834c2990d4ad399d66187d8f38c85f18e08c561660569cb4\" returns successfully" May 14 00:00:59.771611 systemd[1]: cri-containerd-aa93472a88c37340834c2990d4ad399d66187d8f38c85f18e08c561660569cb4.scope: Deactivated successfully. May 14 00:00:59.799892 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aa93472a88c37340834c2990d4ad399d66187d8f38c85f18e08c561660569cb4-rootfs.mount: Deactivated successfully. 
May 14 00:01:00.147072 kubelet[2707]: E0514 00:01:00.146996 2707 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s8nw4" podUID="fff7752d-bd43-4c8e-a187-4d071bd5cd0b" May 14 00:01:00.255866 kubelet[2707]: E0514 00:01:00.255780 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:01:00.375596 kubelet[2707]: I0514 00:01:00.375503 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-866fd5d844-zx2qs" podStartSLOduration=4.856741864 podStartE2EDuration="8.375484047s" podCreationTimestamp="2025-05-14 00:00:52 +0000 UTC" firstStartedPulling="2025-05-14 00:00:53.317353806 +0000 UTC m=+25.261584711" lastFinishedPulling="2025-05-14 00:00:56.836095979 +0000 UTC m=+28.780326894" observedRunningTime="2025-05-14 00:00:57.264990852 +0000 UTC m=+29.209221757" watchObservedRunningTime="2025-05-14 00:01:00.375484047 +0000 UTC m=+32.319714962" May 14 00:01:00.584272 containerd[1514]: time="2025-05-14T00:01:00.583852046Z" level=info msg="shim disconnected" id=aa93472a88c37340834c2990d4ad399d66187d8f38c85f18e08c561660569cb4 namespace=k8s.io May 14 00:01:00.584272 containerd[1514]: time="2025-05-14T00:01:00.583910287Z" level=warning msg="cleaning up after shim disconnected" id=aa93472a88c37340834c2990d4ad399d66187d8f38c85f18e08c561660569cb4 namespace=k8s.io May 14 00:01:00.584272 containerd[1514]: time="2025-05-14T00:01:00.583920548Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 00:01:00.644690 containerd[1514]: time="2025-05-14T00:01:00.644613832Z" level=warning msg="cleanup warnings time=\"2025-05-14T00:01:00Z\" level=warning msg=\"failed to remove runc container\" 
error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 14 00:01:01.263736 kubelet[2707]: E0514 00:01:01.262181 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:01:01.268784 containerd[1514]: time="2025-05-14T00:01:01.268255673Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 14 00:01:02.147447 kubelet[2707]: E0514 00:01:02.147349 2707 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s8nw4" podUID="fff7752d-bd43-4c8e-a187-4d071bd5cd0b" May 14 00:01:03.595191 systemd[1]: Started sshd@8-10.0.0.99:22-10.0.0.1:59406.service - OpenSSH per-connection server daemon (10.0.0.1:59406). May 14 00:01:03.692925 sshd[3486]: Accepted publickey for core from 10.0.0.1 port 59406 ssh2: RSA SHA256:2Vys6akM3bwlRlykLnopippME/f1tLQVgpTw56u59EA May 14 00:01:03.695798 sshd-session[3486]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:01:03.712021 systemd-logind[1492]: New session 9 of user core. May 14 00:01:03.721257 systemd[1]: Started session-9.scope - Session 9 of User core. May 14 00:01:03.951507 sshd[3488]: Connection closed by 10.0.0.1 port 59406 May 14 00:01:03.952031 sshd-session[3486]: pam_unix(sshd:session): session closed for user core May 14 00:01:03.964457 systemd[1]: sshd@8-10.0.0.99:22-10.0.0.1:59406.service: Deactivated successfully. May 14 00:01:03.973413 systemd[1]: session-9.scope: Deactivated successfully. May 14 00:01:03.975392 systemd-logind[1492]: Session 9 logged out. Waiting for processes to exit. May 14 00:01:03.977821 systemd-logind[1492]: Removed session 9. 
May 14 00:01:04.149227 kubelet[2707]: E0514 00:01:04.149152 2707 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s8nw4" podUID="fff7752d-bd43-4c8e-a187-4d071bd5cd0b" May 14 00:01:06.149168 kubelet[2707]: E0514 00:01:06.149108 2707 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s8nw4" podUID="fff7752d-bd43-4c8e-a187-4d071bd5cd0b" May 14 00:01:07.710106 containerd[1514]: time="2025-05-14T00:01:07.710018639Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:07.731783 containerd[1514]: time="2025-05-14T00:01:07.731700931Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" May 14 00:01:07.774846 containerd[1514]: time="2025-05-14T00:01:07.774790402Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:07.788455 containerd[1514]: time="2025-05-14T00:01:07.788404352Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:07.789111 containerd[1514]: time="2025-05-14T00:01:07.789080716Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo 
digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 6.520773487s" May 14 00:01:07.789111 containerd[1514]: time="2025-05-14T00:01:07.789107572Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" May 14 00:01:07.791536 containerd[1514]: time="2025-05-14T00:01:07.791486393Z" level=info msg="CreateContainer within sandbox \"bc4537e4ad0d97cc297accc4acc5b39ae8043689d3e15efd0ef76fc189c36e51\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 14 00:01:07.960715 containerd[1514]: time="2025-05-14T00:01:07.960551774Z" level=info msg="CreateContainer within sandbox \"bc4537e4ad0d97cc297accc4acc5b39ae8043689d3e15efd0ef76fc189c36e51\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c69c478077df1589e1033a2358e3f95fe5d58bb568db41f224acacb161d583ee\"" May 14 00:01:07.961499 containerd[1514]: time="2025-05-14T00:01:07.961458512Z" level=info msg="StartContainer for \"c69c478077df1589e1033a2358e3f95fe5d58bb568db41f224acacb161d583ee\"" May 14 00:01:08.003915 systemd[1]: Started cri-containerd-c69c478077df1589e1033a2358e3f95fe5d58bb568db41f224acacb161d583ee.scope - libcontainer container c69c478077df1589e1033a2358e3f95fe5d58bb568db41f224acacb161d583ee. 
May 14 00:01:08.136011 containerd[1514]: time="2025-05-14T00:01:08.135894939Z" level=info msg="StartContainer for \"c69c478077df1589e1033a2358e3f95fe5d58bb568db41f224acacb161d583ee\" returns successfully" May 14 00:01:08.147194 kubelet[2707]: E0514 00:01:08.147124 2707 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s8nw4" podUID="fff7752d-bd43-4c8e-a187-4d071bd5cd0b" May 14 00:01:08.280312 kubelet[2707]: E0514 00:01:08.280182 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:01:08.967352 systemd[1]: Started sshd@9-10.0.0.99:22-10.0.0.1:56802.service - OpenSSH per-connection server daemon (10.0.0.1:56802). May 14 00:01:09.190060 sshd[3545]: Accepted publickey for core from 10.0.0.1 port 56802 ssh2: RSA SHA256:2Vys6akM3bwlRlykLnopippME/f1tLQVgpTw56u59EA May 14 00:01:09.191916 sshd-session[3545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:01:09.197691 systemd-logind[1492]: New session 10 of user core. May 14 00:01:09.204324 systemd[1]: Started session-10.scope - Session 10 of User core. May 14 00:01:09.282498 kubelet[2707]: E0514 00:01:09.282362 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:01:09.589282 sshd[3547]: Connection closed by 10.0.0.1 port 56802 May 14 00:01:09.589726 sshd-session[3545]: pam_unix(sshd:session): session closed for user core May 14 00:01:09.593451 systemd[1]: sshd@9-10.0.0.99:22-10.0.0.1:56802.service: Deactivated successfully. May 14 00:01:09.595528 systemd[1]: session-10.scope: Deactivated successfully. 
May 14 00:01:09.596323 systemd-logind[1492]: Session 10 logged out. Waiting for processes to exit. May 14 00:01:09.597298 systemd-logind[1492]: Removed session 10. May 14 00:01:10.148060 kubelet[2707]: E0514 00:01:10.147989 2707 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s8nw4" podUID="fff7752d-bd43-4c8e-a187-4d071bd5cd0b" May 14 00:01:10.189974 systemd[1]: cri-containerd-c69c478077df1589e1033a2358e3f95fe5d58bb568db41f224acacb161d583ee.scope: Deactivated successfully. May 14 00:01:10.190447 systemd[1]: cri-containerd-c69c478077df1589e1033a2358e3f95fe5d58bb568db41f224acacb161d583ee.scope: Consumed 601ms CPU time, 161.9M memory peak, 20K read from disk, 154M written to disk. May 14 00:01:10.213835 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c69c478077df1589e1033a2358e3f95fe5d58bb568db41f224acacb161d583ee-rootfs.mount: Deactivated successfully. 
May 14 00:01:10.222710 containerd[1514]: time="2025-05-14T00:01:10.220268054Z" level=info msg="shim disconnected" id=c69c478077df1589e1033a2358e3f95fe5d58bb568db41f224acacb161d583ee namespace=k8s.io May 14 00:01:10.222710 containerd[1514]: time="2025-05-14T00:01:10.220338097Z" level=warning msg="cleaning up after shim disconnected" id=c69c478077df1589e1033a2358e3f95fe5d58bb568db41f224acacb161d583ee namespace=k8s.io May 14 00:01:10.222710 containerd[1514]: time="2025-05-14T00:01:10.220368098Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 00:01:10.286710 kubelet[2707]: E0514 00:01:10.286662 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:01:10.287441 kubelet[2707]: I0514 00:01:10.287419 2707 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 14 00:01:10.288044 containerd[1514]: time="2025-05-14T00:01:10.288004029Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 14 00:01:10.315712 kubelet[2707]: I0514 00:01:10.313914 2707 topology_manager.go:215] "Topology Admit Handler" podUID="55c8484c-9be6-4d0c-a1af-577996ceaea2" podNamespace="kube-system" podName="coredns-7db6d8ff4d-s9vl9" May 14 00:01:10.315929 kubelet[2707]: I0514 00:01:10.315891 2707 topology_manager.go:215] "Topology Admit Handler" podUID="35de77c4-421f-48ed-ab25-a4fe1879067a" podNamespace="kube-system" podName="coredns-7db6d8ff4d-l7hft" May 14 00:01:10.316880 kubelet[2707]: I0514 00:01:10.316838 2707 topology_manager.go:215] "Topology Admit Handler" podUID="2e76bb18-a688-4a74-8c25-969dbaf341ad" podNamespace="calico-apiserver" podName="calico-apiserver-897df95dc-k8qvz" May 14 00:01:10.321621 kubelet[2707]: I0514 00:01:10.321577 2707 topology_manager.go:215] "Topology Admit Handler" podUID="1e7c405b-f2c2-4bf1-a1f9-17ef323d3d37" podNamespace="calico-apiserver" podName="calico-apiserver-897df95dc-b4gq4" 
May 14 00:01:10.322392 kubelet[2707]: I0514 00:01:10.322324 2707 topology_manager.go:215] "Topology Admit Handler" podUID="2df24035-39a4-4f27-b78b-89f954595966" podNamespace="calico-system" podName="calico-kube-controllers-54f4f89fbf-94z7j" May 14 00:01:10.331658 systemd[1]: Created slice kubepods-burstable-pod55c8484c_9be6_4d0c_a1af_577996ceaea2.slice - libcontainer container kubepods-burstable-pod55c8484c_9be6_4d0c_a1af_577996ceaea2.slice. May 14 00:01:10.339433 systemd[1]: Created slice kubepods-burstable-pod35de77c4_421f_48ed_ab25_a4fe1879067a.slice - libcontainer container kubepods-burstable-pod35de77c4_421f_48ed_ab25_a4fe1879067a.slice. May 14 00:01:10.347229 systemd[1]: Created slice kubepods-besteffort-pod2e76bb18_a688_4a74_8c25_969dbaf341ad.slice - libcontainer container kubepods-besteffort-pod2e76bb18_a688_4a74_8c25_969dbaf341ad.slice. May 14 00:01:10.353871 systemd[1]: Created slice kubepods-besteffort-pod1e7c405b_f2c2_4bf1_a1f9_17ef323d3d37.slice - libcontainer container kubepods-besteffort-pod1e7c405b_f2c2_4bf1_a1f9_17ef323d3d37.slice. 
May 14 00:01:10.355892 kubelet[2707]: I0514 00:01:10.355453 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh747\" (UniqueName: \"kubernetes.io/projected/1e7c405b-f2c2-4bf1-a1f9-17ef323d3d37-kube-api-access-hh747\") pod \"calico-apiserver-897df95dc-b4gq4\" (UID: \"1e7c405b-f2c2-4bf1-a1f9-17ef323d3d37\") " pod="calico-apiserver/calico-apiserver-897df95dc-b4gq4" May 14 00:01:10.355892 kubelet[2707]: I0514 00:01:10.355493 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/55c8484c-9be6-4d0c-a1af-577996ceaea2-config-volume\") pod \"coredns-7db6d8ff4d-s9vl9\" (UID: \"55c8484c-9be6-4d0c-a1af-577996ceaea2\") " pod="kube-system/coredns-7db6d8ff4d-s9vl9" May 14 00:01:10.355892 kubelet[2707]: I0514 00:01:10.355519 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/35de77c4-421f-48ed-ab25-a4fe1879067a-config-volume\") pod \"coredns-7db6d8ff4d-l7hft\" (UID: \"35de77c4-421f-48ed-ab25-a4fe1879067a\") " pod="kube-system/coredns-7db6d8ff4d-l7hft" May 14 00:01:10.355892 kubelet[2707]: I0514 00:01:10.355544 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1e7c405b-f2c2-4bf1-a1f9-17ef323d3d37-calico-apiserver-certs\") pod \"calico-apiserver-897df95dc-b4gq4\" (UID: \"1e7c405b-f2c2-4bf1-a1f9-17ef323d3d37\") " pod="calico-apiserver/calico-apiserver-897df95dc-b4gq4" May 14 00:01:10.355892 kubelet[2707]: I0514 00:01:10.355565 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96z5g\" (UniqueName: \"kubernetes.io/projected/55c8484c-9be6-4d0c-a1af-577996ceaea2-kube-api-access-96z5g\") pod \"coredns-7db6d8ff4d-s9vl9\" (UID: 
\"55c8484c-9be6-4d0c-a1af-577996ceaea2\") " pod="kube-system/coredns-7db6d8ff4d-s9vl9" May 14 00:01:10.356182 kubelet[2707]: I0514 00:01:10.355585 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sk9vw\" (UniqueName: \"kubernetes.io/projected/2e76bb18-a688-4a74-8c25-969dbaf341ad-kube-api-access-sk9vw\") pod \"calico-apiserver-897df95dc-k8qvz\" (UID: \"2e76bb18-a688-4a74-8c25-969dbaf341ad\") " pod="calico-apiserver/calico-apiserver-897df95dc-k8qvz" May 14 00:01:10.356182 kubelet[2707]: I0514 00:01:10.355605 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2df24035-39a4-4f27-b78b-89f954595966-tigera-ca-bundle\") pod \"calico-kube-controllers-54f4f89fbf-94z7j\" (UID: \"2df24035-39a4-4f27-b78b-89f954595966\") " pod="calico-system/calico-kube-controllers-54f4f89fbf-94z7j" May 14 00:01:10.356182 kubelet[2707]: I0514 00:01:10.355629 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grtt7\" (UniqueName: \"kubernetes.io/projected/2df24035-39a4-4f27-b78b-89f954595966-kube-api-access-grtt7\") pod \"calico-kube-controllers-54f4f89fbf-94z7j\" (UID: \"2df24035-39a4-4f27-b78b-89f954595966\") " pod="calico-system/calico-kube-controllers-54f4f89fbf-94z7j" May 14 00:01:10.356182 kubelet[2707]: I0514 00:01:10.355655 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2e76bb18-a688-4a74-8c25-969dbaf341ad-calico-apiserver-certs\") pod \"calico-apiserver-897df95dc-k8qvz\" (UID: \"2e76bb18-a688-4a74-8c25-969dbaf341ad\") " pod="calico-apiserver/calico-apiserver-897df95dc-k8qvz" May 14 00:01:10.356182 kubelet[2707]: I0514 00:01:10.355725 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-qbl4g\" (UniqueName: \"kubernetes.io/projected/35de77c4-421f-48ed-ab25-a4fe1879067a-kube-api-access-qbl4g\") pod \"coredns-7db6d8ff4d-l7hft\" (UID: \"35de77c4-421f-48ed-ab25-a4fe1879067a\") " pod="kube-system/coredns-7db6d8ff4d-l7hft" May 14 00:01:10.361163 systemd[1]: Created slice kubepods-besteffort-pod2df24035_39a4_4f27_b78b_89f954595966.slice - libcontainer container kubepods-besteffort-pod2df24035_39a4_4f27_b78b_89f954595966.slice. May 14 00:01:10.636503 kubelet[2707]: E0514 00:01:10.636444 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:01:10.637051 containerd[1514]: time="2025-05-14T00:01:10.637019715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-s9vl9,Uid:55c8484c-9be6-4d0c-a1af-577996ceaea2,Namespace:kube-system,Attempt:0,}" May 14 00:01:10.642381 kubelet[2707]: E0514 00:01:10.642350 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:01:10.642717 containerd[1514]: time="2025-05-14T00:01:10.642671059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-l7hft,Uid:35de77c4-421f-48ed-ab25-a4fe1879067a,Namespace:kube-system,Attempt:0,}" May 14 00:01:10.651132 containerd[1514]: time="2025-05-14T00:01:10.651066824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-897df95dc-k8qvz,Uid:2e76bb18-a688-4a74-8c25-969dbaf341ad,Namespace:calico-apiserver,Attempt:0,}" May 14 00:01:10.658274 containerd[1514]: time="2025-05-14T00:01:10.658217312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-897df95dc-b4gq4,Uid:1e7c405b-f2c2-4bf1-a1f9-17ef323d3d37,Namespace:calico-apiserver,Attempt:0,}" May 14 00:01:10.664910 containerd[1514]: time="2025-05-14T00:01:10.664857571Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54f4f89fbf-94z7j,Uid:2df24035-39a4-4f27-b78b-89f954595966,Namespace:calico-system,Attempt:0,}" May 14 00:01:10.732331 containerd[1514]: time="2025-05-14T00:01:10.732271499Z" level=error msg="Failed to destroy network for sandbox \"5c690af5ccbc924073b6ba5b55538367b893aa0ed7d5016fd2ea824053f8d1f4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:10.732739 containerd[1514]: time="2025-05-14T00:01:10.732712468Z" level=error msg="encountered an error cleaning up failed sandbox \"5c690af5ccbc924073b6ba5b55538367b893aa0ed7d5016fd2ea824053f8d1f4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:10.732808 containerd[1514]: time="2025-05-14T00:01:10.732784774Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-s9vl9,Uid:55c8484c-9be6-4d0c-a1af-577996ceaea2,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5c690af5ccbc924073b6ba5b55538367b893aa0ed7d5016fd2ea824053f8d1f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:10.733133 kubelet[2707]: E0514 00:01:10.733047 2707 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c690af5ccbc924073b6ba5b55538367b893aa0ed7d5016fd2ea824053f8d1f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" May 14 00:01:10.733133 kubelet[2707]: E0514 00:01:10.733123 2707 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c690af5ccbc924073b6ba5b55538367b893aa0ed7d5016fd2ea824053f8d1f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-s9vl9" May 14 00:01:10.733133 kubelet[2707]: E0514 00:01:10.733152 2707 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c690af5ccbc924073b6ba5b55538367b893aa0ed7d5016fd2ea824053f8d1f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-s9vl9" May 14 00:01:10.733358 kubelet[2707]: E0514 00:01:10.733210 2707 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-s9vl9_kube-system(55c8484c-9be6-4d0c-a1af-577996ceaea2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-s9vl9_kube-system(55c8484c-9be6-4d0c-a1af-577996ceaea2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5c690af5ccbc924073b6ba5b55538367b893aa0ed7d5016fd2ea824053f8d1f4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-s9vl9" podUID="55c8484c-9be6-4d0c-a1af-577996ceaea2" May 14 00:01:10.989222 containerd[1514]: time="2025-05-14T00:01:10.989067790Z" level=error msg="Failed to destroy network for sandbox 
\"3aa8ff18c3cdd64b9e0733f41c1104c6cff817e7c3439ca1faa6927859f06a4e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:10.989616 containerd[1514]: time="2025-05-14T00:01:10.989588091Z" level=error msg="encountered an error cleaning up failed sandbox \"3aa8ff18c3cdd64b9e0733f41c1104c6cff817e7c3439ca1faa6927859f06a4e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:10.989820 containerd[1514]: time="2025-05-14T00:01:10.989661419Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-l7hft,Uid:35de77c4-421f-48ed-ab25-a4fe1879067a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3aa8ff18c3cdd64b9e0733f41c1104c6cff817e7c3439ca1faa6927859f06a4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:10.990137 kubelet[2707]: E0514 00:01:10.990080 2707 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3aa8ff18c3cdd64b9e0733f41c1104c6cff817e7c3439ca1faa6927859f06a4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:10.990193 kubelet[2707]: E0514 00:01:10.990165 2707 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3aa8ff18c3cdd64b9e0733f41c1104c6cff817e7c3439ca1faa6927859f06a4e\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-l7hft" May 14 00:01:10.990241 kubelet[2707]: E0514 00:01:10.990192 2707 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3aa8ff18c3cdd64b9e0733f41c1104c6cff817e7c3439ca1faa6927859f06a4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-l7hft" May 14 00:01:10.990294 kubelet[2707]: E0514 00:01:10.990259 2707 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-l7hft_kube-system(35de77c4-421f-48ed-ab25-a4fe1879067a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-l7hft_kube-system(35de77c4-421f-48ed-ab25-a4fe1879067a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3aa8ff18c3cdd64b9e0733f41c1104c6cff817e7c3439ca1faa6927859f06a4e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-l7hft" podUID="35de77c4-421f-48ed-ab25-a4fe1879067a" May 14 00:01:11.289963 kubelet[2707]: I0514 00:01:11.289825 2707 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3aa8ff18c3cdd64b9e0733f41c1104c6cff817e7c3439ca1faa6927859f06a4e" May 14 00:01:11.290815 kubelet[2707]: I0514 00:01:11.290778 2707 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c690af5ccbc924073b6ba5b55538367b893aa0ed7d5016fd2ea824053f8d1f4" May 14 00:01:11.291741 containerd[1514]: time="2025-05-14T00:01:11.290594609Z" level=info 
msg="StopPodSandbox for \"3aa8ff18c3cdd64b9e0733f41c1104c6cff817e7c3439ca1faa6927859f06a4e\"" May 14 00:01:11.292156 containerd[1514]: time="2025-05-14T00:01:11.291305665Z" level=info msg="StopPodSandbox for \"5c690af5ccbc924073b6ba5b55538367b893aa0ed7d5016fd2ea824053f8d1f4\"" May 14 00:01:11.292156 containerd[1514]: time="2025-05-14T00:01:11.292031853Z" level=info msg="Ensure that sandbox 3aa8ff18c3cdd64b9e0733f41c1104c6cff817e7c3439ca1faa6927859f06a4e in task-service has been cleanup successfully" May 14 00:01:11.292156 containerd[1514]: time="2025-05-14T00:01:11.292140754Z" level=info msg="Ensure that sandbox 5c690af5ccbc924073b6ba5b55538367b893aa0ed7d5016fd2ea824053f8d1f4 in task-service has been cleanup successfully" May 14 00:01:11.294593 systemd[1]: run-netns-cni\x2db84fe3da\x2d0733\x2d070f\x2d1239\x2d536509f2b126.mount: Deactivated successfully. May 14 00:01:11.294800 systemd[1]: run-netns-cni\x2d063dafc1\x2d2ddb\x2d8e14\x2d8df3\x2dca18c4807df3.mount: Deactivated successfully. May 14 00:01:11.294954 containerd[1514]: time="2025-05-14T00:01:11.294792458Z" level=info msg="TearDown network for sandbox \"3aa8ff18c3cdd64b9e0733f41c1104c6cff817e7c3439ca1faa6927859f06a4e\" successfully" May 14 00:01:11.294954 containerd[1514]: time="2025-05-14T00:01:11.294809092Z" level=info msg="StopPodSandbox for \"3aa8ff18c3cdd64b9e0733f41c1104c6cff817e7c3439ca1faa6927859f06a4e\" returns successfully" May 14 00:01:11.295193 kubelet[2707]: E0514 00:01:11.295136 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:01:11.295858 containerd[1514]: time="2025-05-14T00:01:11.295422540Z" level=info msg="TearDown network for sandbox \"5c690af5ccbc924073b6ba5b55538367b893aa0ed7d5016fd2ea824053f8d1f4\" successfully" May 14 00:01:11.295858 containerd[1514]: time="2025-05-14T00:01:11.295441188Z" level=info msg="StopPodSandbox for 
\"5c690af5ccbc924073b6ba5b55538367b893aa0ed7d5016fd2ea824053f8d1f4\" returns successfully" May 14 00:01:11.295858 containerd[1514]: time="2025-05-14T00:01:11.295556102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-l7hft,Uid:35de77c4-421f-48ed-ab25-a4fe1879067a,Namespace:kube-system,Attempt:1,}" May 14 00:01:11.295858 containerd[1514]: time="2025-05-14T00:01:11.295821442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-s9vl9,Uid:55c8484c-9be6-4d0c-a1af-577996ceaea2,Namespace:kube-system,Attempt:1,}" May 14 00:01:11.296027 kubelet[2707]: E0514 00:01:11.295633 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:01:11.625535 containerd[1514]: time="2025-05-14T00:01:11.625464650Z" level=error msg="Failed to destroy network for sandbox \"5153e471c8ee61334cdd625b198c80fe354b8f46b0ecd03d8a954ed630e292dd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:11.625991 containerd[1514]: time="2025-05-14T00:01:11.625953363Z" level=error msg="encountered an error cleaning up failed sandbox \"5153e471c8ee61334cdd625b198c80fe354b8f46b0ecd03d8a954ed630e292dd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:11.626031 containerd[1514]: time="2025-05-14T00:01:11.626017584Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-897df95dc-b4gq4,Uid:1e7c405b-f2c2-4bf1-a1f9-17ef323d3d37,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"5153e471c8ee61334cdd625b198c80fe354b8f46b0ecd03d8a954ed630e292dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:11.626327 kubelet[2707]: E0514 00:01:11.626268 2707 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5153e471c8ee61334cdd625b198c80fe354b8f46b0ecd03d8a954ed630e292dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:11.626420 kubelet[2707]: E0514 00:01:11.626353 2707 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5153e471c8ee61334cdd625b198c80fe354b8f46b0ecd03d8a954ed630e292dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-897df95dc-b4gq4" May 14 00:01:11.626420 kubelet[2707]: E0514 00:01:11.626395 2707 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5153e471c8ee61334cdd625b198c80fe354b8f46b0ecd03d8a954ed630e292dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-897df95dc-b4gq4" May 14 00:01:11.626501 kubelet[2707]: E0514 00:01:11.626447 2707 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-897df95dc-b4gq4_calico-apiserver(1e7c405b-f2c2-4bf1-a1f9-17ef323d3d37)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"calico-apiserver-897df95dc-b4gq4_calico-apiserver(1e7c405b-f2c2-4bf1-a1f9-17ef323d3d37)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5153e471c8ee61334cdd625b198c80fe354b8f46b0ecd03d8a954ed630e292dd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-897df95dc-b4gq4" podUID="1e7c405b-f2c2-4bf1-a1f9-17ef323d3d37" May 14 00:01:11.753800 containerd[1514]: time="2025-05-14T00:01:11.753740683Z" level=error msg="Failed to destroy network for sandbox \"9d75d1c4e109dd828996b475a7d4dc5d2cc445035adf6750cc05772548793cfd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:11.754217 containerd[1514]: time="2025-05-14T00:01:11.754184356Z" level=error msg="encountered an error cleaning up failed sandbox \"9d75d1c4e109dd828996b475a7d4dc5d2cc445035adf6750cc05772548793cfd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:11.754279 containerd[1514]: time="2025-05-14T00:01:11.754251352Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-897df95dc-k8qvz,Uid:2e76bb18-a688-4a74-8c25-969dbaf341ad,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9d75d1c4e109dd828996b475a7d4dc5d2cc445035adf6750cc05772548793cfd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:11.754585 kubelet[2707]: E0514 00:01:11.754528 
2707 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d75d1c4e109dd828996b475a7d4dc5d2cc445035adf6750cc05772548793cfd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:11.754647 kubelet[2707]: E0514 00:01:11.754611 2707 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d75d1c4e109dd828996b475a7d4dc5d2cc445035adf6750cc05772548793cfd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-897df95dc-k8qvz" May 14 00:01:11.754694 kubelet[2707]: E0514 00:01:11.754641 2707 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d75d1c4e109dd828996b475a7d4dc5d2cc445035adf6750cc05772548793cfd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-897df95dc-k8qvz" May 14 00:01:11.754740 kubelet[2707]: E0514 00:01:11.754713 2707 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-897df95dc-k8qvz_calico-apiserver(2e76bb18-a688-4a74-8c25-969dbaf341ad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-897df95dc-k8qvz_calico-apiserver(2e76bb18-a688-4a74-8c25-969dbaf341ad)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9d75d1c4e109dd828996b475a7d4dc5d2cc445035adf6750cc05772548793cfd\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-897df95dc-k8qvz" podUID="2e76bb18-a688-4a74-8c25-969dbaf341ad" May 14 00:01:11.918602 containerd[1514]: time="2025-05-14T00:01:11.918451086Z" level=error msg="Failed to destroy network for sandbox \"fded8c2ab41ef0ba621ee0a8f083bbb59adf971e6c7a0566ae097d148124b88d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:11.919225 containerd[1514]: time="2025-05-14T00:01:11.919154747Z" level=error msg="encountered an error cleaning up failed sandbox \"fded8c2ab41ef0ba621ee0a8f083bbb59adf971e6c7a0566ae097d148124b88d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:11.920061 containerd[1514]: time="2025-05-14T00:01:11.919248077Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54f4f89fbf-94z7j,Uid:2df24035-39a4-4f27-b78b-89f954595966,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fded8c2ab41ef0ba621ee0a8f083bbb59adf971e6c7a0566ae097d148124b88d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:11.920870 kubelet[2707]: E0514 00:01:11.920382 2707 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fded8c2ab41ef0ba621ee0a8f083bbb59adf971e6c7a0566ae097d148124b88d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:11.920870 kubelet[2707]: E0514 00:01:11.920450 2707 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fded8c2ab41ef0ba621ee0a8f083bbb59adf971e6c7a0566ae097d148124b88d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54f4f89fbf-94z7j" May 14 00:01:11.920870 kubelet[2707]: E0514 00:01:11.920483 2707 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fded8c2ab41ef0ba621ee0a8f083bbb59adf971e6c7a0566ae097d148124b88d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54f4f89fbf-94z7j" May 14 00:01:11.921004 kubelet[2707]: E0514 00:01:11.920539 2707 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-54f4f89fbf-94z7j_calico-system(2df24035-39a4-4f27-b78b-89f954595966)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-54f4f89fbf-94z7j_calico-system(2df24035-39a4-4f27-b78b-89f954595966)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fded8c2ab41ef0ba621ee0a8f083bbb59adf971e6c7a0566ae097d148124b88d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-54f4f89fbf-94z7j" podUID="2df24035-39a4-4f27-b78b-89f954595966" May 14 00:01:11.930622 
containerd[1514]: time="2025-05-14T00:01:11.930571332Z" level=error msg="Failed to destroy network for sandbox \"480ac1495213aa3d0a5a9da72d46e52d8a39d1b487d1435d2ee30cc88cc32f5b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:11.931055 containerd[1514]: time="2025-05-14T00:01:11.931015375Z" level=error msg="encountered an error cleaning up failed sandbox \"480ac1495213aa3d0a5a9da72d46e52d8a39d1b487d1435d2ee30cc88cc32f5b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:11.931118 containerd[1514]: time="2025-05-14T00:01:11.931086250Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-l7hft,Uid:35de77c4-421f-48ed-ab25-a4fe1879067a,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"480ac1495213aa3d0a5a9da72d46e52d8a39d1b487d1435d2ee30cc88cc32f5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:11.931438 kubelet[2707]: E0514 00:01:11.931363 2707 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"480ac1495213aa3d0a5a9da72d46e52d8a39d1b487d1435d2ee30cc88cc32f5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:11.931438 kubelet[2707]: E0514 00:01:11.931437 2707 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"480ac1495213aa3d0a5a9da72d46e52d8a39d1b487d1435d2ee30cc88cc32f5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-l7hft" May 14 00:01:11.931657 kubelet[2707]: E0514 00:01:11.931461 2707 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"480ac1495213aa3d0a5a9da72d46e52d8a39d1b487d1435d2ee30cc88cc32f5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-l7hft" May 14 00:01:11.931657 kubelet[2707]: E0514 00:01:11.931512 2707 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-l7hft_kube-system(35de77c4-421f-48ed-ab25-a4fe1879067a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-l7hft_kube-system(35de77c4-421f-48ed-ab25-a4fe1879067a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"480ac1495213aa3d0a5a9da72d46e52d8a39d1b487d1435d2ee30cc88cc32f5b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-l7hft" podUID="35de77c4-421f-48ed-ab25-a4fe1879067a" May 14 00:01:11.935820 containerd[1514]: time="2025-05-14T00:01:11.935749446Z" level=error msg="Failed to destroy network for sandbox \"6eef35ed3243a5fb634cedb659eea2e85da49d8b06e02941ec9a0b24d1b66d9e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 
00:01:11.936298 containerd[1514]: time="2025-05-14T00:01:11.936250014Z" level=error msg="encountered an error cleaning up failed sandbox \"6eef35ed3243a5fb634cedb659eea2e85da49d8b06e02941ec9a0b24d1b66d9e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:11.936372 containerd[1514]: time="2025-05-14T00:01:11.936325427Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-s9vl9,Uid:55c8484c-9be6-4d0c-a1af-577996ceaea2,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"6eef35ed3243a5fb634cedb659eea2e85da49d8b06e02941ec9a0b24d1b66d9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:11.936644 kubelet[2707]: E0514 00:01:11.936590 2707 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6eef35ed3243a5fb634cedb659eea2e85da49d8b06e02941ec9a0b24d1b66d9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:11.936743 kubelet[2707]: E0514 00:01:11.936665 2707 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6eef35ed3243a5fb634cedb659eea2e85da49d8b06e02941ec9a0b24d1b66d9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-s9vl9" May 14 00:01:11.936743 kubelet[2707]: E0514 00:01:11.936707 2707 
kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6eef35ed3243a5fb634cedb659eea2e85da49d8b06e02941ec9a0b24d1b66d9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-s9vl9" May 14 00:01:11.936818 kubelet[2707]: E0514 00:01:11.936757 2707 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-s9vl9_kube-system(55c8484c-9be6-4d0c-a1af-577996ceaea2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-s9vl9_kube-system(55c8484c-9be6-4d0c-a1af-577996ceaea2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6eef35ed3243a5fb634cedb659eea2e85da49d8b06e02941ec9a0b24d1b66d9e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-s9vl9" podUID="55c8484c-9be6-4d0c-a1af-577996ceaea2" May 14 00:01:12.154064 systemd[1]: Created slice kubepods-besteffort-podfff7752d_bd43_4c8e_a187_4d071bd5cd0b.slice - libcontainer container kubepods-besteffort-podfff7752d_bd43_4c8e_a187_4d071bd5cd0b.slice. May 14 00:01:12.157032 containerd[1514]: time="2025-05-14T00:01:12.156978869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s8nw4,Uid:fff7752d-bd43-4c8e-a187-4d071bd5cd0b,Namespace:calico-system,Attempt:0,}" May 14 00:01:12.221462 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9d75d1c4e109dd828996b475a7d4dc5d2cc445035adf6750cc05772548793cfd-shm.mount: Deactivated successfully. 
May 14 00:01:12.221611 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5153e471c8ee61334cdd625b198c80fe354b8f46b0ecd03d8a954ed630e292dd-shm.mount: Deactivated successfully. May 14 00:01:12.244116 containerd[1514]: time="2025-05-14T00:01:12.244040268Z" level=error msg="Failed to destroy network for sandbox \"0f935d78b3bce36fd01b8944613455235aa13fedad1474a1d590f477f4001369\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:12.247140 containerd[1514]: time="2025-05-14T00:01:12.247089162Z" level=error msg="encountered an error cleaning up failed sandbox \"0f935d78b3bce36fd01b8944613455235aa13fedad1474a1d590f477f4001369\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:12.247126 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0f935d78b3bce36fd01b8944613455235aa13fedad1474a1d590f477f4001369-shm.mount: Deactivated successfully. 
May 14 00:01:12.251037 containerd[1514]: time="2025-05-14T00:01:12.250729527Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s8nw4,Uid:fff7752d-bd43-4c8e-a187-4d071bd5cd0b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0f935d78b3bce36fd01b8944613455235aa13fedad1474a1d590f477f4001369\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 00:01:12.251164 kubelet[2707]: E0514 00:01:12.251094 2707 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f935d78b3bce36fd01b8944613455235aa13fedad1474a1d590f477f4001369\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 00:01:12.251220 kubelet[2707]: E0514 00:01:12.251163 2707 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f935d78b3bce36fd01b8944613455235aa13fedad1474a1d590f477f4001369\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s8nw4"
May 14 00:01:12.251259 kubelet[2707]: E0514 00:01:12.251214 2707 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f935d78b3bce36fd01b8944613455235aa13fedad1474a1d590f477f4001369\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s8nw4"
May 14 00:01:12.251314 kubelet[2707]: E0514 00:01:12.251279 2707 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-s8nw4_calico-system(fff7752d-bd43-4c8e-a187-4d071bd5cd0b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-s8nw4_calico-system(fff7752d-bd43-4c8e-a187-4d071bd5cd0b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0f935d78b3bce36fd01b8944613455235aa13fedad1474a1d590f477f4001369\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-s8nw4" podUID="fff7752d-bd43-4c8e-a187-4d071bd5cd0b"
May 14 00:01:12.299938 kubelet[2707]: I0514 00:01:12.297703 2707 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d75d1c4e109dd828996b475a7d4dc5d2cc445035adf6750cc05772548793cfd"
May 14 00:01:12.300619 containerd[1514]: time="2025-05-14T00:01:12.298719486Z" level=info msg="StopPodSandbox for \"9d75d1c4e109dd828996b475a7d4dc5d2cc445035adf6750cc05772548793cfd\""
May 14 00:01:12.300619 containerd[1514]: time="2025-05-14T00:01:12.298972299Z" level=info msg="Ensure that sandbox 9d75d1c4e109dd828996b475a7d4dc5d2cc445035adf6750cc05772548793cfd in task-service has been cleanup successfully"
May 14 00:01:12.300619 containerd[1514]: time="2025-05-14T00:01:12.299193970Z" level=info msg="TearDown network for sandbox \"9d75d1c4e109dd828996b475a7d4dc5d2cc445035adf6750cc05772548793cfd\" successfully"
May 14 00:01:12.300619 containerd[1514]: time="2025-05-14T00:01:12.299209591Z" level=info msg="StopPodSandbox for \"9d75d1c4e109dd828996b475a7d4dc5d2cc445035adf6750cc05772548793cfd\" returns successfully"
May 14 00:01:12.300619 containerd[1514]: time="2025-05-14T00:01:12.299614304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-897df95dc-k8qvz,Uid:2e76bb18-a688-4a74-8c25-969dbaf341ad,Namespace:calico-apiserver,Attempt:1,}"
May 14 00:01:12.301076 kubelet[2707]: I0514 00:01:12.300149 2707 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5153e471c8ee61334cdd625b198c80fe354b8f46b0ecd03d8a954ed630e292dd"
May 14 00:01:12.303178 systemd[1]: run-netns-cni\x2deb709a14\x2da399\x2da1d7\x2d40ec\x2d29196733bf86.mount: Deactivated successfully.
May 14 00:01:12.305440 containerd[1514]: time="2025-05-14T00:01:12.305384916Z" level=info msg="StopPodSandbox for \"5153e471c8ee61334cdd625b198c80fe354b8f46b0ecd03d8a954ed630e292dd\""
May 14 00:01:12.305655 containerd[1514]: time="2025-05-14T00:01:12.305631017Z" level=info msg="Ensure that sandbox 5153e471c8ee61334cdd625b198c80fe354b8f46b0ecd03d8a954ed630e292dd in task-service has been cleanup successfully"
May 14 00:01:12.306305 kubelet[2707]: I0514 00:01:12.306271 2707 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="480ac1495213aa3d0a5a9da72d46e52d8a39d1b487d1435d2ee30cc88cc32f5b"
May 14 00:01:12.306943 containerd[1514]: time="2025-05-14T00:01:12.306869502Z" level=info msg="TearDown network for sandbox \"5153e471c8ee61334cdd625b198c80fe354b8f46b0ecd03d8a954ed630e292dd\" successfully"
May 14 00:01:12.306943 containerd[1514]: time="2025-05-14T00:01:12.306897348Z" level=info msg="StopPodSandbox for \"5153e471c8ee61334cdd625b198c80fe354b8f46b0ecd03d8a954ed630e292dd\" returns successfully"
May 14 00:01:12.307374 containerd[1514]: time="2025-05-14T00:01:12.307344197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-897df95dc-b4gq4,Uid:1e7c405b-f2c2-4bf1-a1f9-17ef323d3d37,Namespace:calico-apiserver,Attempt:1,}"
May 14 00:01:12.308108 containerd[1514]: time="2025-05-14T00:01:12.307708096Z" level=info msg="StopPodSandbox for \"480ac1495213aa3d0a5a9da72d46e52d8a39d1b487d1435d2ee30cc88cc32f5b\""
May 14 00:01:12.308108 containerd[1514]: time="2025-05-14T00:01:12.307895045Z" level=info msg="Ensure that sandbox 480ac1495213aa3d0a5a9da72d46e52d8a39d1b487d1435d2ee30cc88cc32f5b in task-service has been cleanup successfully"
May 14 00:01:12.308179 containerd[1514]: time="2025-05-14T00:01:12.308139742Z" level=info msg="TearDown network for sandbox \"480ac1495213aa3d0a5a9da72d46e52d8a39d1b487d1435d2ee30cc88cc32f5b\" successfully"
May 14 00:01:12.308179 containerd[1514]: time="2025-05-14T00:01:12.308154963Z" level=info msg="StopPodSandbox for \"480ac1495213aa3d0a5a9da72d46e52d8a39d1b487d1435d2ee30cc88cc32f5b\" returns successfully"
May 14 00:01:12.308657 containerd[1514]: time="2025-05-14T00:01:12.308619588Z" level=info msg="StopPodSandbox for \"3aa8ff18c3cdd64b9e0733f41c1104c6cff817e7c3439ca1faa6927859f06a4e\""
May 14 00:01:12.308657 containerd[1514]: time="2025-05-14T00:01:12.308727767Z" level=info msg="TearDown network for sandbox \"3aa8ff18c3cdd64b9e0733f41c1104c6cff817e7c3439ca1faa6927859f06a4e\" successfully"
May 14 00:01:12.308657 containerd[1514]: time="2025-05-14T00:01:12.308742327Z" level=info msg="StopPodSandbox for \"3aa8ff18c3cdd64b9e0733f41c1104c6cff817e7c3439ca1faa6927859f06a4e\" returns successfully"
May 14 00:01:12.309098 kubelet[2707]: E0514 00:01:12.308953 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:01:12.309308 containerd[1514]: time="2025-05-14T00:01:12.309283266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-l7hft,Uid:35de77c4-421f-48ed-ab25-a4fe1879067a,Namespace:kube-system,Attempt:2,}"
May 14 00:01:12.309588 systemd[1]: run-netns-cni\x2d32ace5ca\x2dfcf7\x2d9472\x2d3672\x2db0eaca1893e5.mount: Deactivated successfully.
May 14 00:01:12.310192 kubelet[2707]: I0514 00:01:12.310121 2707 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6eef35ed3243a5fb634cedb659eea2e85da49d8b06e02941ec9a0b24d1b66d9e"
May 14 00:01:12.310957 containerd[1514]: time="2025-05-14T00:01:12.310912495Z" level=info msg="StopPodSandbox for \"6eef35ed3243a5fb634cedb659eea2e85da49d8b06e02941ec9a0b24d1b66d9e\""
May 14 00:01:12.314709 containerd[1514]: time="2025-05-14T00:01:12.311195120Z" level=info msg="Ensure that sandbox 6eef35ed3243a5fb634cedb659eea2e85da49d8b06e02941ec9a0b24d1b66d9e in task-service has been cleanup successfully"
May 14 00:01:12.314709 containerd[1514]: time="2025-05-14T00:01:12.311438845Z" level=info msg="TearDown network for sandbox \"6eef35ed3243a5fb634cedb659eea2e85da49d8b06e02941ec9a0b24d1b66d9e\" successfully"
May 14 00:01:12.314709 containerd[1514]: time="2025-05-14T00:01:12.311465068Z" level=info msg="StopPodSandbox for \"6eef35ed3243a5fb634cedb659eea2e85da49d8b06e02941ec9a0b24d1b66d9e\" returns successfully"
May 14 00:01:12.314709 containerd[1514]: time="2025-05-14T00:01:12.311975375Z" level=info msg="StopPodSandbox for \"5c690af5ccbc924073b6ba5b55538367b893aa0ed7d5016fd2ea824053f8d1f4\""
May 14 00:01:12.314709 containerd[1514]: time="2025-05-14T00:01:12.312072852Z" level=info msg="TearDown network for sandbox \"5c690af5ccbc924073b6ba5b55538367b893aa0ed7d5016fd2ea824053f8d1f4\" successfully"
May 14 00:01:12.314709 containerd[1514]: time="2025-05-14T00:01:12.312086079Z" level=info msg="StopPodSandbox for \"5c690af5ccbc924073b6ba5b55538367b893aa0ed7d5016fd2ea824053f8d1f4\" returns successfully"
May 14 00:01:12.314709 containerd[1514]: time="2025-05-14T00:01:12.313115841Z" level=info msg="StopPodSandbox for \"0f935d78b3bce36fd01b8944613455235aa13fedad1474a1d590f477f4001369\""
May 14 00:01:12.314709 containerd[1514]: time="2025-05-14T00:01:12.313331619Z" level=info msg="Ensure that sandbox 0f935d78b3bce36fd01b8944613455235aa13fedad1474a1d590f477f4001369 in task-service has been cleanup successfully"
May 14 00:01:12.314709 containerd[1514]: time="2025-05-14T00:01:12.313638513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-s9vl9,Uid:55c8484c-9be6-4d0c-a1af-577996ceaea2,Namespace:kube-system,Attempt:2,}"
May 14 00:01:12.315058 kubelet[2707]: I0514 00:01:12.311751 2707 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f935d78b3bce36fd01b8944613455235aa13fedad1474a1d590f477f4001369"
May 14 00:01:12.315058 kubelet[2707]: E0514 00:01:12.312632 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:01:12.315742 kubelet[2707]: I0514 00:01:12.315706 2707 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fded8c2ab41ef0ba621ee0a8f083bbb59adf971e6c7a0566ae097d148124b88d"
May 14 00:01:12.315814 systemd[1]: run-netns-cni\x2d24226e92\x2d85e8\x2d2f40\x2d44b6\x2dc079cbb0934a.mount: Deactivated successfully.
May 14 00:01:12.315953 systemd[1]: run-netns-cni\x2d1d3108a1\x2df79d\x2da636\x2de24b\x2d3e5906644c2b.mount: Deactivated successfully.
May 14 00:01:12.317175 containerd[1514]: time="2025-05-14T00:01:12.317092750Z" level=info msg="TearDown network for sandbox \"0f935d78b3bce36fd01b8944613455235aa13fedad1474a1d590f477f4001369\" successfully"
May 14 00:01:12.317175 containerd[1514]: time="2025-05-14T00:01:12.317127901Z" level=info msg="StopPodSandbox for \"0f935d78b3bce36fd01b8944613455235aa13fedad1474a1d590f477f4001369\" returns successfully"
May 14 00:01:12.317319 containerd[1514]: time="2025-05-14T00:01:12.317205319Z" level=info msg="StopPodSandbox for \"fded8c2ab41ef0ba621ee0a8f083bbb59adf971e6c7a0566ae097d148124b88d\""
May 14 00:01:12.317501 containerd[1514]: time="2025-05-14T00:01:12.317464785Z" level=info msg="Ensure that sandbox fded8c2ab41ef0ba621ee0a8f083bbb59adf971e6c7a0566ae097d148124b88d in task-service has been cleanup successfully"
May 14 00:01:12.317719 containerd[1514]: time="2025-05-14T00:01:12.317689031Z" level=info msg="TearDown network for sandbox \"fded8c2ab41ef0ba621ee0a8f083bbb59adf971e6c7a0566ae097d148124b88d\" successfully"
May 14 00:01:12.317719 containerd[1514]: time="2025-05-14T00:01:12.317711748Z" level=info msg="StopPodSandbox for \"fded8c2ab41ef0ba621ee0a8f083bbb59adf971e6c7a0566ae097d148124b88d\" returns successfully"
May 14 00:01:12.318418 containerd[1514]: time="2025-05-14T00:01:12.318376338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54f4f89fbf-94z7j,Uid:2df24035-39a4-4f27-b78b-89f954595966,Namespace:calico-system,Attempt:1,}"
May 14 00:01:12.318563 containerd[1514]: time="2025-05-14T00:01:12.318537585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s8nw4,Uid:fff7752d-bd43-4c8e-a187-4d071bd5cd0b,Namespace:calico-system,Attempt:1,}"
May 14 00:01:13.216667 systemd[1]: run-netns-cni\x2d8c3250fb\x2de727\x2d4428\x2d2d33\x2d7cbf92a50881.mount: Deactivated successfully.
May 14 00:01:13.216790 systemd[1]: run-netns-cni\x2d4f4ecbf2\x2df674\x2de145\x2de3d8\x2d732fdf15c7ce.mount: Deactivated successfully.
May 14 00:01:14.393907 systemd[1]: Started sshd@10-10.0.0.99:22-10.0.0.1:56810.service - OpenSSH per-connection server daemon (10.0.0.1:56810).
May 14 00:01:14.541865 sshd[3884]: Accepted publickey for core from 10.0.0.1 port 56810 ssh2: RSA SHA256:2Vys6akM3bwlRlykLnopippME/f1tLQVgpTw56u59EA
May 14 00:01:14.544092 sshd-session[3884]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:01:14.551920 systemd-logind[1492]: New session 11 of user core.
May 14 00:01:14.559439 systemd[1]: Started session-11.scope - Session 11 of User core.
May 14 00:01:14.614754 containerd[1514]: time="2025-05-14T00:01:14.613033396Z" level=error msg="Failed to destroy network for sandbox \"5ff8c9fd97c6518fdd08652e3ba460c6027a4eac04de742ce360c823f86d6834\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 00:01:14.614754 containerd[1514]: time="2025-05-14T00:01:14.614517143Z" level=error msg="encountered an error cleaning up failed sandbox \"5ff8c9fd97c6518fdd08652e3ba460c6027a4eac04de742ce360c823f86d6834\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 00:01:14.615240 containerd[1514]: time="2025-05-14T00:01:14.615109373Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-897df95dc-b4gq4,Uid:1e7c405b-f2c2-4bf1-a1f9-17ef323d3d37,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"5ff8c9fd97c6518fdd08652e3ba460c6027a4eac04de742ce360c823f86d6834\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 00:01:14.615543 kubelet[2707]: E0514 00:01:14.615502 2707 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ff8c9fd97c6518fdd08652e3ba460c6027a4eac04de742ce360c823f86d6834\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 00:01:14.615906 kubelet[2707]: E0514 00:01:14.615594 2707 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ff8c9fd97c6518fdd08652e3ba460c6027a4eac04de742ce360c823f86d6834\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-897df95dc-b4gq4"
May 14 00:01:14.615906 kubelet[2707]: E0514 00:01:14.615616 2707 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ff8c9fd97c6518fdd08652e3ba460c6027a4eac04de742ce360c823f86d6834\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-897df95dc-b4gq4"
May 14 00:01:14.615906 kubelet[2707]: E0514 00:01:14.615772 2707 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-897df95dc-b4gq4_calico-apiserver(1e7c405b-f2c2-4bf1-a1f9-17ef323d3d37)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-897df95dc-b4gq4_calico-apiserver(1e7c405b-f2c2-4bf1-a1f9-17ef323d3d37)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5ff8c9fd97c6518fdd08652e3ba460c6027a4eac04de742ce360c823f86d6834\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-897df95dc-b4gq4" podUID="1e7c405b-f2c2-4bf1-a1f9-17ef323d3d37"
May 14 00:01:14.622864 containerd[1514]: time="2025-05-14T00:01:14.622799996Z" level=error msg="Failed to destroy network for sandbox \"9b7b927985a68231fa7f967eb41bfb07ba34b0eebc0c0e8ac2fed750c334af33\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 00:01:14.623261 containerd[1514]: time="2025-05-14T00:01:14.623236009Z" level=error msg="encountered an error cleaning up failed sandbox \"9b7b927985a68231fa7f967eb41bfb07ba34b0eebc0c0e8ac2fed750c334af33\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 00:01:14.623332 containerd[1514]: time="2025-05-14T00:01:14.623305330Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-l7hft,Uid:35de77c4-421f-48ed-ab25-a4fe1879067a,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"9b7b927985a68231fa7f967eb41bfb07ba34b0eebc0c0e8ac2fed750c334af33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 00:01:14.623754 kubelet[2707]: E0514 00:01:14.623645 2707 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b7b927985a68231fa7f967eb41bfb07ba34b0eebc0c0e8ac2fed750c334af33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 00:01:14.626278 kubelet[2707]: E0514 00:01:14.623767 2707 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b7b927985a68231fa7f967eb41bfb07ba34b0eebc0c0e8ac2fed750c334af33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-l7hft"
May 14 00:01:14.626330 kubelet[2707]: E0514 00:01:14.626285 2707 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b7b927985a68231fa7f967eb41bfb07ba34b0eebc0c0e8ac2fed750c334af33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-l7hft"
May 14 00:01:14.626395 kubelet[2707]: E0514 00:01:14.626363 2707 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-l7hft_kube-system(35de77c4-421f-48ed-ab25-a4fe1879067a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-l7hft_kube-system(35de77c4-421f-48ed-ab25-a4fe1879067a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9b7b927985a68231fa7f967eb41bfb07ba34b0eebc0c0e8ac2fed750c334af33\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-l7hft" podUID="35de77c4-421f-48ed-ab25-a4fe1879067a"
May 14 00:01:14.643222 containerd[1514]: time="2025-05-14T00:01:14.643037041Z" level=error msg="Failed to destroy network for sandbox \"95b7549369ae5eb8d47ce2e4bb960e50049a0147b94d0bab48064d7dba2418a3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 00:01:14.645344 containerd[1514]: time="2025-05-14T00:01:14.645268402Z" level=error msg="encountered an error cleaning up failed sandbox \"95b7549369ae5eb8d47ce2e4bb960e50049a0147b94d0bab48064d7dba2418a3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 00:01:14.646323 containerd[1514]: time="2025-05-14T00:01:14.646292166Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-s9vl9,Uid:55c8484c-9be6-4d0c-a1af-577996ceaea2,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"95b7549369ae5eb8d47ce2e4bb960e50049a0147b94d0bab48064d7dba2418a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 00:01:14.646896 kubelet[2707]: E0514 00:01:14.646862 2707 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95b7549369ae5eb8d47ce2e4bb960e50049a0147b94d0bab48064d7dba2418a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 00:01:14.646953 kubelet[2707]: E0514 00:01:14.646919 2707 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95b7549369ae5eb8d47ce2e4bb960e50049a0147b94d0bab48064d7dba2418a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-s9vl9"
May 14 00:01:14.646953 kubelet[2707]: E0514 00:01:14.646941 2707 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95b7549369ae5eb8d47ce2e4bb960e50049a0147b94d0bab48064d7dba2418a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-s9vl9"
May 14 00:01:14.647042 kubelet[2707]: E0514 00:01:14.647002 2707 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-s9vl9_kube-system(55c8484c-9be6-4d0c-a1af-577996ceaea2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-s9vl9_kube-system(55c8484c-9be6-4d0c-a1af-577996ceaea2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"95b7549369ae5eb8d47ce2e4bb960e50049a0147b94d0bab48064d7dba2418a3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-s9vl9" podUID="55c8484c-9be6-4d0c-a1af-577996ceaea2"
May 14 00:01:14.661619 containerd[1514]: time="2025-05-14T00:01:14.660791775Z" level=error msg="Failed to destroy network for sandbox \"b4380fe2399a0ffdd3b39405d8f44d44bd1b9a4ee632f479c3958de4626fb705\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 00:01:14.662770 containerd[1514]: time="2025-05-14T00:01:14.662224248Z" level=error msg="encountered an error cleaning up failed sandbox \"b4380fe2399a0ffdd3b39405d8f44d44bd1b9a4ee632f479c3958de4626fb705\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 00:01:14.662770 containerd[1514]: time="2025-05-14T00:01:14.662295713Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-897df95dc-k8qvz,Uid:2e76bb18-a688-4a74-8c25-969dbaf341ad,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"b4380fe2399a0ffdd3b39405d8f44d44bd1b9a4ee632f479c3958de4626fb705\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 00:01:14.662926 kubelet[2707]: E0514 00:01:14.662544 2707 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4380fe2399a0ffdd3b39405d8f44d44bd1b9a4ee632f479c3958de4626fb705\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 00:01:14.662926 kubelet[2707]: E0514 00:01:14.662609 2707 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4380fe2399a0ffdd3b39405d8f44d44bd1b9a4ee632f479c3958de4626fb705\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-897df95dc-k8qvz"
May 14 00:01:14.662926 kubelet[2707]: E0514 00:01:14.662630 2707 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4380fe2399a0ffdd3b39405d8f44d44bd1b9a4ee632f479c3958de4626fb705\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-897df95dc-k8qvz"
May 14 00:01:14.663042 kubelet[2707]: E0514 00:01:14.662702 2707 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-897df95dc-k8qvz_calico-apiserver(2e76bb18-a688-4a74-8c25-969dbaf341ad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-897df95dc-k8qvz_calico-apiserver(2e76bb18-a688-4a74-8c25-969dbaf341ad)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b4380fe2399a0ffdd3b39405d8f44d44bd1b9a4ee632f479c3958de4626fb705\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-897df95dc-k8qvz" podUID="2e76bb18-a688-4a74-8c25-969dbaf341ad"
May 14 00:01:14.676528 containerd[1514]: time="2025-05-14T00:01:14.676370231Z" level=error msg="Failed to destroy network for sandbox \"3c87715a71350bd4a01f089067c08707893b6abc6c90a878d1ced434f2e7f0e1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 00:01:14.677124 containerd[1514]: time="2025-05-14T00:01:14.677097414Z" level=error msg="encountered an error cleaning up failed sandbox \"3c87715a71350bd4a01f089067c08707893b6abc6c90a878d1ced434f2e7f0e1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 00:01:14.677339 containerd[1514]: time="2025-05-14T00:01:14.677230103Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54f4f89fbf-94z7j,Uid:2df24035-39a4-4f27-b78b-89f954595966,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"3c87715a71350bd4a01f089067c08707893b6abc6c90a878d1ced434f2e7f0e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 00:01:14.677502 kubelet[2707]: E0514 00:01:14.677460 2707 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c87715a71350bd4a01f089067c08707893b6abc6c90a878d1ced434f2e7f0e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 00:01:14.677565 kubelet[2707]: E0514 00:01:14.677522 2707 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c87715a71350bd4a01f089067c08707893b6abc6c90a878d1ced434f2e7f0e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54f4f89fbf-94z7j"
May 14 00:01:14.677565 kubelet[2707]: E0514 00:01:14.677543 2707 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c87715a71350bd4a01f089067c08707893b6abc6c90a878d1ced434f2e7f0e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54f4f89fbf-94z7j"
May 14 00:01:14.677657 kubelet[2707]: E0514 00:01:14.677583 2707 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-54f4f89fbf-94z7j_calico-system(2df24035-39a4-4f27-b78b-89f954595966)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-54f4f89fbf-94z7j_calico-system(2df24035-39a4-4f27-b78b-89f954595966)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3c87715a71350bd4a01f089067c08707893b6abc6c90a878d1ced434f2e7f0e1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-54f4f89fbf-94z7j" podUID="2df24035-39a4-4f27-b78b-89f954595966"
May 14 00:01:14.738778 containerd[1514]: time="2025-05-14T00:01:14.738701297Z" level=error msg="Failed to destroy network for sandbox \"a59363c1d3c7d2a1291c2094e94fec02764ae85f2b6b9ee946b3f91ebda7b84d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 00:01:14.739427 containerd[1514]: time="2025-05-14T00:01:14.739388460Z" level=error msg="encountered an error cleaning up failed sandbox \"a59363c1d3c7d2a1291c2094e94fec02764ae85f2b6b9ee946b3f91ebda7b84d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 00:01:14.739568 containerd[1514]: time="2025-05-14T00:01:14.739506098Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s8nw4,Uid:fff7752d-bd43-4c8e-a187-4d071bd5cd0b,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"a59363c1d3c7d2a1291c2094e94fec02764ae85f2b6b9ee946b3f91ebda7b84d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 00:01:14.739840 kubelet[2707]: E0514 00:01:14.739789 2707 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a59363c1d3c7d2a1291c2094e94fec02764ae85f2b6b9ee946b3f91ebda7b84d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 00:01:14.739915 kubelet[2707]: E0514 00:01:14.739865 2707 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a59363c1d3c7d2a1291c2094e94fec02764ae85f2b6b9ee946b3f91ebda7b84d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s8nw4"
May 14 00:01:14.739915 kubelet[2707]: E0514 00:01:14.739896 2707 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a59363c1d3c7d2a1291c2094e94fec02764ae85f2b6b9ee946b3f91ebda7b84d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s8nw4"
May 14 00:01:14.739994 kubelet[2707]: E0514 00:01:14.739945 2707 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-s8nw4_calico-system(fff7752d-bd43-4c8e-a187-4d071bd5cd0b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-s8nw4_calico-system(fff7752d-bd43-4c8e-a187-4d071bd5cd0b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a59363c1d3c7d2a1291c2094e94fec02764ae85f2b6b9ee946b3f91ebda7b84d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-s8nw4" podUID="fff7752d-bd43-4c8e-a187-4d071bd5cd0b"
May 14 00:01:14.741286 sshd[4002]: Connection closed by 10.0.0.1 port 56810
May 14 00:01:14.741773 sshd-session[3884]: pam_unix(sshd:session): session closed for user core
May 14 00:01:14.746367 systemd[1]: sshd@10-10.0.0.99:22-10.0.0.1:56810.service: Deactivated successfully.
May 14 00:01:14.753140 systemd[1]: session-11.scope: Deactivated successfully.
May 14 00:01:14.754541 systemd-logind[1492]: Session 11 logged out. Waiting for processes to exit.
May 14 00:01:14.755551 systemd-logind[1492]: Removed session 11.
May 14 00:01:15.322158 kubelet[2707]: I0514 00:01:15.322126 2707 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ff8c9fd97c6518fdd08652e3ba460c6027a4eac04de742ce360c823f86d6834" May 14 00:01:15.323628 containerd[1514]: time="2025-05-14T00:01:15.323543051Z" level=info msg="StopPodSandbox for \"5ff8c9fd97c6518fdd08652e3ba460c6027a4eac04de742ce360c823f86d6834\"" May 14 00:01:15.323887 containerd[1514]: time="2025-05-14T00:01:15.323815893Z" level=info msg="Ensure that sandbox 5ff8c9fd97c6518fdd08652e3ba460c6027a4eac04de742ce360c823f86d6834 in task-service has been cleanup successfully" May 14 00:01:15.324116 containerd[1514]: time="2025-05-14T00:01:15.324041780Z" level=info msg="TearDown network for sandbox \"5ff8c9fd97c6518fdd08652e3ba460c6027a4eac04de742ce360c823f86d6834\" successfully" May 14 00:01:15.324116 containerd[1514]: time="2025-05-14T00:01:15.324063154Z" level=info msg="StopPodSandbox for \"5ff8c9fd97c6518fdd08652e3ba460c6027a4eac04de742ce360c823f86d6834\" returns successfully" May 14 00:01:15.324438 containerd[1514]: time="2025-05-14T00:01:15.324415317Z" level=info msg="StopPodSandbox for \"5153e471c8ee61334cdd625b198c80fe354b8f46b0ecd03d8a954ed630e292dd\"" May 14 00:01:15.324506 containerd[1514]: time="2025-05-14T00:01:15.324492442Z" level=info msg="TearDown network for sandbox \"5153e471c8ee61334cdd625b198c80fe354b8f46b0ecd03d8a954ed630e292dd\" successfully" May 14 00:01:15.324535 containerd[1514]: time="2025-05-14T00:01:15.324503956Z" level=info msg="StopPodSandbox for \"5153e471c8ee61334cdd625b198c80fe354b8f46b0ecd03d8a954ed630e292dd\" returns successfully" May 14 00:01:15.324863 kubelet[2707]: I0514 00:01:15.324775 2707 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b7b927985a68231fa7f967eb41bfb07ba34b0eebc0c0e8ac2fed750c334af33" May 14 00:01:15.324962 containerd[1514]: time="2025-05-14T00:01:15.324941401Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-897df95dc-b4gq4,Uid:1e7c405b-f2c2-4bf1-a1f9-17ef323d3d37,Namespace:calico-apiserver,Attempt:2,}" May 14 00:01:15.325193 containerd[1514]: time="2025-05-14T00:01:15.325174263Z" level=info msg="StopPodSandbox for \"9b7b927985a68231fa7f967eb41bfb07ba34b0eebc0c0e8ac2fed750c334af33\"" May 14 00:01:15.325329 containerd[1514]: time="2025-05-14T00:01:15.325313754Z" level=info msg="Ensure that sandbox 9b7b927985a68231fa7f967eb41bfb07ba34b0eebc0c0e8ac2fed750c334af33 in task-service has been cleanup successfully" May 14 00:01:15.325512 containerd[1514]: time="2025-05-14T00:01:15.325496955Z" level=info msg="TearDown network for sandbox \"9b7b927985a68231fa7f967eb41bfb07ba34b0eebc0c0e8ac2fed750c334af33\" successfully" May 14 00:01:15.325512 containerd[1514]: time="2025-05-14T00:01:15.325509791Z" level=info msg="StopPodSandbox for \"9b7b927985a68231fa7f967eb41bfb07ba34b0eebc0c0e8ac2fed750c334af33\" returns successfully" May 14 00:01:15.325898 containerd[1514]: time="2025-05-14T00:01:15.325880772Z" level=info msg="StopPodSandbox for \"480ac1495213aa3d0a5a9da72d46e52d8a39d1b487d1435d2ee30cc88cc32f5b\"" May 14 00:01:15.325968 containerd[1514]: time="2025-05-14T00:01:15.325951034Z" level=info msg="TearDown network for sandbox \"480ac1495213aa3d0a5a9da72d46e52d8a39d1b487d1435d2ee30cc88cc32f5b\" successfully" May 14 00:01:15.325968 containerd[1514]: time="2025-05-14T00:01:15.325962578Z" level=info msg="StopPodSandbox for \"480ac1495213aa3d0a5a9da72d46e52d8a39d1b487d1435d2ee30cc88cc32f5b\" returns successfully" May 14 00:01:15.326152 containerd[1514]: time="2025-05-14T00:01:15.326135789Z" level=info msg="StopPodSandbox for \"3aa8ff18c3cdd64b9e0733f41c1104c6cff817e7c3439ca1faa6927859f06a4e\"" May 14 00:01:15.326216 containerd[1514]: time="2025-05-14T00:01:15.326203676Z" level=info msg="TearDown network for sandbox \"3aa8ff18c3cdd64b9e0733f41c1104c6cff817e7c3439ca1faa6927859f06a4e\" successfully" May 14 00:01:15.326238 containerd[1514]: 
time="2025-05-14T00:01:15.326214247Z" level=info msg="StopPodSandbox for \"3aa8ff18c3cdd64b9e0733f41c1104c6cff817e7c3439ca1faa6927859f06a4e\" returns successfully" May 14 00:01:15.326411 kubelet[2707]: E0514 00:01:15.326391 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:01:15.326580 containerd[1514]: time="2025-05-14T00:01:15.326550047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-l7hft,Uid:35de77c4-421f-48ed-ab25-a4fe1879067a,Namespace:kube-system,Attempt:3,}" May 14 00:01:15.327172 kubelet[2707]: I0514 00:01:15.327130 2707 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95b7549369ae5eb8d47ce2e4bb960e50049a0147b94d0bab48064d7dba2418a3" May 14 00:01:15.327471 containerd[1514]: time="2025-05-14T00:01:15.327450169Z" level=info msg="StopPodSandbox for \"95b7549369ae5eb8d47ce2e4bb960e50049a0147b94d0bab48064d7dba2418a3\"" May 14 00:01:15.327593 containerd[1514]: time="2025-05-14T00:01:15.327575842Z" level=info msg="Ensure that sandbox 95b7549369ae5eb8d47ce2e4bb960e50049a0147b94d0bab48064d7dba2418a3 in task-service has been cleanup successfully" May 14 00:01:15.327919 containerd[1514]: time="2025-05-14T00:01:15.327727329Z" level=info msg="TearDown network for sandbox \"95b7549369ae5eb8d47ce2e4bb960e50049a0147b94d0bab48064d7dba2418a3\" successfully" May 14 00:01:15.327919 containerd[1514]: time="2025-05-14T00:01:15.327740386Z" level=info msg="StopPodSandbox for \"95b7549369ae5eb8d47ce2e4bb960e50049a0147b94d0bab48064d7dba2418a3\" returns successfully" May 14 00:01:15.328330 containerd[1514]: time="2025-05-14T00:01:15.328304968Z" level=info msg="StopPodSandbox for \"6eef35ed3243a5fb634cedb659eea2e85da49d8b06e02941ec9a0b24d1b66d9e\"" May 14 00:01:15.328389 containerd[1514]: time="2025-05-14T00:01:15.328375210Z" level=info msg="TearDown network for sandbox 
\"6eef35ed3243a5fb634cedb659eea2e85da49d8b06e02941ec9a0b24d1b66d9e\" successfully" May 14 00:01:15.328411 containerd[1514]: time="2025-05-14T00:01:15.328387004Z" level=info msg="StopPodSandbox for \"6eef35ed3243a5fb634cedb659eea2e85da49d8b06e02941ec9a0b24d1b66d9e\" returns successfully" May 14 00:01:15.328648 kubelet[2707]: I0514 00:01:15.328612 2707 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a59363c1d3c7d2a1291c2094e94fec02764ae85f2b6b9ee946b3f91ebda7b84d" May 14 00:01:15.329060 containerd[1514]: time="2025-05-14T00:01:15.329041459Z" level=info msg="StopPodSandbox for \"a59363c1d3c7d2a1291c2094e94fec02764ae85f2b6b9ee946b3f91ebda7b84d\"" May 14 00:01:15.329220 containerd[1514]: time="2025-05-14T00:01:15.329203116Z" level=info msg="Ensure that sandbox a59363c1d3c7d2a1291c2094e94fec02764ae85f2b6b9ee946b3f91ebda7b84d in task-service has been cleanup successfully" May 14 00:01:15.329349 containerd[1514]: time="2025-05-14T00:01:15.329332879Z" level=info msg="StopPodSandbox for \"5c690af5ccbc924073b6ba5b55538367b893aa0ed7d5016fd2ea824053f8d1f4\"" May 14 00:01:15.329411 containerd[1514]: time="2025-05-14T00:01:15.329395986Z" level=info msg="TearDown network for sandbox \"5c690af5ccbc924073b6ba5b55538367b893aa0ed7d5016fd2ea824053f8d1f4\" successfully" May 14 00:01:15.329411 containerd[1514]: time="2025-05-14T00:01:15.329407289Z" level=info msg="StopPodSandbox for \"5c690af5ccbc924073b6ba5b55538367b893aa0ed7d5016fd2ea824053f8d1f4\" returns successfully" May 14 00:01:15.329695 kubelet[2707]: E0514 00:01:15.329556 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:01:15.329772 containerd[1514]: time="2025-05-14T00:01:15.329581602Z" level=info msg="TearDown network for sandbox \"a59363c1d3c7d2a1291c2094e94fec02764ae85f2b6b9ee946b3f91ebda7b84d\" successfully" May 14 00:01:15.329772 containerd[1514]: 
time="2025-05-14T00:01:15.329592133Z" level=info msg="StopPodSandbox for \"a59363c1d3c7d2a1291c2094e94fec02764ae85f2b6b9ee946b3f91ebda7b84d\" returns successfully" May 14 00:01:15.329772 containerd[1514]: time="2025-05-14T00:01:15.329747558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-s9vl9,Uid:55c8484c-9be6-4d0c-a1af-577996ceaea2,Namespace:kube-system,Attempt:3,}" May 14 00:01:15.330465 kubelet[2707]: I0514 00:01:15.330427 2707 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c87715a71350bd4a01f089067c08707893b6abc6c90a878d1ced434f2e7f0e1" May 14 00:01:15.330790 containerd[1514]: time="2025-05-14T00:01:15.330758343Z" level=info msg="StopPodSandbox for \"3c87715a71350bd4a01f089067c08707893b6abc6c90a878d1ced434f2e7f0e1\"" May 14 00:01:15.330910 containerd[1514]: time="2025-05-14T00:01:15.330893266Z" level=info msg="Ensure that sandbox 3c87715a71350bd4a01f089067c08707893b6abc6c90a878d1ced434f2e7f0e1 in task-service has been cleanup successfully" May 14 00:01:15.331066 containerd[1514]: time="2025-05-14T00:01:15.331045315Z" level=info msg="TearDown network for sandbox \"3c87715a71350bd4a01f089067c08707893b6abc6c90a878d1ced434f2e7f0e1\" successfully" May 14 00:01:15.331066 containerd[1514]: time="2025-05-14T00:01:15.331059824Z" level=info msg="StopPodSandbox for \"3c87715a71350bd4a01f089067c08707893b6abc6c90a878d1ced434f2e7f0e1\" returns successfully" May 14 00:01:15.331241 containerd[1514]: time="2025-05-14T00:01:15.331214687Z" level=info msg="StopPodSandbox for \"fded8c2ab41ef0ba621ee0a8f083bbb59adf971e6c7a0566ae097d148124b88d\"" May 14 00:01:15.331290 containerd[1514]: time="2025-05-14T00:01:15.331280912Z" level=info msg="TearDown network for sandbox \"fded8c2ab41ef0ba621ee0a8f083bbb59adf971e6c7a0566ae097d148124b88d\" successfully" May 14 00:01:15.331327 containerd[1514]: time="2025-05-14T00:01:15.331288928Z" level=info msg="StopPodSandbox for 
\"fded8c2ab41ef0ba621ee0a8f083bbb59adf971e6c7a0566ae097d148124b88d\" returns successfully" May 14 00:01:15.331598 kubelet[2707]: I0514 00:01:15.331571 2707 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4380fe2399a0ffdd3b39405d8f44d44bd1b9a4ee632f479c3958de4626fb705" May 14 00:01:15.331908 containerd[1514]: time="2025-05-14T00:01:15.331881106Z" level=info msg="StopPodSandbox for \"b4380fe2399a0ffdd3b39405d8f44d44bd1b9a4ee632f479c3958de4626fb705\"" May 14 00:01:15.332022 containerd[1514]: time="2025-05-14T00:01:15.332003583Z" level=info msg="Ensure that sandbox b4380fe2399a0ffdd3b39405d8f44d44bd1b9a4ee632f479c3958de4626fb705 in task-service has been cleanup successfully" May 14 00:01:15.332145 containerd[1514]: time="2025-05-14T00:01:15.332123436Z" level=info msg="TearDown network for sandbox \"b4380fe2399a0ffdd3b39405d8f44d44bd1b9a4ee632f479c3958de4626fb705\" successfully" May 14 00:01:15.332145 containerd[1514]: time="2025-05-14T00:01:15.332139218Z" level=info msg="StopPodSandbox for \"b4380fe2399a0ffdd3b39405d8f44d44bd1b9a4ee632f479c3958de4626fb705\" returns successfully" May 14 00:01:15.332209 containerd[1514]: time="2025-05-14T00:01:15.332195262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54f4f89fbf-94z7j,Uid:2df24035-39a4-4f27-b78b-89f954595966,Namespace:calico-system,Attempt:2,}" May 14 00:01:15.332505 containerd[1514]: time="2025-05-14T00:01:15.332479718Z" level=info msg="StopPodSandbox for \"9d75d1c4e109dd828996b475a7d4dc5d2cc445035adf6750cc05772548793cfd\"" May 14 00:01:15.332566 containerd[1514]: time="2025-05-14T00:01:15.332549509Z" level=info msg="TearDown network for sandbox \"9d75d1c4e109dd828996b475a7d4dc5d2cc445035adf6750cc05772548793cfd\" successfully" May 14 00:01:15.332566 containerd[1514]: time="2025-05-14T00:01:15.332560702Z" level=info msg="StopPodSandbox for \"9d75d1c4e109dd828996b475a7d4dc5d2cc445035adf6750cc05772548793cfd\" returns successfully" May 14 
00:01:15.332849 containerd[1514]: time="2025-05-14T00:01:15.332824235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-897df95dc-k8qvz,Uid:2e76bb18-a688-4a74-8c25-969dbaf341ad,Namespace:calico-apiserver,Attempt:2,}" May 14 00:01:15.479495 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3c87715a71350bd4a01f089067c08707893b6abc6c90a878d1ced434f2e7f0e1-shm.mount: Deactivated successfully. May 14 00:01:15.479632 systemd[1]: run-netns-cni\x2dd026c204\x2dca59\x2dcdd7\x2da4c2\x2df730208b8d2b.mount: Deactivated successfully. May 14 00:01:15.479729 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-95b7549369ae5eb8d47ce2e4bb960e50049a0147b94d0bab48064d7dba2418a3-shm.mount: Deactivated successfully. May 14 00:01:15.479827 systemd[1]: run-netns-cni\x2d713db553\x2d3354\x2de0e9\x2dc0cd\x2d273deb97b8d4.mount: Deactivated successfully. May 14 00:01:15.479904 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9b7b927985a68231fa7f967eb41bfb07ba34b0eebc0c0e8ac2fed750c334af33-shm.mount: Deactivated successfully. May 14 00:01:15.479981 systemd[1]: run-netns-cni\x2da5f73300\x2d80f9\x2d5118\x2d4ae5\x2d1a0b284ab141.mount: Deactivated successfully. May 14 00:01:15.480052 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b4380fe2399a0ffdd3b39405d8f44d44bd1b9a4ee632f479c3958de4626fb705-shm.mount: Deactivated successfully. May 14 00:01:15.480135 systemd[1]: run-netns-cni\x2daa2676bd\x2d825a\x2d4fa6\x2d2dba\x2d5e296b61740c.mount: Deactivated successfully. May 14 00:01:15.480213 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5ff8c9fd97c6518fdd08652e3ba460c6027a4eac04de742ce360c823f86d6834-shm.mount: Deactivated successfully. 
May 14 00:01:15.738805 containerd[1514]: time="2025-05-14T00:01:15.738450953Z" level=info msg="StopPodSandbox for \"0f935d78b3bce36fd01b8944613455235aa13fedad1474a1d590f477f4001369\"" May 14 00:01:15.738805 containerd[1514]: time="2025-05-14T00:01:15.738605887Z" level=info msg="TearDown network for sandbox \"0f935d78b3bce36fd01b8944613455235aa13fedad1474a1d590f477f4001369\" successfully" May 14 00:01:15.738805 containerd[1514]: time="2025-05-14T00:01:15.738616398Z" level=info msg="StopPodSandbox for \"0f935d78b3bce36fd01b8944613455235aa13fedad1474a1d590f477f4001369\" returns successfully" May 14 00:01:15.739938 containerd[1514]: time="2025-05-14T00:01:15.739889315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s8nw4,Uid:fff7752d-bd43-4c8e-a187-4d071bd5cd0b,Namespace:calico-system,Attempt:2,}" May 14 00:01:15.933876 containerd[1514]: time="2025-05-14T00:01:15.933719949Z" level=error msg="Failed to destroy network for sandbox \"bc2a454568bf86469e931f59c54e7abd0fa1f150d50a56a12771400b2ec051dc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:15.934975 containerd[1514]: time="2025-05-14T00:01:15.934878143Z" level=error msg="encountered an error cleaning up failed sandbox \"bc2a454568bf86469e931f59c54e7abd0fa1f150d50a56a12771400b2ec051dc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:15.935223 containerd[1514]: time="2025-05-14T00:01:15.935097547Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s8nw4,Uid:fff7752d-bd43-4c8e-a187-4d071bd5cd0b,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox 
\"bc2a454568bf86469e931f59c54e7abd0fa1f150d50a56a12771400b2ec051dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:15.935785 kubelet[2707]: E0514 00:01:15.935738 2707 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc2a454568bf86469e931f59c54e7abd0fa1f150d50a56a12771400b2ec051dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:15.936290 kubelet[2707]: E0514 00:01:15.936050 2707 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc2a454568bf86469e931f59c54e7abd0fa1f150d50a56a12771400b2ec051dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s8nw4" May 14 00:01:15.936551 kubelet[2707]: E0514 00:01:15.936359 2707 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc2a454568bf86469e931f59c54e7abd0fa1f150d50a56a12771400b2ec051dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s8nw4" May 14 00:01:15.936551 kubelet[2707]: E0514 00:01:15.936456 2707 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-s8nw4_calico-system(fff7752d-bd43-4c8e-a187-4d071bd5cd0b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-s8nw4_calico-system(fff7752d-bd43-4c8e-a187-4d071bd5cd0b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bc2a454568bf86469e931f59c54e7abd0fa1f150d50a56a12771400b2ec051dc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-s8nw4" podUID="fff7752d-bd43-4c8e-a187-4d071bd5cd0b" May 14 00:01:15.944217 containerd[1514]: time="2025-05-14T00:01:15.942232227Z" level=error msg="Failed to destroy network for sandbox \"7ed749f47db0245d2e0a4f7af5389ddc9dda8bc8f2c01a832d32e4e23b2740d1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:15.944872 containerd[1514]: time="2025-05-14T00:01:15.944797959Z" level=error msg="encountered an error cleaning up failed sandbox \"7ed749f47db0245d2e0a4f7af5389ddc9dda8bc8f2c01a832d32e4e23b2740d1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:15.946753 containerd[1514]: time="2025-05-14T00:01:15.946714328Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-897df95dc-b4gq4,Uid:1e7c405b-f2c2-4bf1-a1f9-17ef323d3d37,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"7ed749f47db0245d2e0a4f7af5389ddc9dda8bc8f2c01a832d32e4e23b2740d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:15.947050 kubelet[2707]: E0514 00:01:15.946975 2707 remote_runtime.go:193] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ed749f47db0245d2e0a4f7af5389ddc9dda8bc8f2c01a832d32e4e23b2740d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:15.947176 kubelet[2707]: E0514 00:01:15.947051 2707 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ed749f47db0245d2e0a4f7af5389ddc9dda8bc8f2c01a832d32e4e23b2740d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-897df95dc-b4gq4" May 14 00:01:15.947176 kubelet[2707]: E0514 00:01:15.947077 2707 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ed749f47db0245d2e0a4f7af5389ddc9dda8bc8f2c01a832d32e4e23b2740d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-897df95dc-b4gq4" May 14 00:01:15.947176 kubelet[2707]: E0514 00:01:15.947127 2707 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-897df95dc-b4gq4_calico-apiserver(1e7c405b-f2c2-4bf1-a1f9-17ef323d3d37)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-897df95dc-b4gq4_calico-apiserver(1e7c405b-f2c2-4bf1-a1f9-17ef323d3d37)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7ed749f47db0245d2e0a4f7af5389ddc9dda8bc8f2c01a832d32e4e23b2740d1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-897df95dc-b4gq4" podUID="1e7c405b-f2c2-4bf1-a1f9-17ef323d3d37" May 14 00:01:15.958490 containerd[1514]: time="2025-05-14T00:01:15.958404968Z" level=error msg="Failed to destroy network for sandbox \"537e48c11bd0c7df0af2d565c33274a5d1005847da8bc6c2a9ca15755d31570d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:15.958937 containerd[1514]: time="2025-05-14T00:01:15.958899118Z" level=error msg="encountered an error cleaning up failed sandbox \"537e48c11bd0c7df0af2d565c33274a5d1005847da8bc6c2a9ca15755d31570d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:15.960116 containerd[1514]: time="2025-05-14T00:01:15.960048794Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-897df95dc-k8qvz,Uid:2e76bb18-a688-4a74-8c25-969dbaf341ad,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"537e48c11bd0c7df0af2d565c33274a5d1005847da8bc6c2a9ca15755d31570d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:15.960267 containerd[1514]: time="2025-05-14T00:01:15.959741152Z" level=error msg="Failed to destroy network for sandbox \"9c631a2707d847700b09bca982553cb1fefe477a398db7f976618c022fe024f2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 
00:01:15.960412 kubelet[2707]: E0514 00:01:15.960332 2707 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"537e48c11bd0c7df0af2d565c33274a5d1005847da8bc6c2a9ca15755d31570d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:15.960506 kubelet[2707]: E0514 00:01:15.960413 2707 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"537e48c11bd0c7df0af2d565c33274a5d1005847da8bc6c2a9ca15755d31570d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-897df95dc-k8qvz" May 14 00:01:15.960506 kubelet[2707]: E0514 00:01:15.960440 2707 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"537e48c11bd0c7df0af2d565c33274a5d1005847da8bc6c2a9ca15755d31570d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-897df95dc-k8qvz" May 14 00:01:15.960506 kubelet[2707]: E0514 00:01:15.960490 2707 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-897df95dc-k8qvz_calico-apiserver(2e76bb18-a688-4a74-8c25-969dbaf341ad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-897df95dc-k8qvz_calico-apiserver(2e76bb18-a688-4a74-8c25-969dbaf341ad)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"537e48c11bd0c7df0af2d565c33274a5d1005847da8bc6c2a9ca15755d31570d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-897df95dc-k8qvz" podUID="2e76bb18-a688-4a74-8c25-969dbaf341ad" May 14 00:01:15.963016 containerd[1514]: time="2025-05-14T00:01:15.962969997Z" level=error msg="encountered an error cleaning up failed sandbox \"9c631a2707d847700b09bca982553cb1fefe477a398db7f976618c022fe024f2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:15.963105 containerd[1514]: time="2025-05-14T00:01:15.963067884Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-l7hft,Uid:35de77c4-421f-48ed-ab25-a4fe1879067a,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"9c631a2707d847700b09bca982553cb1fefe477a398db7f976618c022fe024f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:15.963499 kubelet[2707]: E0514 00:01:15.963292 2707 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c631a2707d847700b09bca982553cb1fefe477a398db7f976618c022fe024f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:15.963499 kubelet[2707]: E0514 00:01:15.963345 2707 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"9c631a2707d847700b09bca982553cb1fefe477a398db7f976618c022fe024f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-l7hft" May 14 00:01:15.963499 kubelet[2707]: E0514 00:01:15.963373 2707 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c631a2707d847700b09bca982553cb1fefe477a398db7f976618c022fe024f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-l7hft" May 14 00:01:15.963662 kubelet[2707]: E0514 00:01:15.963415 2707 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-l7hft_kube-system(35de77c4-421f-48ed-ab25-a4fe1879067a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-l7hft_kube-system(35de77c4-421f-48ed-ab25-a4fe1879067a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9c631a2707d847700b09bca982553cb1fefe477a398db7f976618c022fe024f2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-l7hft" podUID="35de77c4-421f-48ed-ab25-a4fe1879067a" May 14 00:01:15.970937 containerd[1514]: time="2025-05-14T00:01:15.970878793Z" level=error msg="Failed to destroy network for sandbox \"350207304c817727e4a46a415bd97b5570f1190c746ee0b5a03aeaeb3ba5b6e1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:15.972020 
containerd[1514]: time="2025-05-14T00:01:15.971904399Z" level=error msg="encountered an error cleaning up failed sandbox \"350207304c817727e4a46a415bd97b5570f1190c746ee0b5a03aeaeb3ba5b6e1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:15.972020 containerd[1514]: time="2025-05-14T00:01:15.971985092Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-s9vl9,Uid:55c8484c-9be6-4d0c-a1af-577996ceaea2,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"350207304c817727e4a46a415bd97b5570f1190c746ee0b5a03aeaeb3ba5b6e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:15.972323 kubelet[2707]: E0514 00:01:15.972234 2707 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"350207304c817727e4a46a415bd97b5570f1190c746ee0b5a03aeaeb3ba5b6e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:15.972323 kubelet[2707]: E0514 00:01:15.972314 2707 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"350207304c817727e4a46a415bd97b5570f1190c746ee0b5a03aeaeb3ba5b6e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-s9vl9" May 14 00:01:15.972431 kubelet[2707]: E0514 00:01:15.972340 2707 
kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"350207304c817727e4a46a415bd97b5570f1190c746ee0b5a03aeaeb3ba5b6e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-s9vl9" May 14 00:01:15.972431 kubelet[2707]: E0514 00:01:15.972397 2707 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-s9vl9_kube-system(55c8484c-9be6-4d0c-a1af-577996ceaea2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-s9vl9_kube-system(55c8484c-9be6-4d0c-a1af-577996ceaea2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"350207304c817727e4a46a415bd97b5570f1190c746ee0b5a03aeaeb3ba5b6e1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-s9vl9" podUID="55c8484c-9be6-4d0c-a1af-577996ceaea2" May 14 00:01:15.976415 containerd[1514]: time="2025-05-14T00:01:15.976350006Z" level=error msg="Failed to destroy network for sandbox \"5f56112abc9fc3ca7322a0e768cadf4b3aef097ac529d407d43a9d9667691957\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:15.976837 containerd[1514]: time="2025-05-14T00:01:15.976805437Z" level=error msg="encountered an error cleaning up failed sandbox \"5f56112abc9fc3ca7322a0e768cadf4b3aef097ac529d407d43a9d9667691957\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:15.976929 containerd[1514]: time="2025-05-14T00:01:15.976902113Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54f4f89fbf-94z7j,Uid:2df24035-39a4-4f27-b78b-89f954595966,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"5f56112abc9fc3ca7322a0e768cadf4b3aef097ac529d407d43a9d9667691957\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:15.977366 kubelet[2707]: E0514 00:01:15.977137 2707 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f56112abc9fc3ca7322a0e768cadf4b3aef097ac529d407d43a9d9667691957\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:15.977366 kubelet[2707]: E0514 00:01:15.977220 2707 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f56112abc9fc3ca7322a0e768cadf4b3aef097ac529d407d43a9d9667691957\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54f4f89fbf-94z7j" May 14 00:01:15.977366 kubelet[2707]: E0514 00:01:15.977247 2707 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f56112abc9fc3ca7322a0e768cadf4b3aef097ac529d407d43a9d9667691957\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54f4f89fbf-94z7j" May 14 00:01:15.977602 kubelet[2707]: E0514 00:01:15.977308 2707 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-54f4f89fbf-94z7j_calico-system(2df24035-39a4-4f27-b78b-89f954595966)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-54f4f89fbf-94z7j_calico-system(2df24035-39a4-4f27-b78b-89f954595966)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5f56112abc9fc3ca7322a0e768cadf4b3aef097ac529d407d43a9d9667691957\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-54f4f89fbf-94z7j" podUID="2df24035-39a4-4f27-b78b-89f954595966" May 14 00:01:16.345039 kubelet[2707]: I0514 00:01:16.345000 2707 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc2a454568bf86469e931f59c54e7abd0fa1f150d50a56a12771400b2ec051dc" May 14 00:01:16.345555 containerd[1514]: time="2025-05-14T00:01:16.345528706Z" level=info msg="StopPodSandbox for \"bc2a454568bf86469e931f59c54e7abd0fa1f150d50a56a12771400b2ec051dc\"" May 14 00:01:16.345751 containerd[1514]: time="2025-05-14T00:01:16.345732558Z" level=info msg="Ensure that sandbox bc2a454568bf86469e931f59c54e7abd0fa1f150d50a56a12771400b2ec051dc in task-service has been cleanup successfully" May 14 00:01:16.346287 containerd[1514]: time="2025-05-14T00:01:16.346186966Z" level=info msg="TearDown network for sandbox \"bc2a454568bf86469e931f59c54e7abd0fa1f150d50a56a12771400b2ec051dc\" successfully" May 14 00:01:16.346287 containerd[1514]: time="2025-05-14T00:01:16.346203390Z" level=info msg="StopPodSandbox for \"bc2a454568bf86469e931f59c54e7abd0fa1f150d50a56a12771400b2ec051dc\" returns successfully" May 14 
00:01:16.346540 containerd[1514]: time="2025-05-14T00:01:16.346509799Z" level=info msg="StopPodSandbox for \"a59363c1d3c7d2a1291c2094e94fec02764ae85f2b6b9ee946b3f91ebda7b84d\"" May 14 00:01:16.346603 containerd[1514]: time="2025-05-14T00:01:16.346586243Z" level=info msg="TearDown network for sandbox \"a59363c1d3c7d2a1291c2094e94fec02764ae85f2b6b9ee946b3f91ebda7b84d\" successfully" May 14 00:01:16.346636 containerd[1514]: time="2025-05-14T00:01:16.346602026Z" level=info msg="StopPodSandbox for \"a59363c1d3c7d2a1291c2094e94fec02764ae85f2b6b9ee946b3f91ebda7b84d\" returns successfully" May 14 00:01:16.346890 containerd[1514]: time="2025-05-14T00:01:16.346840939Z" level=info msg="StopPodSandbox for \"0f935d78b3bce36fd01b8944613455235aa13fedad1474a1d590f477f4001369\"" May 14 00:01:16.346957 containerd[1514]: time="2025-05-14T00:01:16.346937334Z" level=info msg="TearDown network for sandbox \"0f935d78b3bce36fd01b8944613455235aa13fedad1474a1d590f477f4001369\" successfully" May 14 00:01:16.346957 containerd[1514]: time="2025-05-14T00:01:16.346949578Z" level=info msg="StopPodSandbox for \"0f935d78b3bce36fd01b8944613455235aa13fedad1474a1d590f477f4001369\" returns successfully" May 14 00:01:16.347534 containerd[1514]: time="2025-05-14T00:01:16.347505462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s8nw4,Uid:fff7752d-bd43-4c8e-a187-4d071bd5cd0b,Namespace:calico-system,Attempt:3,}" May 14 00:01:16.348135 kubelet[2707]: I0514 00:01:16.348107 2707 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f56112abc9fc3ca7322a0e768cadf4b3aef097ac529d407d43a9d9667691957" May 14 00:01:16.348611 containerd[1514]: time="2025-05-14T00:01:16.348460663Z" level=info msg="StopPodSandbox for \"5f56112abc9fc3ca7322a0e768cadf4b3aef097ac529d407d43a9d9667691957\"" May 14 00:01:16.348642 containerd[1514]: time="2025-05-14T00:01:16.348611267Z" level=info msg="Ensure that sandbox 
5f56112abc9fc3ca7322a0e768cadf4b3aef097ac529d407d43a9d9667691957 in task-service has been cleanup successfully" May 14 00:01:16.349000 containerd[1514]: time="2025-05-14T00:01:16.348896043Z" level=info msg="TearDown network for sandbox \"5f56112abc9fc3ca7322a0e768cadf4b3aef097ac529d407d43a9d9667691957\" successfully" May 14 00:01:16.349000 containerd[1514]: time="2025-05-14T00:01:16.348922206Z" level=info msg="StopPodSandbox for \"5f56112abc9fc3ca7322a0e768cadf4b3aef097ac529d407d43a9d9667691957\" returns successfully" May 14 00:01:16.349800 containerd[1514]: time="2025-05-14T00:01:16.349765611Z" level=info msg="StopPodSandbox for \"3c87715a71350bd4a01f089067c08707893b6abc6c90a878d1ced434f2e7f0e1\"" May 14 00:01:16.349891 containerd[1514]: time="2025-05-14T00:01:16.349874532Z" level=info msg="TearDown network for sandbox \"3c87715a71350bd4a01f089067c08707893b6abc6c90a878d1ced434f2e7f0e1\" successfully" May 14 00:01:16.349925 containerd[1514]: time="2025-05-14T00:01:16.349888820Z" level=info msg="StopPodSandbox for \"3c87715a71350bd4a01f089067c08707893b6abc6c90a878d1ced434f2e7f0e1\" returns successfully" May 14 00:01:16.350271 containerd[1514]: time="2025-05-14T00:01:16.350250732Z" level=info msg="StopPodSandbox for \"fded8c2ab41ef0ba621ee0a8f083bbb59adf971e6c7a0566ae097d148124b88d\"" May 14 00:01:16.350351 containerd[1514]: time="2025-05-14T00:01:16.350333019Z" level=info msg="TearDown network for sandbox \"fded8c2ab41ef0ba621ee0a8f083bbb59adf971e6c7a0566ae097d148124b88d\" successfully" May 14 00:01:16.350351 containerd[1514]: time="2025-05-14T00:01:16.350345052Z" level=info msg="StopPodSandbox for \"fded8c2ab41ef0ba621ee0a8f083bbb59adf971e6c7a0566ae097d148124b88d\" returns successfully" May 14 00:01:16.351060 containerd[1514]: time="2025-05-14T00:01:16.351037773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54f4f89fbf-94z7j,Uid:2df24035-39a4-4f27-b78b-89f954595966,Namespace:calico-system,Attempt:3,}" May 14 00:01:16.351287 
kubelet[2707]: I0514 00:01:16.351272 2707 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c631a2707d847700b09bca982553cb1fefe477a398db7f976618c022fe024f2" May 14 00:01:16.352076 containerd[1514]: time="2025-05-14T00:01:16.351721786Z" level=info msg="StopPodSandbox for \"9c631a2707d847700b09bca982553cb1fefe477a398db7f976618c022fe024f2\"" May 14 00:01:16.352076 containerd[1514]: time="2025-05-14T00:01:16.351943625Z" level=info msg="Ensure that sandbox 9c631a2707d847700b09bca982553cb1fefe477a398db7f976618c022fe024f2 in task-service has been cleanup successfully" May 14 00:01:16.352262 containerd[1514]: time="2025-05-14T00:01:16.352218971Z" level=info msg="TearDown network for sandbox \"9c631a2707d847700b09bca982553cb1fefe477a398db7f976618c022fe024f2\" successfully" May 14 00:01:16.352337 containerd[1514]: time="2025-05-14T00:01:16.352325466Z" level=info msg="StopPodSandbox for \"9c631a2707d847700b09bca982553cb1fefe477a398db7f976618c022fe024f2\" returns successfully" May 14 00:01:16.352705 containerd[1514]: time="2025-05-14T00:01:16.352629932Z" level=info msg="StopPodSandbox for \"9b7b927985a68231fa7f967eb41bfb07ba34b0eebc0c0e8ac2fed750c334af33\"" May 14 00:01:16.353071 containerd[1514]: time="2025-05-14T00:01:16.353049691Z" level=info msg="TearDown network for sandbox \"9b7b927985a68231fa7f967eb41bfb07ba34b0eebc0c0e8ac2fed750c334af33\" successfully" May 14 00:01:16.353071 containerd[1514]: time="2025-05-14T00:01:16.353067526Z" level=info msg="StopPodSandbox for \"9b7b927985a68231fa7f967eb41bfb07ba34b0eebc0c0e8ac2fed750c334af33\" returns successfully" May 14 00:01:16.353250 kubelet[2707]: I0514 00:01:16.353226 2707 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="537e48c11bd0c7df0af2d565c33274a5d1005847da8bc6c2a9ca15755d31570d" May 14 00:01:16.353404 containerd[1514]: time="2025-05-14T00:01:16.353374727Z" level=info msg="StopPodSandbox for 
\"480ac1495213aa3d0a5a9da72d46e52d8a39d1b487d1435d2ee30cc88cc32f5b\"" May 14 00:01:16.353493 containerd[1514]: time="2025-05-14T00:01:16.353473747Z" level=info msg="TearDown network for sandbox \"480ac1495213aa3d0a5a9da72d46e52d8a39d1b487d1435d2ee30cc88cc32f5b\" successfully" May 14 00:01:16.353528 containerd[1514]: time="2025-05-14T00:01:16.353490592Z" level=info msg="StopPodSandbox for \"480ac1495213aa3d0a5a9da72d46e52d8a39d1b487d1435d2ee30cc88cc32f5b\" returns successfully" May 14 00:01:16.353928 containerd[1514]: time="2025-05-14T00:01:16.353885289Z" level=info msg="StopPodSandbox for \"537e48c11bd0c7df0af2d565c33274a5d1005847da8bc6c2a9ca15755d31570d\"" May 14 00:01:16.354125 containerd[1514]: time="2025-05-14T00:01:16.354100856Z" level=info msg="Ensure that sandbox 537e48c11bd0c7df0af2d565c33274a5d1005847da8bc6c2a9ca15755d31570d in task-service has been cleanup successfully" May 14 00:01:16.354350 containerd[1514]: time="2025-05-14T00:01:16.354331071Z" level=info msg="StopPodSandbox for \"3aa8ff18c3cdd64b9e0733f41c1104c6cff817e7c3439ca1faa6927859f06a4e\"" May 14 00:01:16.354493 containerd[1514]: time="2025-05-14T00:01:16.354479010Z" level=info msg="TearDown network for sandbox \"3aa8ff18c3cdd64b9e0733f41c1104c6cff817e7c3439ca1faa6927859f06a4e\" successfully" May 14 00:01:16.354600 containerd[1514]: time="2025-05-14T00:01:16.354544783Z" level=info msg="StopPodSandbox for \"3aa8ff18c3cdd64b9e0733f41c1104c6cff817e7c3439ca1faa6927859f06a4e\" returns successfully" May 14 00:01:16.354600 containerd[1514]: time="2025-05-14T00:01:16.354312753Z" level=info msg="TearDown network for sandbox \"537e48c11bd0c7df0af2d565c33274a5d1005847da8bc6c2a9ca15755d31570d\" successfully" May 14 00:01:16.354600 containerd[1514]: time="2025-05-14T00:01:16.354582990Z" level=info msg="StopPodSandbox for \"537e48c11bd0c7df0af2d565c33274a5d1005847da8bc6c2a9ca15755d31570d\" returns successfully" May 14 00:01:16.354970 containerd[1514]: time="2025-05-14T00:01:16.354950924Z" level=info 
msg="StopPodSandbox for \"b4380fe2399a0ffdd3b39405d8f44d44bd1b9a4ee632f479c3958de4626fb705\"" May 14 00:01:16.355005 kubelet[2707]: E0514 00:01:16.354964 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:01:16.355044 containerd[1514]: time="2025-05-14T00:01:16.355025294Z" level=info msg="TearDown network for sandbox \"b4380fe2399a0ffdd3b39405d8f44d44bd1b9a4ee632f479c3958de4626fb705\" successfully" May 14 00:01:16.355044 containerd[1514]: time="2025-05-14T00:01:16.355034002Z" level=info msg="StopPodSandbox for \"b4380fe2399a0ffdd3b39405d8f44d44bd1b9a4ee632f479c3958de4626fb705\" returns successfully" May 14 00:01:16.355241 kubelet[2707]: I0514 00:01:16.355225 2707 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ed749f47db0245d2e0a4f7af5389ddc9dda8bc8f2c01a832d32e4e23b2740d1" May 14 00:01:16.355659 containerd[1514]: time="2025-05-14T00:01:16.355633164Z" level=info msg="StopPodSandbox for \"7ed749f47db0245d2e0a4f7af5389ddc9dda8bc8f2c01a832d32e4e23b2740d1\"" May 14 00:01:16.355760 containerd[1514]: time="2025-05-14T00:01:16.355717123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-l7hft,Uid:35de77c4-421f-48ed-ab25-a4fe1879067a,Namespace:kube-system,Attempt:4,}" May 14 00:01:16.355850 containerd[1514]: time="2025-05-14T00:01:16.355828939Z" level=info msg="Ensure that sandbox 7ed749f47db0245d2e0a4f7af5389ddc9dda8bc8f2c01a832d32e4e23b2740d1 in task-service has been cleanup successfully" May 14 00:01:16.356069 containerd[1514]: time="2025-05-14T00:01:16.356044554Z" level=info msg="TearDown network for sandbox \"7ed749f47db0245d2e0a4f7af5389ddc9dda8bc8f2c01a832d32e4e23b2740d1\" successfully" May 14 00:01:16.356069 containerd[1514]: time="2025-05-14T00:01:16.356066209Z" level=info msg="StopPodSandbox for \"7ed749f47db0245d2e0a4f7af5389ddc9dda8bc8f2c01a832d32e4e23b2740d1\" 
returns successfully" May 14 00:01:16.356138 containerd[1514]: time="2025-05-14T00:01:16.355638754Z" level=info msg="StopPodSandbox for \"9d75d1c4e109dd828996b475a7d4dc5d2cc445035adf6750cc05772548793cfd\"" May 14 00:01:16.356566 containerd[1514]: time="2025-05-14T00:01:16.356542031Z" level=info msg="StopPodSandbox for \"5ff8c9fd97c6518fdd08652e3ba460c6027a4eac04de742ce360c823f86d6834\"" May 14 00:01:16.356622 containerd[1514]: time="2025-05-14T00:01:16.356614347Z" level=info msg="TearDown network for sandbox \"5ff8c9fd97c6518fdd08652e3ba460c6027a4eac04de742ce360c823f86d6834\" successfully" May 14 00:01:16.356649 containerd[1514]: time="2025-05-14T00:01:16.356623155Z" level=info msg="StopPodSandbox for \"5ff8c9fd97c6518fdd08652e3ba460c6027a4eac04de742ce360c823f86d6834\" returns successfully" May 14 00:01:16.356803 containerd[1514]: time="2025-05-14T00:01:16.356775753Z" level=info msg="TearDown network for sandbox \"9d75d1c4e109dd828996b475a7d4dc5d2cc445035adf6750cc05772548793cfd\" successfully" May 14 00:01:16.356803 containerd[1514]: time="2025-05-14T00:01:16.356790774Z" level=info msg="StopPodSandbox for \"9d75d1c4e109dd828996b475a7d4dc5d2cc445035adf6750cc05772548793cfd\" returns successfully" May 14 00:01:16.360654 containerd[1514]: time="2025-05-14T00:01:16.359986825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-897df95dc-k8qvz,Uid:2e76bb18-a688-4a74-8c25-969dbaf341ad,Namespace:calico-apiserver,Attempt:3,}" May 14 00:01:16.360654 containerd[1514]: time="2025-05-14T00:01:16.360234536Z" level=info msg="StopPodSandbox for \"5153e471c8ee61334cdd625b198c80fe354b8f46b0ecd03d8a954ed630e292dd\"" May 14 00:01:16.360654 containerd[1514]: time="2025-05-14T00:01:16.360322073Z" level=info msg="TearDown network for sandbox \"5153e471c8ee61334cdd625b198c80fe354b8f46b0ecd03d8a954ed630e292dd\" successfully" May 14 00:01:16.360654 containerd[1514]: time="2025-05-14T00:01:16.360333716Z" level=info msg="StopPodSandbox for 
\"5153e471c8ee61334cdd625b198c80fe354b8f46b0ecd03d8a954ed630e292dd\" returns successfully" May 14 00:01:16.361280 containerd[1514]: time="2025-05-14T00:01:16.361243475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-897df95dc-b4gq4,Uid:1e7c405b-f2c2-4bf1-a1f9-17ef323d3d37,Namespace:calico-apiserver,Attempt:3,}" May 14 00:01:16.362329 kubelet[2707]: I0514 00:01:16.361754 2707 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="350207304c817727e4a46a415bd97b5570f1190c746ee0b5a03aeaeb3ba5b6e1" May 14 00:01:16.362471 containerd[1514]: time="2025-05-14T00:01:16.362447830Z" level=info msg="StopPodSandbox for \"350207304c817727e4a46a415bd97b5570f1190c746ee0b5a03aeaeb3ba5b6e1\"" May 14 00:01:16.362643 containerd[1514]: time="2025-05-14T00:01:16.362612182Z" level=info msg="Ensure that sandbox 350207304c817727e4a46a415bd97b5570f1190c746ee0b5a03aeaeb3ba5b6e1 in task-service has been cleanup successfully" May 14 00:01:16.362875 containerd[1514]: time="2025-05-14T00:01:16.362854893Z" level=info msg="TearDown network for sandbox \"350207304c817727e4a46a415bd97b5570f1190c746ee0b5a03aeaeb3ba5b6e1\" successfully" May 14 00:01:16.362875 containerd[1514]: time="2025-05-14T00:01:16.362872198Z" level=info msg="StopPodSandbox for \"350207304c817727e4a46a415bd97b5570f1190c746ee0b5a03aeaeb3ba5b6e1\" returns successfully" May 14 00:01:16.363205 containerd[1514]: time="2025-05-14T00:01:16.363182686Z" level=info msg="StopPodSandbox for \"95b7549369ae5eb8d47ce2e4bb960e50049a0147b94d0bab48064d7dba2418a3\"" May 14 00:01:16.363278 containerd[1514]: time="2025-05-14T00:01:16.363261094Z" level=info msg="TearDown network for sandbox \"95b7549369ae5eb8d47ce2e4bb960e50049a0147b94d0bab48064d7dba2418a3\" successfully" May 14 00:01:16.363278 containerd[1514]: time="2025-05-14T00:01:16.363275323Z" level=info msg="StopPodSandbox for \"95b7549369ae5eb8d47ce2e4bb960e50049a0147b94d0bab48064d7dba2418a3\" returns successfully" May 14 00:01:16.364288 
containerd[1514]: time="2025-05-14T00:01:16.363622976Z" level=info msg="StopPodSandbox for \"6eef35ed3243a5fb634cedb659eea2e85da49d8b06e02941ec9a0b24d1b66d9e\"" May 14 00:01:16.364288 containerd[1514]: time="2025-05-14T00:01:16.363704160Z" level=info msg="TearDown network for sandbox \"6eef35ed3243a5fb634cedb659eea2e85da49d8b06e02941ec9a0b24d1b66d9e\" successfully" May 14 00:01:16.364288 containerd[1514]: time="2025-05-14T00:01:16.363712877Z" level=info msg="StopPodSandbox for \"6eef35ed3243a5fb634cedb659eea2e85da49d8b06e02941ec9a0b24d1b66d9e\" returns successfully" May 14 00:01:16.364288 containerd[1514]: time="2025-05-14T00:01:16.364012123Z" level=info msg="StopPodSandbox for \"5c690af5ccbc924073b6ba5b55538367b893aa0ed7d5016fd2ea824053f8d1f4\"" May 14 00:01:16.364288 containerd[1514]: time="2025-05-14T00:01:16.364075962Z" level=info msg="TearDown network for sandbox \"5c690af5ccbc924073b6ba5b55538367b893aa0ed7d5016fd2ea824053f8d1f4\" successfully" May 14 00:01:16.364288 containerd[1514]: time="2025-05-14T00:01:16.364083637Z" level=info msg="StopPodSandbox for \"5c690af5ccbc924073b6ba5b55538367b893aa0ed7d5016fd2ea824053f8d1f4\" returns successfully" May 14 00:01:16.364459 kubelet[2707]: E0514 00:01:16.364220 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:01:16.365505 containerd[1514]: time="2025-05-14T00:01:16.365472254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-s9vl9,Uid:55c8484c-9be6-4d0c-a1af-577996ceaea2,Namespace:kube-system,Attempt:4,}" May 14 00:01:16.479259 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9c631a2707d847700b09bca982553cb1fefe477a398db7f976618c022fe024f2-shm.mount: Deactivated successfully. May 14 00:01:16.479396 systemd[1]: run-netns-cni\x2d2875d0a1\x2da7f3\x2daf17\x2dab88\x2dc6d75896c91a.mount: Deactivated successfully. 
May 14 00:01:16.479475 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7ed749f47db0245d2e0a4f7af5389ddc9dda8bc8f2c01a832d32e4e23b2740d1-shm.mount: Deactivated successfully. May 14 00:01:16.479562 systemd[1]: run-netns-cni\x2da9c863be\x2d2fc7\x2dc963\x2d1e68\x2dd6e93d590dd5.mount: Deactivated successfully. May 14 00:01:16.479641 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bc2a454568bf86469e931f59c54e7abd0fa1f150d50a56a12771400b2ec051dc-shm.mount: Deactivated successfully. May 14 00:01:17.088764 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2682469666.mount: Deactivated successfully. May 14 00:01:19.688815 kubelet[2707]: I0514 00:01:19.688752 2707 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 00:01:19.689418 kubelet[2707]: E0514 00:01:19.689404 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:01:19.757161 systemd[1]: Started sshd@11-10.0.0.99:22-10.0.0.1:44280.service - OpenSSH per-connection server daemon (10.0.0.1:44280). May 14 00:01:20.369722 kubelet[2707]: E0514 00:01:20.369668 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:01:20.705065 sshd[4359]: Accepted publickey for core from 10.0.0.1 port 44280 ssh2: RSA SHA256:2Vys6akM3bwlRlykLnopippME/f1tLQVgpTw56u59EA May 14 00:01:20.707472 sshd-session[4359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:01:20.712483 systemd-logind[1492]: New session 12 of user core. May 14 00:01:20.719822 systemd[1]: Started session-12.scope - Session 12 of User core. 
May 14 00:01:21.816396 sshd[4361]: Connection closed by 10.0.0.1 port 44280 May 14 00:01:21.817928 sshd-session[4359]: pam_unix(sshd:session): session closed for user core May 14 00:01:21.831161 systemd[1]: sshd@11-10.0.0.99:22-10.0.0.1:44280.service: Deactivated successfully. May 14 00:01:21.833258 systemd[1]: session-12.scope: Deactivated successfully. May 14 00:01:21.834191 systemd-logind[1492]: Session 12 logged out. Waiting for processes to exit. May 14 00:01:21.845143 systemd[1]: Started sshd@12-10.0.0.99:22-10.0.0.1:44286.service - OpenSSH per-connection server daemon (10.0.0.1:44286). May 14 00:01:21.847090 systemd-logind[1492]: Removed session 12. May 14 00:01:21.879871 sshd[4377]: Accepted publickey for core from 10.0.0.1 port 44286 ssh2: RSA SHA256:2Vys6akM3bwlRlykLnopippME/f1tLQVgpTw56u59EA May 14 00:01:21.882251 sshd-session[4377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:01:21.889398 systemd-logind[1492]: New session 13 of user core. May 14 00:01:21.897881 systemd[1]: Started session-13.scope - Session 13 of User core. May 14 00:01:22.241968 sshd[4380]: Connection closed by 10.0.0.1 port 44286 May 14 00:01:22.242424 sshd-session[4377]: pam_unix(sshd:session): session closed for user core May 14 00:01:22.255852 systemd[1]: sshd@12-10.0.0.99:22-10.0.0.1:44286.service: Deactivated successfully. May 14 00:01:22.258813 systemd[1]: session-13.scope: Deactivated successfully. May 14 00:01:22.260878 systemd-logind[1492]: Session 13 logged out. Waiting for processes to exit. May 14 00:01:22.267290 systemd[1]: Started sshd@13-10.0.0.99:22-10.0.0.1:44300.service - OpenSSH per-connection server daemon (10.0.0.1:44300). May 14 00:01:22.268964 systemd-logind[1492]: Removed session 13. 
May 14 00:01:22.355496 sshd[4390]: Accepted publickey for core from 10.0.0.1 port 44300 ssh2: RSA SHA256:2Vys6akM3bwlRlykLnopippME/f1tLQVgpTw56u59EA May 14 00:01:22.357414 sshd-session[4390]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:01:22.362715 systemd-logind[1492]: New session 14 of user core. May 14 00:01:22.368887 systemd[1]: Started session-14.scope - Session 14 of User core. May 14 00:01:22.848737 sshd[4393]: Connection closed by 10.0.0.1 port 44300 May 14 00:01:22.850887 sshd-session[4390]: pam_unix(sshd:session): session closed for user core May 14 00:01:22.859197 systemd[1]: sshd@13-10.0.0.99:22-10.0.0.1:44300.service: Deactivated successfully. May 14 00:01:22.863740 systemd[1]: session-14.scope: Deactivated successfully. May 14 00:01:22.869602 systemd-logind[1492]: Session 14 logged out. Waiting for processes to exit. May 14 00:01:22.873441 systemd-logind[1492]: Removed session 14. May 14 00:01:23.516461 containerd[1514]: time="2025-05-14T00:01:23.514209067Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:23.569862 containerd[1514]: time="2025-05-14T00:01:23.566631642Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" May 14 00:01:23.665373 containerd[1514]: time="2025-05-14T00:01:23.656117726Z" level=error msg="Failed to destroy network for sandbox \"a50e66f10e70d7e3b091714c2f9482e715bdd3eab0f8747ec9fb5bf1ceca9a22\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:23.663925 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a50e66f10e70d7e3b091714c2f9482e715bdd3eab0f8747ec9fb5bf1ceca9a22-shm.mount: Deactivated successfully. 
May 14 00:01:23.666398 containerd[1514]: time="2025-05-14T00:01:23.666231210Z" level=error msg="encountered an error cleaning up failed sandbox \"a50e66f10e70d7e3b091714c2f9482e715bdd3eab0f8747ec9fb5bf1ceca9a22\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:23.666398 containerd[1514]: time="2025-05-14T00:01:23.666339608Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s8nw4,Uid:fff7752d-bd43-4c8e-a187-4d071bd5cd0b,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"a50e66f10e70d7e3b091714c2f9482e715bdd3eab0f8747ec9fb5bf1ceca9a22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:23.671419 kubelet[2707]: E0514 00:01:23.669609 2707 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a50e66f10e70d7e3b091714c2f9482e715bdd3eab0f8747ec9fb5bf1ceca9a22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:01:23.672437 kubelet[2707]: E0514 00:01:23.671974 2707 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a50e66f10e70d7e3b091714c2f9482e715bdd3eab0f8747ec9fb5bf1ceca9a22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s8nw4" May 14 00:01:23.672437 kubelet[2707]: E0514 00:01:23.672021 2707 
kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a50e66f10e70d7e3b091714c2f9482e715bdd3eab0f8747ec9fb5bf1ceca9a22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s8nw4"
May 14 00:01:23.672437 kubelet[2707]: E0514 00:01:23.672086 2707 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-s8nw4_calico-system(fff7752d-bd43-4c8e-a187-4d071bd5cd0b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-s8nw4_calico-system(fff7752d-bd43-4c8e-a187-4d071bd5cd0b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a50e66f10e70d7e3b091714c2f9482e715bdd3eab0f8747ec9fb5bf1ceca9a22\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-s8nw4" podUID="fff7752d-bd43-4c8e-a187-4d071bd5cd0b"
May 14 00:01:23.706589 containerd[1514]: time="2025-05-14T00:01:23.705622036Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:01:23.725368 containerd[1514]: time="2025-05-14T00:01:23.725318457Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:01:23.726520 containerd[1514]: time="2025-05-14T00:01:23.726411172Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 13.438201919s"
May 14 00:01:23.726593 containerd[1514]: time="2025-05-14T00:01:23.726526585Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\""
May 14 00:01:23.764490 containerd[1514]: time="2025-05-14T00:01:23.764408701Z" level=info msg="CreateContainer within sandbox \"bc4537e4ad0d97cc297accc4acc5b39ae8043689d3e15efd0ef76fc189c36e51\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
May 14 00:01:23.865789 containerd[1514]: time="2025-05-14T00:01:23.865322279Z" level=error msg="Failed to destroy network for sandbox \"56eda22832f68724cb3dbe0858a58786d27cf26c5f9e8dafe8d887c586839ed7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 00:01:23.874106 containerd[1514]: time="2025-05-14T00:01:23.874038655Z" level=error msg="encountered an error cleaning up failed sandbox \"56eda22832f68724cb3dbe0858a58786d27cf26c5f9e8dafe8d887c586839ed7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 00:01:23.875108 containerd[1514]: time="2025-05-14T00:01:23.875065649Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54f4f89fbf-94z7j,Uid:2df24035-39a4-4f27-b78b-89f954595966,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"56eda22832f68724cb3dbe0858a58786d27cf26c5f9e8dafe8d887c586839ed7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 00:01:23.878712 kubelet[2707]: E0514 00:01:23.878112 2707 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56eda22832f68724cb3dbe0858a58786d27cf26c5f9e8dafe8d887c586839ed7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 00:01:23.878712 kubelet[2707]: E0514 00:01:23.878219 2707 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56eda22832f68724cb3dbe0858a58786d27cf26c5f9e8dafe8d887c586839ed7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54f4f89fbf-94z7j"
May 14 00:01:23.878712 kubelet[2707]: E0514 00:01:23.878247 2707 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56eda22832f68724cb3dbe0858a58786d27cf26c5f9e8dafe8d887c586839ed7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54f4f89fbf-94z7j"
May 14 00:01:23.878959 kubelet[2707]: E0514 00:01:23.878335 2707 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-54f4f89fbf-94z7j_calico-system(2df24035-39a4-4f27-b78b-89f954595966)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-54f4f89fbf-94z7j_calico-system(2df24035-39a4-4f27-b78b-89f954595966)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"56eda22832f68724cb3dbe0858a58786d27cf26c5f9e8dafe8d887c586839ed7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-54f4f89fbf-94z7j" podUID="2df24035-39a4-4f27-b78b-89f954595966"
May 14 00:01:23.888059 containerd[1514]: time="2025-05-14T00:01:23.887996513Z" level=error msg="Failed to destroy network for sandbox \"e2092947f49d25df9acfe4e475034b82c94ff3818625b7411df3f93621b96e4f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 00:01:23.888923 containerd[1514]: time="2025-05-14T00:01:23.888894498Z" level=error msg="encountered an error cleaning up failed sandbox \"e2092947f49d25df9acfe4e475034b82c94ff3818625b7411df3f93621b96e4f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 00:01:23.889118 containerd[1514]: time="2025-05-14T00:01:23.889088708Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-897df95dc-b4gq4,Uid:1e7c405b-f2c2-4bf1-a1f9-17ef323d3d37,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"e2092947f49d25df9acfe4e475034b82c94ff3818625b7411df3f93621b96e4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 00:01:23.889713 kubelet[2707]: E0514 00:01:23.889657 2707 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2092947f49d25df9acfe4e475034b82c94ff3818625b7411df3f93621b96e4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 00:01:23.889923 kubelet[2707]: E0514 00:01:23.889897 2707 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2092947f49d25df9acfe4e475034b82c94ff3818625b7411df3f93621b96e4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-897df95dc-b4gq4"
May 14 00:01:23.890040 kubelet[2707]: E0514 00:01:23.890021 2707 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2092947f49d25df9acfe4e475034b82c94ff3818625b7411df3f93621b96e4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-897df95dc-b4gq4"
May 14 00:01:23.890315 kubelet[2707]: E0514 00:01:23.890206 2707 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-897df95dc-b4gq4_calico-apiserver(1e7c405b-f2c2-4bf1-a1f9-17ef323d3d37)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-897df95dc-b4gq4_calico-apiserver(1e7c405b-f2c2-4bf1-a1f9-17ef323d3d37)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e2092947f49d25df9acfe4e475034b82c94ff3818625b7411df3f93621b96e4f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-897df95dc-b4gq4" podUID="1e7c405b-f2c2-4bf1-a1f9-17ef323d3d37"
May 14 00:01:23.912740 containerd[1514]: time="2025-05-14T00:01:23.912520655Z" level=error msg="Failed to destroy network for sandbox \"6ccc14825bdaece09e2898a998b5b8f834aae17099141aec253fccec8d52283d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 00:01:23.924065 containerd[1514]: time="2025-05-14T00:01:23.923601964Z" level=error msg="encountered an error cleaning up failed sandbox \"6ccc14825bdaece09e2898a998b5b8f834aae17099141aec253fccec8d52283d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 00:01:23.924065 containerd[1514]: time="2025-05-14T00:01:23.923751434Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-l7hft,Uid:35de77c4-421f-48ed-ab25-a4fe1879067a,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"6ccc14825bdaece09e2898a998b5b8f834aae17099141aec253fccec8d52283d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 00:01:23.926055 kubelet[2707]: E0514 00:01:23.924357 2707 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ccc14825bdaece09e2898a998b5b8f834aae17099141aec253fccec8d52283d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 00:01:23.926055 kubelet[2707]: E0514 00:01:23.924610 2707 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ccc14825bdaece09e2898a998b5b8f834aae17099141aec253fccec8d52283d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-l7hft"
May 14 00:01:23.926055 kubelet[2707]: E0514 00:01:23.924641 2707 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ccc14825bdaece09e2898a998b5b8f834aae17099141aec253fccec8d52283d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-l7hft"
May 14 00:01:23.926286 kubelet[2707]: E0514 00:01:23.924752 2707 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-l7hft_kube-system(35de77c4-421f-48ed-ab25-a4fe1879067a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-l7hft_kube-system(35de77c4-421f-48ed-ab25-a4fe1879067a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6ccc14825bdaece09e2898a998b5b8f834aae17099141aec253fccec8d52283d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-l7hft" podUID="35de77c4-421f-48ed-ab25-a4fe1879067a"
May 14 00:01:23.928863 containerd[1514]: time="2025-05-14T00:01:23.928777574Z" level=error msg="Failed to destroy network for sandbox \"204aca7e8811caa25cbc0e4ca9484f9e06527871260baa5fc5a231f03375f72a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 00:01:23.929453 containerd[1514]: time="2025-05-14T00:01:23.929395797Z" level=error msg="encountered an error cleaning up failed sandbox \"204aca7e8811caa25cbc0e4ca9484f9e06527871260baa5fc5a231f03375f72a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 00:01:23.929527 containerd[1514]: time="2025-05-14T00:01:23.929478232Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-s9vl9,Uid:55c8484c-9be6-4d0c-a1af-577996ceaea2,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"204aca7e8811caa25cbc0e4ca9484f9e06527871260baa5fc5a231f03375f72a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 00:01:23.929895 kubelet[2707]: E0514 00:01:23.929820 2707 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"204aca7e8811caa25cbc0e4ca9484f9e06527871260baa5fc5a231f03375f72a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 00:01:23.929964 kubelet[2707]: E0514 00:01:23.929896 2707 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"204aca7e8811caa25cbc0e4ca9484f9e06527871260baa5fc5a231f03375f72a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-s9vl9"
May 14 00:01:23.929964 kubelet[2707]: E0514 00:01:23.929934 2707 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"204aca7e8811caa25cbc0e4ca9484f9e06527871260baa5fc5a231f03375f72a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-s9vl9"
May 14 00:01:23.930046 kubelet[2707]: E0514 00:01:23.929988 2707 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-s9vl9_kube-system(55c8484c-9be6-4d0c-a1af-577996ceaea2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-s9vl9_kube-system(55c8484c-9be6-4d0c-a1af-577996ceaea2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"204aca7e8811caa25cbc0e4ca9484f9e06527871260baa5fc5a231f03375f72a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-s9vl9" podUID="55c8484c-9be6-4d0c-a1af-577996ceaea2"
May 14 00:01:23.948745 containerd[1514]: time="2025-05-14T00:01:23.948608495Z" level=error msg="Failed to destroy network for sandbox \"55ae23add9ba91189a4e1a09b1d3bbff71ced02668bd6504d560d8b5ba83ee60\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 00:01:23.952417 containerd[1514]: time="2025-05-14T00:01:23.952314833Z" level=error msg="encountered an error cleaning up failed sandbox \"55ae23add9ba91189a4e1a09b1d3bbff71ced02668bd6504d560d8b5ba83ee60\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 00:01:23.952580 containerd[1514]: time="2025-05-14T00:01:23.952433682Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-897df95dc-k8qvz,Uid:2e76bb18-a688-4a74-8c25-969dbaf341ad,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"55ae23add9ba91189a4e1a09b1d3bbff71ced02668bd6504d560d8b5ba83ee60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 00:01:23.952855 kubelet[2707]: E0514 00:01:23.952780 2707 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55ae23add9ba91189a4e1a09b1d3bbff71ced02668bd6504d560d8b5ba83ee60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 14 00:01:23.952928 kubelet[2707]: E0514 00:01:23.952872 2707 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55ae23add9ba91189a4e1a09b1d3bbff71ced02668bd6504d560d8b5ba83ee60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-897df95dc-k8qvz"
May 14 00:01:23.952928 kubelet[2707]: E0514 00:01:23.952901 2707 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55ae23add9ba91189a4e1a09b1d3bbff71ced02668bd6504d560d8b5ba83ee60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-897df95dc-k8qvz"
May 14 00:01:23.952994 kubelet[2707]: E0514 00:01:23.952961 2707 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-897df95dc-k8qvz_calico-apiserver(2e76bb18-a688-4a74-8c25-969dbaf341ad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-897df95dc-k8qvz_calico-apiserver(2e76bb18-a688-4a74-8c25-969dbaf341ad)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"55ae23add9ba91189a4e1a09b1d3bbff71ced02668bd6504d560d8b5ba83ee60\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-897df95dc-k8qvz" podUID="2e76bb18-a688-4a74-8c25-969dbaf341ad"
May 14 00:01:23.998450 containerd[1514]: time="2025-05-14T00:01:23.998376535Z" level=info msg="CreateContainer within sandbox \"bc4537e4ad0d97cc297accc4acc5b39ae8043689d3e15efd0ef76fc189c36e51\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"0fc87e7c9fa8648042354c84330894971929298fbdf6ff7e9e990539e55c1235\""
May 14 00:01:23.999888 containerd[1514]: time="2025-05-14T00:01:23.998958855Z" level=info msg="StartContainer for \"0fc87e7c9fa8648042354c84330894971929298fbdf6ff7e9e990539e55c1235\""
May 14 00:01:24.158709 systemd[1]: Started cri-containerd-0fc87e7c9fa8648042354c84330894971929298fbdf6ff7e9e990539e55c1235.scope - libcontainer container 0fc87e7c9fa8648042354c84330894971929298fbdf6ff7e9e990539e55c1235.
May 14 00:01:24.182165 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-55ae23add9ba91189a4e1a09b1d3bbff71ced02668bd6504d560d8b5ba83ee60-shm.mount: Deactivated successfully.
May 14 00:01:24.182552 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-204aca7e8811caa25cbc0e4ca9484f9e06527871260baa5fc5a231f03375f72a-shm.mount: Deactivated successfully.
May 14 00:01:24.182786 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6ccc14825bdaece09e2898a998b5b8f834aae17099141aec253fccec8d52283d-shm.mount: Deactivated successfully.
May 14 00:01:24.183052 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-56eda22832f68724cb3dbe0858a58786d27cf26c5f9e8dafe8d887c586839ed7-shm.mount: Deactivated successfully.
May 14 00:01:24.183251 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e2092947f49d25df9acfe4e475034b82c94ff3818625b7411df3f93621b96e4f-shm.mount: Deactivated successfully.
May 14 00:01:24.318798 containerd[1514]: time="2025-05-14T00:01:24.318702700Z" level=info msg="StartContainer for \"0fc87e7c9fa8648042354c84330894971929298fbdf6ff7e9e990539e55c1235\" returns successfully"
May 14 00:01:24.417790 kubelet[2707]: E0514 00:01:24.416551 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:01:24.433341 kubelet[2707]: I0514 00:01:24.433303 2707 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ccc14825bdaece09e2898a998b5b8f834aae17099141aec253fccec8d52283d"
May 14 00:01:24.434333 containerd[1514]: time="2025-05-14T00:01:24.434278019Z" level=info msg="StopPodSandbox for \"6ccc14825bdaece09e2898a998b5b8f834aae17099141aec253fccec8d52283d\""
May 14 00:01:24.440257 kubelet[2707]: I0514 00:01:24.440219 2707 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="204aca7e8811caa25cbc0e4ca9484f9e06527871260baa5fc5a231f03375f72a"
May 14 00:01:24.447764 containerd[1514]: time="2025-05-14T00:01:24.441283016Z" level=info msg="StopPodSandbox for \"204aca7e8811caa25cbc0e4ca9484f9e06527871260baa5fc5a231f03375f72a\""
May 14 00:01:24.447764 containerd[1514]: time="2025-05-14T00:01:24.441620383Z" level=info msg="Ensure that sandbox 204aca7e8811caa25cbc0e4ca9484f9e06527871260baa5fc5a231f03375f72a in task-service has been cleanup successfully"
May 14 00:01:24.447764 containerd[1514]: time="2025-05-14T00:01:24.441964725Z" level=info msg="Ensure that sandbox 6ccc14825bdaece09e2898a998b5b8f834aae17099141aec253fccec8d52283d in task-service has been cleanup successfully"
May 14 00:01:24.458325 systemd[1]: run-netns-cni\x2d9d7f9239\x2d92ff\x2d2bad\x2d5548\x2d256390d1ad65.mount: Deactivated successfully.
May 14 00:01:24.459138 systemd[1]: run-netns-cni\x2d7bca2def\x2d1be4\x2dbcee\x2dcbac\x2daaa550aa19c4.mount: Deactivated successfully.
May 14 00:01:24.465263 containerd[1514]: time="2025-05-14T00:01:24.463266914Z" level=info msg="TearDown network for sandbox \"204aca7e8811caa25cbc0e4ca9484f9e06527871260baa5fc5a231f03375f72a\" successfully"
May 14 00:01:24.465263 containerd[1514]: time="2025-05-14T00:01:24.463314810Z" level=info msg="StopPodSandbox for \"204aca7e8811caa25cbc0e4ca9484f9e06527871260baa5fc5a231f03375f72a\" returns successfully"
May 14 00:01:24.466877 containerd[1514]: time="2025-05-14T00:01:24.466507382Z" level=info msg="TearDown network for sandbox \"6ccc14825bdaece09e2898a998b5b8f834aae17099141aec253fccec8d52283d\" successfully"
May 14 00:01:24.466877 containerd[1514]: time="2025-05-14T00:01:24.466551310Z" level=info msg="StopPodSandbox for \"6ccc14825bdaece09e2898a998b5b8f834aae17099141aec253fccec8d52283d\" returns successfully"
May 14 00:01:24.467789 containerd[1514]: time="2025-05-14T00:01:24.467556568Z" level=info msg="StopPodSandbox for \"9c631a2707d847700b09bca982553cb1fefe477a398db7f976618c022fe024f2\""
May 14 00:01:24.467789 containerd[1514]: time="2025-05-14T00:01:24.467592691Z" level=info msg="StopPodSandbox for \"350207304c817727e4a46a415bd97b5570f1190c746ee0b5a03aeaeb3ba5b6e1\""
May 14 00:01:24.467789 containerd[1514]: time="2025-05-14T00:01:24.467721380Z" level=info msg="TearDown network for sandbox \"350207304c817727e4a46a415bd97b5570f1190c746ee0b5a03aeaeb3ba5b6e1\" successfully"
May 14 00:01:24.467789 containerd[1514]: time="2025-05-14T00:01:24.467736530Z" level=info msg="StopPodSandbox for \"350207304c817727e4a46a415bd97b5570f1190c746ee0b5a03aeaeb3ba5b6e1\" returns successfully"
May 14 00:01:24.472161 kubelet[2707]: I0514 00:01:24.472108 2707 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a50e66f10e70d7e3b091714c2f9482e715bdd3eab0f8747ec9fb5bf1ceca9a22"
May 14 00:01:24.475338 kubelet[2707]: I0514 00:01:24.475253 2707 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56eda22832f68724cb3dbe0858a58786d27cf26c5f9e8dafe8d887c586839ed7"
May 14 00:01:24.488201 kubelet[2707]: I0514 00:01:24.487323 2707 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55ae23add9ba91189a4e1a09b1d3bbff71ced02668bd6504d560d8b5ba83ee60"
May 14 00:01:24.494976 kubelet[2707]: I0514 00:01:24.494465 2707 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2092947f49d25df9acfe4e475034b82c94ff3818625b7411df3f93621b96e4f"
May 14 00:01:24.506729 containerd[1514]: time="2025-05-14T00:01:24.469164136Z" level=info msg="TearDown network for sandbox \"9c631a2707d847700b09bca982553cb1fefe477a398db7f976618c022fe024f2\" successfully"
May 14 00:01:24.506729 containerd[1514]: time="2025-05-14T00:01:24.504021317Z" level=info msg="StopPodSandbox for \"9c631a2707d847700b09bca982553cb1fefe477a398db7f976618c022fe024f2\" returns successfully"
May 14 00:01:24.506729 containerd[1514]: time="2025-05-14T00:01:24.469733350Z" level=info msg="StopPodSandbox for \"95b7549369ae5eb8d47ce2e4bb960e50049a0147b94d0bab48064d7dba2418a3\""
May 14 00:01:24.506729 containerd[1514]: time="2025-05-14T00:01:24.504290909Z" level=info msg="TearDown network for sandbox \"95b7549369ae5eb8d47ce2e4bb960e50049a0147b94d0bab48064d7dba2418a3\" successfully"
May 14 00:01:24.506729 containerd[1514]: time="2025-05-14T00:01:24.504307492Z" level=info msg="StopPodSandbox for \"95b7549369ae5eb8d47ce2e4bb960e50049a0147b94d0bab48064d7dba2418a3\" returns successfully"
May 14 00:01:24.506729 containerd[1514]: time="2025-05-14T00:01:24.472775529Z" level=info msg="StopPodSandbox for \"a50e66f10e70d7e3b091714c2f9482e715bdd3eab0f8747ec9fb5bf1ceca9a22\""
May 14 00:01:24.506729 containerd[1514]: time="2025-05-14T00:01:24.504572885Z" level=info msg="Ensure that sandbox a50e66f10e70d7e3b091714c2f9482e715bdd3eab0f8747ec9fb5bf1ceca9a22 in task-service has been cleanup successfully"
May 14 00:01:24.507886 systemd[1]: run-netns-cni\x2db192caee\x2de061\x2d5434\x2d74f0\x2d92a390ded517.mount: Deactivated successfully.
May 14 00:01:24.509990 containerd[1514]: time="2025-05-14T00:01:24.475987239Z" level=info msg="StopPodSandbox for \"56eda22832f68724cb3dbe0858a58786d27cf26c5f9e8dafe8d887c586839ed7\""
May 14 00:01:24.510289 containerd[1514]: time="2025-05-14T00:01:24.510220406Z" level=info msg="Ensure that sandbox 56eda22832f68724cb3dbe0858a58786d27cf26c5f9e8dafe8d887c586839ed7 in task-service has been cleanup successfully"
May 14 00:01:24.512985 systemd[1]: run-netns-cni\x2dccff2ed9\x2d6c2f\x2d55d1\x2d1d84\x2d6d6110e1ef05.mount: Deactivated successfully.
May 14 00:01:24.516181 containerd[1514]: time="2025-05-14T00:01:24.515938800Z" level=info msg="StopPodSandbox for \"9b7b927985a68231fa7f967eb41bfb07ba34b0eebc0c0e8ac2fed750c334af33\""
May 14 00:01:24.516181 containerd[1514]: time="2025-05-14T00:01:24.516130986Z" level=info msg="TearDown network for sandbox \"9b7b927985a68231fa7f967eb41bfb07ba34b0eebc0c0e8ac2fed750c334af33\" successfully"
May 14 00:01:24.516181 containerd[1514]: time="2025-05-14T00:01:24.516167960Z" level=info msg="StopPodSandbox for \"9b7b927985a68231fa7f967eb41bfb07ba34b0eebc0c0e8ac2fed750c334af33\" returns successfully"
May 14 00:01:24.516442 containerd[1514]: time="2025-05-14T00:01:24.516274163Z" level=info msg="StopPodSandbox for \"6eef35ed3243a5fb634cedb659eea2e85da49d8b06e02941ec9a0b24d1b66d9e\""
May 14 00:01:24.516442 containerd[1514]: time="2025-05-14T00:01:24.490054288Z" level=info msg="StopPodSandbox for \"55ae23add9ba91189a4e1a09b1d3bbff71ced02668bd6504d560d8b5ba83ee60\""
May 14 00:01:24.516800 containerd[1514]: time="2025-05-14T00:01:24.516750550Z" level=info msg="Ensure that sandbox 55ae23add9ba91189a4e1a09b1d3bbff71ced02668bd6504d560d8b5ba83ee60 in task-service has been cleanup successfully"
May 14 00:01:24.519145 containerd[1514]: time="2025-05-14T00:01:24.518048086Z" level=info msg="TearDown network for sandbox \"6eef35ed3243a5fb634cedb659eea2e85da49d8b06e02941ec9a0b24d1b66d9e\" successfully"
May 14 00:01:24.519145 containerd[1514]: time="2025-05-14T00:01:24.518101302Z" level=info msg="StopPodSandbox for \"6eef35ed3243a5fb634cedb659eea2e85da49d8b06e02941ec9a0b24d1b66d9e\" returns successfully"
May 14 00:01:24.519145 containerd[1514]: time="2025-05-14T00:01:24.518779475Z" level=info msg="StopPodSandbox for \"480ac1495213aa3d0a5a9da72d46e52d8a39d1b487d1435d2ee30cc88cc32f5b\""
May 14 00:01:24.519145 containerd[1514]: time="2025-05-14T00:01:24.518952042Z" level=info msg="TearDown network for sandbox \"480ac1495213aa3d0a5a9da72d46e52d8a39d1b487d1435d2ee30cc88cc32f5b\" successfully"
May 14 00:01:24.519145 containerd[1514]: time="2025-05-14T00:01:24.518970308Z" level=info msg="StopPodSandbox for \"480ac1495213aa3d0a5a9da72d46e52d8a39d1b487d1435d2ee30cc88cc32f5b\" returns successfully"
May 14 00:01:24.519145 containerd[1514]: time="2025-05-14T00:01:24.519059056Z" level=info msg="StopPodSandbox for \"5c690af5ccbc924073b6ba5b55538367b893aa0ed7d5016fd2ea824053f8d1f4\""
May 14 00:01:24.519391 containerd[1514]: time="2025-05-14T00:01:24.519206692Z" level=info msg="TearDown network for sandbox \"5c690af5ccbc924073b6ba5b55538367b893aa0ed7d5016fd2ea824053f8d1f4\" successfully"
May 14 00:01:24.519391 containerd[1514]: time="2025-05-14T00:01:24.519222344Z" level=info msg="StopPodSandbox for \"5c690af5ccbc924073b6ba5b55538367b893aa0ed7d5016fd2ea824053f8d1f4\" returns successfully"
May 14 00:01:24.519611 kubelet[2707]: E0514 00:01:24.519582 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:01:24.520430 containerd[1514]: time="2025-05-14T00:01:24.520114516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-s9vl9,Uid:55c8484c-9be6-4d0c-a1af-577996ceaea2,Namespace:kube-system,Attempt:5,}"
May 14 00:01:24.522399 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
May 14 00:01:24.522488 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved.
May 14 00:01:24.522518 containerd[1514]: time="2025-05-14T00:01:24.522201327Z" level=info msg="TearDown network for sandbox \"a50e66f10e70d7e3b091714c2f9482e715bdd3eab0f8747ec9fb5bf1ceca9a22\" successfully"
May 14 00:01:24.522518 containerd[1514]: time="2025-05-14T00:01:24.522246978Z" level=info msg="StopPodSandbox for \"a50e66f10e70d7e3b091714c2f9482e715bdd3eab0f8747ec9fb5bf1ceca9a22\" returns successfully"
May 14 00:01:24.522518 containerd[1514]: time="2025-05-14T00:01:24.522461339Z" level=info msg="StopPodSandbox for \"3aa8ff18c3cdd64b9e0733f41c1104c6cff817e7c3439ca1faa6927859f06a4e\""
May 14 00:01:24.522518 containerd[1514]: time="2025-05-14T00:01:24.522472311Z" level=info msg="TearDown network for sandbox \"56eda22832f68724cb3dbe0858a58786d27cf26c5f9e8dafe8d887c586839ed7\" successfully"
May 14 00:01:24.522518 containerd[1514]: time="2025-05-14T00:01:24.522511309Z" level=info msg="StopPodSandbox for \"56eda22832f68724cb3dbe0858a58786d27cf26c5f9e8dafe8d887c586839ed7\" returns successfully"
May 14 00:01:24.522738 containerd[1514]: time="2025-05-14T00:01:24.495199201Z" level=info msg="StopPodSandbox for \"e2092947f49d25df9acfe4e475034b82c94ff3818625b7411df3f93621b96e4f\""
May 14 00:01:24.522738 containerd[1514]: time="2025-05-14T00:01:24.522608755Z" level=info msg="TearDown network for sandbox \"3aa8ff18c3cdd64b9e0733f41c1104c6cff817e7c3439ca1faa6927859f06a4e\" successfully"
May 14 00:01:24.522738 containerd[1514]: time="2025-05-14T00:01:24.522627122Z" level=info msg="StopPodSandbox for \"3aa8ff18c3cdd64b9e0733f41c1104c6cff817e7c3439ca1faa6927859f06a4e\" returns successfully"
May 14 00:01:24.523440 kubelet[2707]: E0514 00:01:24.523028 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:01:24.523566 containerd[1514]: time="2025-05-14T00:01:24.523319131Z" level=info msg="TearDown network for sandbox \"55ae23add9ba91189a4e1a09b1d3bbff71ced02668bd6504d560d8b5ba83ee60\" successfully"
May 14 00:01:24.523566 containerd[1514]: time="2025-05-14T00:01:24.523341716Z" level=info msg="StopPodSandbox for \"55ae23add9ba91189a4e1a09b1d3bbff71ced02668bd6504d560d8b5ba83ee60\" returns successfully"
May 14 00:01:24.523723 containerd[1514]: time="2025-05-14T00:01:24.523557119Z" level=info msg="Ensure that sandbox e2092947f49d25df9acfe4e475034b82c94ff3818625b7411df3f93621b96e4f in task-service has been cleanup successfully"
May 14 00:01:24.526395 containerd[1514]: time="2025-05-14T00:01:24.525340541Z" level=info msg="TearDown network for sandbox \"e2092947f49d25df9acfe4e475034b82c94ff3818625b7411df3f93621b96e4f\" successfully"
May 14 00:01:24.526395 containerd[1514]: time="2025-05-14T00:01:24.525372304Z" level=info msg="StopPodSandbox for \"e2092947f49d25df9acfe4e475034b82c94ff3818625b7411df3f93621b96e4f\" returns successfully"
May 14 00:01:24.526736 containerd[1514]: time="2025-05-14T00:01:24.526646223Z" level=info msg="StopPodSandbox for \"bc2a454568bf86469e931f59c54e7abd0fa1f150d50a56a12771400b2ec051dc\""
May 14 00:01:24.526999 containerd[1514]: time="2025-05-14T00:01:24.526831635Z" level=info msg="TearDown network for sandbox \"bc2a454568bf86469e931f59c54e7abd0fa1f150d50a56a12771400b2ec051dc\" successfully"
May 14 00:01:24.526999 containerd[1514]: time="2025-05-14T00:01:24.526881456Z" level=info msg="StopPodSandbox for \"bc2a454568bf86469e931f59c54e7abd0fa1f150d50a56a12771400b2ec051dc\" returns successfully"
May 14 00:01:24.526999 containerd[1514]: time="2025-05-14T00:01:24.526847397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-l7hft,Uid:35de77c4-421f-48ed-ab25-a4fe1879067a,Namespace:kube-system,Attempt:5,}"
May 14 00:01:24.527448 containerd[1514]: time="2025-05-14T00:01:24.527414255Z" level=info msg="StopPodSandbox for \"5f56112abc9fc3ca7322a0e768cadf4b3aef097ac529d407d43a9d9667691957\""
May 14 00:01:24.528266 containerd[1514]: time="2025-05-14T00:01:24.527548395Z" level=info msg="StopPodSandbox for \"a59363c1d3c7d2a1291c2094e94fec02764ae85f2b6b9ee946b3f91ebda7b84d\""
May 14 00:01:24.528266 containerd[1514]: time="2025-05-14T00:01:24.527662704Z" level=info msg="TearDown network for sandbox \"a59363c1d3c7d2a1291c2094e94fec02764ae85f2b6b9ee946b3f91ebda7b84d\" successfully"
May 14 00:01:24.528266 containerd[1514]: time="2025-05-14T00:01:24.527965723Z" level=info msg="StopPodSandbox for \"a59363c1d3c7d2a1291c2094e94fec02764ae85f2b6b9ee946b3f91ebda7b84d\" returns successfully"
May 14 00:01:24.528266 containerd[1514]: time="2025-05-14T00:01:24.527741091Z" level=info msg="StopPodSandbox for \"537e48c11bd0c7df0af2d565c33274a5d1005847da8bc6c2a9ca15755d31570d\""
May 14 00:01:24.528266 containerd[1514]: time="2025-05-14T00:01:24.528159592Z" level=info msg="TearDown network for sandbox \"537e48c11bd0c7df0af2d565c33274a5d1005847da8bc6c2a9ca15755d31570d\" successfully"
May 14 00:01:24.528266 containerd[1514]: time="2025-05-14T00:01:24.528178450Z" level=info msg="StopPodSandbox for \"537e48c11bd0c7df0af2d565c33274a5d1005847da8bc6c2a9ca15755d31570d\" returns successfully"
May 14 00:01:24.528266 containerd[1514]: time="2025-05-14T00:01:24.527932776Z" level=info msg="StopPodSandbox for \"7ed749f47db0245d2e0a4f7af5389ddc9dda8bc8f2c01a832d32e4e23b2740d1\""
May 14 00:01:24.528493 containerd[1514]: time="2025-05-14T00:01:24.527559146Z" level=info msg="TearDown network for sandbox \"5f56112abc9fc3ca7322a0e768cadf4b3aef097ac529d407d43a9d9667691957\" successfully"
May 14 00:01:24.528493 containerd[1514]: time="2025-05-14T00:01:24.528353421Z" level=info msg="StopPodSandbox for \"5f56112abc9fc3ca7322a0e768cadf4b3aef097ac529d407d43a9d9667691957\" returns successfully"
May 14 00:01:24.528493 containerd[1514]: time="2025-05-14T00:01:24.528485456Z" level=info msg="TearDown network for sandbox \"7ed749f47db0245d2e0a4f7af5389ddc9dda8bc8f2c01a832d32e4e23b2740d1\" successfully"
May 14 00:01:24.528585 containerd[1514]:
time="2025-05-14T00:01:24.528502581Z" level=info msg="StopPodSandbox for \"7ed749f47db0245d2e0a4f7af5389ddc9dda8bc8f2c01a832d32e4e23b2740d1\" returns successfully" May 14 00:01:24.531809 containerd[1514]: time="2025-05-14T00:01:24.531750212Z" level=info msg="StopPodSandbox for \"3c87715a71350bd4a01f089067c08707893b6abc6c90a878d1ced434f2e7f0e1\"" May 14 00:01:24.531991 containerd[1514]: time="2025-05-14T00:01:24.531948049Z" level=info msg="TearDown network for sandbox \"3c87715a71350bd4a01f089067c08707893b6abc6c90a878d1ced434f2e7f0e1\" successfully" May 14 00:01:24.531991 containerd[1514]: time="2025-05-14T00:01:24.531980014Z" level=info msg="StopPodSandbox for \"3c87715a71350bd4a01f089067c08707893b6abc6c90a878d1ced434f2e7f0e1\" returns successfully" May 14 00:01:24.532097 containerd[1514]: time="2025-05-14T00:01:24.532070847Z" level=info msg="StopPodSandbox for \"5ff8c9fd97c6518fdd08652e3ba460c6027a4eac04de742ce360c823f86d6834\"" May 14 00:01:24.532181 containerd[1514]: time="2025-05-14T00:01:24.532158172Z" level=info msg="TearDown network for sandbox \"5ff8c9fd97c6518fdd08652e3ba460c6027a4eac04de742ce360c823f86d6834\" successfully" May 14 00:01:24.532181 containerd[1514]: time="2025-05-14T00:01:24.532175446Z" level=info msg="StopPodSandbox for \"5ff8c9fd97c6518fdd08652e3ba460c6027a4eac04de742ce360c823f86d6834\" returns successfully" May 14 00:01:24.532294 containerd[1514]: time="2025-05-14T00:01:24.532219715Z" level=info msg="StopPodSandbox for \"0f935d78b3bce36fd01b8944613455235aa13fedad1474a1d590f477f4001369\"" May 14 00:01:24.532338 containerd[1514]: time="2025-05-14T00:01:24.532320749Z" level=info msg="TearDown network for sandbox \"0f935d78b3bce36fd01b8944613455235aa13fedad1474a1d590f477f4001369\" successfully" May 14 00:01:24.532338 containerd[1514]: time="2025-05-14T00:01:24.532333904Z" level=info msg="StopPodSandbox for \"0f935d78b3bce36fd01b8944613455235aa13fedad1474a1d590f477f4001369\" returns successfully" May 14 00:01:24.532454 containerd[1514]: 
time="2025-05-14T00:01:24.532425127Z" level=info msg="StopPodSandbox for \"b4380fe2399a0ffdd3b39405d8f44d44bd1b9a4ee632f479c3958de4626fb705\"" May 14 00:01:24.532565 containerd[1514]: time="2025-05-14T00:01:24.532536301Z" level=info msg="TearDown network for sandbox \"b4380fe2399a0ffdd3b39405d8f44d44bd1b9a4ee632f479c3958de4626fb705\" successfully" May 14 00:01:24.532565 containerd[1514]: time="2025-05-14T00:01:24.532558265Z" level=info msg="StopPodSandbox for \"b4380fe2399a0ffdd3b39405d8f44d44bd1b9a4ee632f479c3958de4626fb705\" returns successfully" May 14 00:01:24.535874 containerd[1514]: time="2025-05-14T00:01:24.535641788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s8nw4,Uid:fff7752d-bd43-4c8e-a187-4d071bd5cd0b,Namespace:calico-system,Attempt:4,}" May 14 00:01:24.535874 containerd[1514]: time="2025-05-14T00:01:24.535786909Z" level=info msg="StopPodSandbox for \"9d75d1c4e109dd828996b475a7d4dc5d2cc445035adf6750cc05772548793cfd\"" May 14 00:01:24.537623 containerd[1514]: time="2025-05-14T00:01:24.537292162Z" level=info msg="TearDown network for sandbox \"9d75d1c4e109dd828996b475a7d4dc5d2cc445035adf6750cc05772548793cfd\" successfully" May 14 00:01:24.537623 containerd[1514]: time="2025-05-14T00:01:24.537605831Z" level=info msg="StopPodSandbox for \"9d75d1c4e109dd828996b475a7d4dc5d2cc445035adf6750cc05772548793cfd\" returns successfully" May 14 00:01:24.537798 containerd[1514]: time="2025-05-14T00:01:24.537479478Z" level=info msg="StopPodSandbox for \"fded8c2ab41ef0ba621ee0a8f083bbb59adf971e6c7a0566ae097d148124b88d\"" May 14 00:01:24.537980 containerd[1514]: time="2025-05-14T00:01:24.537848348Z" level=info msg="TearDown network for sandbox \"fded8c2ab41ef0ba621ee0a8f083bbb59adf971e6c7a0566ae097d148124b88d\" successfully" May 14 00:01:24.537980 containerd[1514]: time="2025-05-14T00:01:24.537874241Z" level=info msg="StopPodSandbox for \"fded8c2ab41ef0ba621ee0a8f083bbb59adf971e6c7a0566ae097d148124b88d\" returns successfully" May 14 
00:01:24.537980 containerd[1514]: time="2025-05-14T00:01:24.537514318Z" level=info msg="StopPodSandbox for \"5153e471c8ee61334cdd625b198c80fe354b8f46b0ecd03d8a954ed630e292dd\"" May 14 00:01:24.538083 containerd[1514]: time="2025-05-14T00:01:24.538034613Z" level=info msg="TearDown network for sandbox \"5153e471c8ee61334cdd625b198c80fe354b8f46b0ecd03d8a954ed630e292dd\" successfully" May 14 00:01:24.538083 containerd[1514]: time="2025-05-14T00:01:24.538048431Z" level=info msg="StopPodSandbox for \"5153e471c8ee61334cdd625b198c80fe354b8f46b0ecd03d8a954ed630e292dd\" returns successfully" May 14 00:01:24.541170 containerd[1514]: time="2025-05-14T00:01:24.540667120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54f4f89fbf-94z7j,Uid:2df24035-39a4-4f27-b78b-89f954595966,Namespace:calico-system,Attempt:4,}" May 14 00:01:24.541355 containerd[1514]: time="2025-05-14T00:01:24.540900287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-897df95dc-b4gq4,Uid:1e7c405b-f2c2-4bf1-a1f9-17ef323d3d37,Namespace:calico-apiserver,Attempt:4,}" May 14 00:01:24.541828 containerd[1514]: time="2025-05-14T00:01:24.540974406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-897df95dc-k8qvz,Uid:2e76bb18-a688-4a74-8c25-969dbaf341ad,Namespace:calico-apiserver,Attempt:4,}" May 14 00:01:25.167168 systemd[1]: run-netns-cni\x2d98014cd2\x2de9b0\x2daccc\x2d12d5\x2dbb648677d019.mount: Deactivated successfully. May 14 00:01:25.167291 systemd[1]: run-netns-cni\x2da805b029\x2dd283\x2db51e\x2d9d87\x2d1f2a452fab18.mount: Deactivated successfully. 
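The paired `StopPodSandbox`/`TearDown network` calls followed by `RunPodSandbox ... Attempt:4` (or `Attempt:5`) above reflect the kubelet/CRI retry pattern: each stale sandbox from a failed attempt is stopped and its network torn down before a fresh sandbox attempt is issued with an incremented attempt counter. A simplified model of that sequencing, with hypothetical names (the real flow lives in kubelet's pod workers and the CRI runtime, not in this sketch):

```python
# Simplified model of the stop-then-retry pattern visible above:
# every stale sandbox is stopped and torn down, then one new
# RunPodSandbox attempt is issued with an incremented Attempt count.
# Illustrative only; not the actual kubelet/CRI client code.
def retry_pod_sandbox(stale_sandbox_ids, attempt):
    events = []
    for sid in stale_sandbox_ids:
        events.append(f"StopPodSandbox {sid}")
        events.append(f"TearDown network {sid}")
    events.append(f"RunPodSandbox attempt={attempt + 1}")
    return events

# Two stale sandboxes from attempt 4 produce the next attempt, 5,
# just as the coredns pods in the log reach Attempt:5.
events = retry_pod_sandbox(["sandbox-a", "sandbox-b"], attempt=4)
```

Each retained sandbox must be torn down first so CNI resources (netns mounts like the `run-netns-cni\x2d...` units deactivated above) are released before the next attempt.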
May 14 00:01:25.499014 kubelet[2707]: E0514 00:01:25.497495 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:01:26.794244 systemd-networkd[1443]: calif221ba846b1: Link UP May 14 00:01:26.794479 systemd-networkd[1443]: calif221ba846b1: Gained carrier May 14 00:01:26.823278 kubelet[2707]: I0514 00:01:26.817884 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-9mg69" podStartSLOduration=4.432208886 podStartE2EDuration="34.817858729s" podCreationTimestamp="2025-05-14 00:00:52 +0000 UTC" firstStartedPulling="2025-05-14 00:00:53.344372918 +0000 UTC m=+25.288603823" lastFinishedPulling="2025-05-14 00:01:23.73002276 +0000 UTC m=+55.674253666" observedRunningTime="2025-05-14 00:01:24.672739297 +0000 UTC m=+56.616970212" watchObservedRunningTime="2025-05-14 00:01:26.817858729 +0000 UTC m=+58.762089634" May 14 00:01:26.824609 containerd[1514]: 2025-05-14 00:01:26.216 [INFO][4741] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 14 00:01:26.824609 containerd[1514]: 2025-05-14 00:01:26.300 [INFO][4741] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--s9vl9-eth0 coredns-7db6d8ff4d- kube-system 55c8484c-9be6-4d0c-a1af-577996ceaea2 784 0 2025-05-14 00:00:45 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-s9vl9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif221ba846b1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="0a753989b2d7a474fa4a26aa5920aca51882bda39a7ccf603ce650310a3bd9de" Namespace="kube-system" Pod="coredns-7db6d8ff4d-s9vl9" 
WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--s9vl9-" May 14 00:01:26.824609 containerd[1514]: 2025-05-14 00:01:26.301 [INFO][4741] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0a753989b2d7a474fa4a26aa5920aca51882bda39a7ccf603ce650310a3bd9de" Namespace="kube-system" Pod="coredns-7db6d8ff4d-s9vl9" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--s9vl9-eth0" May 14 00:01:26.824609 containerd[1514]: 2025-05-14 00:01:26.724 [INFO][4777] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0a753989b2d7a474fa4a26aa5920aca51882bda39a7ccf603ce650310a3bd9de" HandleID="k8s-pod-network.0a753989b2d7a474fa4a26aa5920aca51882bda39a7ccf603ce650310a3bd9de" Workload="localhost-k8s-coredns--7db6d8ff4d--s9vl9-eth0" May 14 00:01:26.824609 containerd[1514]: 2025-05-14 00:01:26.743 [INFO][4777] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0a753989b2d7a474fa4a26aa5920aca51882bda39a7ccf603ce650310a3bd9de" HandleID="k8s-pod-network.0a753989b2d7a474fa4a26aa5920aca51882bda39a7ccf603ce650310a3bd9de" Workload="localhost-k8s-coredns--7db6d8ff4d--s9vl9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032f6c0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-s9vl9", "timestamp":"2025-05-14 00:01:26.724286872 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 00:01:26.824609 containerd[1514]: 2025-05-14 00:01:26.743 [INFO][4777] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 00:01:26.824609 containerd[1514]: 2025-05-14 00:01:26.747 [INFO][4777] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 00:01:26.824609 containerd[1514]: 2025-05-14 00:01:26.747 [INFO][4777] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 00:01:26.824609 containerd[1514]: 2025-05-14 00:01:26.749 [INFO][4777] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0a753989b2d7a474fa4a26aa5920aca51882bda39a7ccf603ce650310a3bd9de" host="localhost" May 14 00:01:26.824609 containerd[1514]: 2025-05-14 00:01:26.756 [INFO][4777] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 00:01:26.824609 containerd[1514]: 2025-05-14 00:01:26.761 [INFO][4777] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 00:01:26.824609 containerd[1514]: 2025-05-14 00:01:26.763 [INFO][4777] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 00:01:26.824609 containerd[1514]: 2025-05-14 00:01:26.765 [INFO][4777] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 00:01:26.824609 containerd[1514]: 2025-05-14 00:01:26.765 [INFO][4777] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0a753989b2d7a474fa4a26aa5920aca51882bda39a7ccf603ce650310a3bd9de" host="localhost" May 14 00:01:26.824609 containerd[1514]: 2025-05-14 00:01:26.768 [INFO][4777] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0a753989b2d7a474fa4a26aa5920aca51882bda39a7ccf603ce650310a3bd9de May 14 00:01:26.824609 containerd[1514]: 2025-05-14 00:01:26.773 [INFO][4777] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0a753989b2d7a474fa4a26aa5920aca51882bda39a7ccf603ce650310a3bd9de" host="localhost" May 14 00:01:26.824609 containerd[1514]: 2025-05-14 00:01:26.778 [INFO][4777] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.0a753989b2d7a474fa4a26aa5920aca51882bda39a7ccf603ce650310a3bd9de" host="localhost" May 14 00:01:26.824609 containerd[1514]: 2025-05-14 00:01:26.778 [INFO][4777] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.0a753989b2d7a474fa4a26aa5920aca51882bda39a7ccf603ce650310a3bd9de" host="localhost" May 14 00:01:26.824609 containerd[1514]: 2025-05-14 00:01:26.778 [INFO][4777] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 00:01:26.824609 containerd[1514]: 2025-05-14 00:01:26.778 [INFO][4777] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="0a753989b2d7a474fa4a26aa5920aca51882bda39a7ccf603ce650310a3bd9de" HandleID="k8s-pod-network.0a753989b2d7a474fa4a26aa5920aca51882bda39a7ccf603ce650310a3bd9de" Workload="localhost-k8s-coredns--7db6d8ff4d--s9vl9-eth0" May 14 00:01:26.825645 containerd[1514]: 2025-05-14 00:01:26.781 [INFO][4741] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0a753989b2d7a474fa4a26aa5920aca51882bda39a7ccf603ce650310a3bd9de" Namespace="kube-system" Pod="coredns-7db6d8ff4d-s9vl9" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--s9vl9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--s9vl9-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"55c8484c-9be6-4d0c-a1af-577996ceaea2", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 0, 0, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-s9vl9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif221ba846b1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:01:26.825645 containerd[1514]: 2025-05-14 00:01:26.781 [INFO][4741] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="0a753989b2d7a474fa4a26aa5920aca51882bda39a7ccf603ce650310a3bd9de" Namespace="kube-system" Pod="coredns-7db6d8ff4d-s9vl9" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--s9vl9-eth0" May 14 00:01:26.825645 containerd[1514]: 2025-05-14 00:01:26.781 [INFO][4741] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif221ba846b1 ContainerID="0a753989b2d7a474fa4a26aa5920aca51882bda39a7ccf603ce650310a3bd9de" Namespace="kube-system" Pod="coredns-7db6d8ff4d-s9vl9" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--s9vl9-eth0" May 14 00:01:26.825645 containerd[1514]: 2025-05-14 00:01:26.795 [INFO][4741] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0a753989b2d7a474fa4a26aa5920aca51882bda39a7ccf603ce650310a3bd9de" Namespace="kube-system" Pod="coredns-7db6d8ff4d-s9vl9" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--s9vl9-eth0" May 14 
00:01:26.825645 containerd[1514]: 2025-05-14 00:01:26.795 [INFO][4741] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0a753989b2d7a474fa4a26aa5920aca51882bda39a7ccf603ce650310a3bd9de" Namespace="kube-system" Pod="coredns-7db6d8ff4d-s9vl9" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--s9vl9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--s9vl9-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"55c8484c-9be6-4d0c-a1af-577996ceaea2", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 0, 0, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0a753989b2d7a474fa4a26aa5920aca51882bda39a7ccf603ce650310a3bd9de", Pod:"coredns-7db6d8ff4d-s9vl9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif221ba846b1", MAC:"4e:a1:74:84:55:ca", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:01:26.825645 containerd[1514]: 2025-05-14 00:01:26.817 [INFO][4741] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0a753989b2d7a474fa4a26aa5920aca51882bda39a7ccf603ce650310a3bd9de" Namespace="kube-system" Pod="coredns-7db6d8ff4d-s9vl9" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--s9vl9-eth0" May 14 00:01:26.849180 systemd-networkd[1443]: cali05ad13f76e8: Link UP May 14 00:01:26.849410 systemd-networkd[1443]: cali05ad13f76e8: Gained carrier May 14 00:01:26.869484 containerd[1514]: 2025-05-14 00:01:25.963 [INFO][4726] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 14 00:01:26.869484 containerd[1514]: 2025-05-14 00:01:26.202 [INFO][4726] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--l7hft-eth0 coredns-7db6d8ff4d- kube-system 35de77c4-421f-48ed-ab25-a4fe1879067a 787 0 2025-05-14 00:00:45 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-l7hft eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali05ad13f76e8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="b75f0f38530178bfd24ded229663dad0a46a384effc3123c16b3ebc34c2a1694" Namespace="kube-system" Pod="coredns-7db6d8ff4d-l7hft" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--l7hft-" May 14 00:01:26.869484 containerd[1514]: 2025-05-14 00:01:26.224 [INFO][4726] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b75f0f38530178bfd24ded229663dad0a46a384effc3123c16b3ebc34c2a1694" Namespace="kube-system" Pod="coredns-7db6d8ff4d-l7hft" 
WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--l7hft-eth0" May 14 00:01:26.869484 containerd[1514]: 2025-05-14 00:01:26.724 [INFO][4784] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b75f0f38530178bfd24ded229663dad0a46a384effc3123c16b3ebc34c2a1694" HandleID="k8s-pod-network.b75f0f38530178bfd24ded229663dad0a46a384effc3123c16b3ebc34c2a1694" Workload="localhost-k8s-coredns--7db6d8ff4d--l7hft-eth0" May 14 00:01:26.869484 containerd[1514]: 2025-05-14 00:01:26.747 [INFO][4784] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b75f0f38530178bfd24ded229663dad0a46a384effc3123c16b3ebc34c2a1694" HandleID="k8s-pod-network.b75f0f38530178bfd24ded229663dad0a46a384effc3123c16b3ebc34c2a1694" Workload="localhost-k8s-coredns--7db6d8ff4d--l7hft-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00038a8c0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-l7hft", "timestamp":"2025-05-14 00:01:26.724756805 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 00:01:26.869484 containerd[1514]: 2025-05-14 00:01:26.747 [INFO][4784] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 00:01:26.869484 containerd[1514]: 2025-05-14 00:01:26.778 [INFO][4784] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 00:01:26.869484 containerd[1514]: 2025-05-14 00:01:26.778 [INFO][4784] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 00:01:26.869484 containerd[1514]: 2025-05-14 00:01:26.781 [INFO][4784] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b75f0f38530178bfd24ded229663dad0a46a384effc3123c16b3ebc34c2a1694" host="localhost" May 14 00:01:26.869484 containerd[1514]: 2025-05-14 00:01:26.786 [INFO][4784] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 00:01:26.869484 containerd[1514]: 2025-05-14 00:01:26.793 [INFO][4784] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 00:01:26.869484 containerd[1514]: 2025-05-14 00:01:26.798 [INFO][4784] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 00:01:26.869484 containerd[1514]: 2025-05-14 00:01:26.802 [INFO][4784] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 00:01:26.869484 containerd[1514]: 2025-05-14 00:01:26.802 [INFO][4784] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b75f0f38530178bfd24ded229663dad0a46a384effc3123c16b3ebc34c2a1694" host="localhost" May 14 00:01:26.869484 containerd[1514]: 2025-05-14 00:01:26.805 [INFO][4784] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b75f0f38530178bfd24ded229663dad0a46a384effc3123c16b3ebc34c2a1694 May 14 00:01:26.869484 containerd[1514]: 2025-05-14 00:01:26.823 [INFO][4784] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b75f0f38530178bfd24ded229663dad0a46a384effc3123c16b3ebc34c2a1694" host="localhost" May 14 00:01:26.869484 containerd[1514]: 2025-05-14 00:01:26.836 [INFO][4784] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.b75f0f38530178bfd24ded229663dad0a46a384effc3123c16b3ebc34c2a1694" host="localhost" May 14 00:01:26.869484 containerd[1514]: 2025-05-14 00:01:26.836 [INFO][4784] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.b75f0f38530178bfd24ded229663dad0a46a384effc3123c16b3ebc34c2a1694" host="localhost" May 14 00:01:26.869484 containerd[1514]: 2025-05-14 00:01:26.836 [INFO][4784] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 00:01:26.869484 containerd[1514]: 2025-05-14 00:01:26.836 [INFO][4784] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="b75f0f38530178bfd24ded229663dad0a46a384effc3123c16b3ebc34c2a1694" HandleID="k8s-pod-network.b75f0f38530178bfd24ded229663dad0a46a384effc3123c16b3ebc34c2a1694" Workload="localhost-k8s-coredns--7db6d8ff4d--l7hft-eth0" May 14 00:01:26.870505 containerd[1514]: 2025-05-14 00:01:26.840 [INFO][4726] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b75f0f38530178bfd24ded229663dad0a46a384effc3123c16b3ebc34c2a1694" Namespace="kube-system" Pod="coredns-7db6d8ff4d-l7hft" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--l7hft-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--l7hft-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"35de77c4-421f-48ed-ab25-a4fe1879067a", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 0, 0, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-l7hft", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali05ad13f76e8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:01:26.870505 containerd[1514]: 2025-05-14 00:01:26.840 [INFO][4726] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="b75f0f38530178bfd24ded229663dad0a46a384effc3123c16b3ebc34c2a1694" Namespace="kube-system" Pod="coredns-7db6d8ff4d-l7hft" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--l7hft-eth0" May 14 00:01:26.870505 containerd[1514]: 2025-05-14 00:01:26.840 [INFO][4726] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali05ad13f76e8 ContainerID="b75f0f38530178bfd24ded229663dad0a46a384effc3123c16b3ebc34c2a1694" Namespace="kube-system" Pod="coredns-7db6d8ff4d-l7hft" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--l7hft-eth0" May 14 00:01:26.870505 containerd[1514]: 2025-05-14 00:01:26.844 [INFO][4726] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b75f0f38530178bfd24ded229663dad0a46a384effc3123c16b3ebc34c2a1694" Namespace="kube-system" Pod="coredns-7db6d8ff4d-l7hft" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--l7hft-eth0" May 14 
00:01:26.870505 containerd[1514]: 2025-05-14 00:01:26.844 [INFO][4726] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b75f0f38530178bfd24ded229663dad0a46a384effc3123c16b3ebc34c2a1694" Namespace="kube-system" Pod="coredns-7db6d8ff4d-l7hft" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--l7hft-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--l7hft-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"35de77c4-421f-48ed-ab25-a4fe1879067a", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 0, 0, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b75f0f38530178bfd24ded229663dad0a46a384effc3123c16b3ebc34c2a1694", Pod:"coredns-7db6d8ff4d-l7hft", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali05ad13f76e8", MAC:"ae:fe:bb:82:57:08", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:01:26.870505 containerd[1514]: 2025-05-14 00:01:26.863 [INFO][4726] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b75f0f38530178bfd24ded229663dad0a46a384effc3123c16b3ebc34c2a1694" Namespace="kube-system" Pod="coredns-7db6d8ff4d-l7hft" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--l7hft-eth0" May 14 00:01:26.962629 containerd[1514]: time="2025-05-14T00:01:26.962255039Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 00:01:26.962629 containerd[1514]: time="2025-05-14T00:01:26.962346372Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 00:01:26.962629 containerd[1514]: time="2025-05-14T00:01:26.962359308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:01:26.962629 containerd[1514]: time="2025-05-14T00:01:26.962473316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:01:26.971435 systemd-networkd[1443]: calidf841addfdf: Link UP May 14 00:01:26.971660 systemd-networkd[1443]: calidf841addfdf: Gained carrier May 14 00:01:26.992986 systemd[1]: Started cri-containerd-0a753989b2d7a474fa4a26aa5920aca51882bda39a7ccf603ce650310a3bd9de.scope - libcontainer container 0a753989b2d7a474fa4a26aa5920aca51882bda39a7ccf603ce650310a3bd9de. May 14 00:01:27.006055 containerd[1514]: time="2025-05-14T00:01:27.005893464Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 00:01:27.006055 containerd[1514]: time="2025-05-14T00:01:27.005971209Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 00:01:27.006055 containerd[1514]: time="2025-05-14T00:01:27.005984826Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:01:27.006788 containerd[1514]: time="2025-05-14T00:01:27.006075999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:01:27.008865 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 00:01:27.028901 systemd[1]: Started cri-containerd-b75f0f38530178bfd24ded229663dad0a46a384effc3123c16b3ebc34c2a1694.scope - libcontainer container b75f0f38530178bfd24ded229663dad0a46a384effc3123c16b3ebc34c2a1694. 
May 14 00:01:27.040642 containerd[1514]: time="2025-05-14T00:01:27.040589569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-s9vl9,Uid:55c8484c-9be6-4d0c-a1af-577996ceaea2,Namespace:kube-system,Attempt:5,} returns sandbox id \"0a753989b2d7a474fa4a26aa5920aca51882bda39a7ccf603ce650310a3bd9de\"" May 14 00:01:27.042893 kubelet[2707]: E0514 00:01:27.041576 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:01:27.047046 containerd[1514]: time="2025-05-14T00:01:27.045995346Z" level=info msg="CreateContainer within sandbox \"0a753989b2d7a474fa4a26aa5920aca51882bda39a7ccf603ce650310a3bd9de\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 00:01:27.047174 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 00:01:27.078622 containerd[1514]: 2025-05-14 00:01:26.451 [INFO][4806] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 14 00:01:27.078622 containerd[1514]: 2025-05-14 00:01:26.567 [INFO][4806] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--897df95dc--k8qvz-eth0 calico-apiserver-897df95dc- calico-apiserver 2e76bb18-a688-4a74-8c25-969dbaf341ad 788 0 2025-05-14 00:00:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:897df95dc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-897df95dc-k8qvz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidf841addfdf [] []}} ContainerID="6fe181b207c502b6f36855a2d01f5325c58802c750ecee141d93c3a9b56350c1" Namespace="calico-apiserver" 
Pod="calico-apiserver-897df95dc-k8qvz" WorkloadEndpoint="localhost-k8s-calico--apiserver--897df95dc--k8qvz-" May 14 00:01:27.078622 containerd[1514]: 2025-05-14 00:01:26.567 [INFO][4806] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6fe181b207c502b6f36855a2d01f5325c58802c750ecee141d93c3a9b56350c1" Namespace="calico-apiserver" Pod="calico-apiserver-897df95dc-k8qvz" WorkloadEndpoint="localhost-k8s-calico--apiserver--897df95dc--k8qvz-eth0" May 14 00:01:27.078622 containerd[1514]: 2025-05-14 00:01:26.724 [INFO][4835] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6fe181b207c502b6f36855a2d01f5325c58802c750ecee141d93c3a9b56350c1" HandleID="k8s-pod-network.6fe181b207c502b6f36855a2d01f5325c58802c750ecee141d93c3a9b56350c1" Workload="localhost-k8s-calico--apiserver--897df95dc--k8qvz-eth0" May 14 00:01:27.078622 containerd[1514]: 2025-05-14 00:01:26.747 [INFO][4835] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6fe181b207c502b6f36855a2d01f5325c58802c750ecee141d93c3a9b56350c1" HandleID="k8s-pod-network.6fe181b207c502b6f36855a2d01f5325c58802c750ecee141d93c3a9b56350c1" Workload="localhost-k8s-calico--apiserver--897df95dc--k8qvz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000270a60), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-897df95dc-k8qvz", "timestamp":"2025-05-14 00:01:26.724050398 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 00:01:27.078622 containerd[1514]: 2025-05-14 00:01:26.747 [INFO][4835] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 00:01:27.078622 containerd[1514]: 2025-05-14 00:01:26.836 [INFO][4835] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 00:01:27.078622 containerd[1514]: 2025-05-14 00:01:26.838 [INFO][4835] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 00:01:27.078622 containerd[1514]: 2025-05-14 00:01:26.843 [INFO][4835] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6fe181b207c502b6f36855a2d01f5325c58802c750ecee141d93c3a9b56350c1" host="localhost" May 14 00:01:27.078622 containerd[1514]: 2025-05-14 00:01:26.864 [INFO][4835] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 00:01:27.078622 containerd[1514]: 2025-05-14 00:01:26.886 [INFO][4835] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 00:01:27.078622 containerd[1514]: 2025-05-14 00:01:26.889 [INFO][4835] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 00:01:27.078622 containerd[1514]: 2025-05-14 00:01:26.894 [INFO][4835] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 00:01:27.078622 containerd[1514]: 2025-05-14 00:01:26.894 [INFO][4835] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6fe181b207c502b6f36855a2d01f5325c58802c750ecee141d93c3a9b56350c1" host="localhost" May 14 00:01:27.078622 containerd[1514]: 2025-05-14 00:01:26.897 [INFO][4835] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6fe181b207c502b6f36855a2d01f5325c58802c750ecee141d93c3a9b56350c1 May 14 00:01:27.078622 containerd[1514]: 2025-05-14 00:01:26.919 [INFO][4835] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6fe181b207c502b6f36855a2d01f5325c58802c750ecee141d93c3a9b56350c1" host="localhost" May 14 00:01:27.078622 containerd[1514]: 2025-05-14 00:01:26.959 [INFO][4835] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.6fe181b207c502b6f36855a2d01f5325c58802c750ecee141d93c3a9b56350c1" host="localhost" May 14 00:01:27.078622 containerd[1514]: 2025-05-14 00:01:26.959 [INFO][4835] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.6fe181b207c502b6f36855a2d01f5325c58802c750ecee141d93c3a9b56350c1" host="localhost" May 14 00:01:27.078622 containerd[1514]: 2025-05-14 00:01:26.959 [INFO][4835] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 00:01:27.078622 containerd[1514]: 2025-05-14 00:01:26.959 [INFO][4835] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="6fe181b207c502b6f36855a2d01f5325c58802c750ecee141d93c3a9b56350c1" HandleID="k8s-pod-network.6fe181b207c502b6f36855a2d01f5325c58802c750ecee141d93c3a9b56350c1" Workload="localhost-k8s-calico--apiserver--897df95dc--k8qvz-eth0" May 14 00:01:27.079381 containerd[1514]: 2025-05-14 00:01:26.963 [INFO][4806] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6fe181b207c502b6f36855a2d01f5325c58802c750ecee141d93c3a9b56350c1" Namespace="calico-apiserver" Pod="calico-apiserver-897df95dc-k8qvz" WorkloadEndpoint="localhost-k8s-calico--apiserver--897df95dc--k8qvz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--897df95dc--k8qvz-eth0", GenerateName:"calico-apiserver-897df95dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"2e76bb18-a688-4a74-8c25-969dbaf341ad", ResourceVersion:"788", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 0, 0, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"897df95dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-897df95dc-k8qvz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidf841addfdf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:01:27.079381 containerd[1514]: 2025-05-14 00:01:26.963 [INFO][4806] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="6fe181b207c502b6f36855a2d01f5325c58802c750ecee141d93c3a9b56350c1" Namespace="calico-apiserver" Pod="calico-apiserver-897df95dc-k8qvz" WorkloadEndpoint="localhost-k8s-calico--apiserver--897df95dc--k8qvz-eth0" May 14 00:01:27.079381 containerd[1514]: 2025-05-14 00:01:26.963 [INFO][4806] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidf841addfdf ContainerID="6fe181b207c502b6f36855a2d01f5325c58802c750ecee141d93c3a9b56350c1" Namespace="calico-apiserver" Pod="calico-apiserver-897df95dc-k8qvz" WorkloadEndpoint="localhost-k8s-calico--apiserver--897df95dc--k8qvz-eth0" May 14 00:01:27.079381 containerd[1514]: 2025-05-14 00:01:26.971 [INFO][4806] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6fe181b207c502b6f36855a2d01f5325c58802c750ecee141d93c3a9b56350c1" Namespace="calico-apiserver" Pod="calico-apiserver-897df95dc-k8qvz" WorkloadEndpoint="localhost-k8s-calico--apiserver--897df95dc--k8qvz-eth0" May 14 00:01:27.079381 containerd[1514]: 2025-05-14 00:01:26.972 [INFO][4806] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="6fe181b207c502b6f36855a2d01f5325c58802c750ecee141d93c3a9b56350c1" Namespace="calico-apiserver" Pod="calico-apiserver-897df95dc-k8qvz" WorkloadEndpoint="localhost-k8s-calico--apiserver--897df95dc--k8qvz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--897df95dc--k8qvz-eth0", GenerateName:"calico-apiserver-897df95dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"2e76bb18-a688-4a74-8c25-969dbaf341ad", ResourceVersion:"788", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 0, 0, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"897df95dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6fe181b207c502b6f36855a2d01f5325c58802c750ecee141d93c3a9b56350c1", Pod:"calico-apiserver-897df95dc-k8qvz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidf841addfdf", MAC:"ce:3d:40:ed:44:a9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:01:27.079381 containerd[1514]: 2025-05-14 00:01:27.073 [INFO][4806] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6fe181b207c502b6f36855a2d01f5325c58802c750ecee141d93c3a9b56350c1" Namespace="calico-apiserver" 
Pod="calico-apiserver-897df95dc-k8qvz" WorkloadEndpoint="localhost-k8s-calico--apiserver--897df95dc--k8qvz-eth0" May 14 00:01:27.080098 containerd[1514]: time="2025-05-14T00:01:27.079525664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-l7hft,Uid:35de77c4-421f-48ed-ab25-a4fe1879067a,Namespace:kube-system,Attempt:5,} returns sandbox id \"b75f0f38530178bfd24ded229663dad0a46a384effc3123c16b3ebc34c2a1694\"" May 14 00:01:27.081297 kubelet[2707]: E0514 00:01:27.081253 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:01:27.090357 containerd[1514]: time="2025-05-14T00:01:27.090309643Z" level=info msg="CreateContainer within sandbox \"b75f0f38530178bfd24ded229663dad0a46a384effc3123c16b3ebc34c2a1694\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 00:01:27.111912 containerd[1514]: time="2025-05-14T00:01:27.111846037Z" level=info msg="CreateContainer within sandbox \"b75f0f38530178bfd24ded229663dad0a46a384effc3123c16b3ebc34c2a1694\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c71a21cd7bc3da2756f19ba60adbf721b2265e6e1ab4f603acc6c06f02edbcda\"" May 14 00:01:27.113195 containerd[1514]: time="2025-05-14T00:01:27.113118087Z" level=info msg="StartContainer for \"c71a21cd7bc3da2756f19ba60adbf721b2265e6e1ab4f603acc6c06f02edbcda\"" May 14 00:01:27.121982 containerd[1514]: time="2025-05-14T00:01:27.121654790Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 00:01:27.121982 containerd[1514]: time="2025-05-14T00:01:27.121903309Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 00:01:27.121982 containerd[1514]: time="2025-05-14T00:01:27.121914512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:01:27.122580 containerd[1514]: time="2025-05-14T00:01:27.122552180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:01:27.129712 systemd-networkd[1443]: calie3b9cb92f91: Link UP May 14 00:01:27.130065 systemd-networkd[1443]: calie3b9cb92f91: Gained carrier May 14 00:01:27.148102 systemd[1]: Started cri-containerd-6fe181b207c502b6f36855a2d01f5325c58802c750ecee141d93c3a9b56350c1.scope - libcontainer container 6fe181b207c502b6f36855a2d01f5325c58802c750ecee141d93c3a9b56350c1. May 14 00:01:27.148840 containerd[1514]: 2025-05-14 00:01:26.347 [INFO][4773] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 14 00:01:27.148840 containerd[1514]: 2025-05-14 00:01:26.571 [INFO][4773] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--54f4f89fbf--94z7j-eth0 calico-kube-controllers-54f4f89fbf- calico-system 2df24035-39a4-4f27-b78b-89f954595966 790 0 2025-05-14 00:00:53 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:54f4f89fbf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-54f4f89fbf-94z7j eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie3b9cb92f91 [] []}} ContainerID="12131ebf0e29745f861892d752ae721083dd0bdbdc1354a5297f72c2c9a19c97" Namespace="calico-system" Pod="calico-kube-controllers-54f4f89fbf-94z7j" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54f4f89fbf--94z7j-" May 14 00:01:27.148840 containerd[1514]: 2025-05-14 00:01:26.572 [INFO][4773] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="12131ebf0e29745f861892d752ae721083dd0bdbdc1354a5297f72c2c9a19c97" Namespace="calico-system" Pod="calico-kube-controllers-54f4f89fbf-94z7j" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54f4f89fbf--94z7j-eth0" May 14 00:01:27.148840 containerd[1514]: 2025-05-14 00:01:26.724 [INFO][4838] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="12131ebf0e29745f861892d752ae721083dd0bdbdc1354a5297f72c2c9a19c97" HandleID="k8s-pod-network.12131ebf0e29745f861892d752ae721083dd0bdbdc1354a5297f72c2c9a19c97" Workload="localhost-k8s-calico--kube--controllers--54f4f89fbf--94z7j-eth0" May 14 00:01:27.148840 containerd[1514]: 2025-05-14 00:01:26.748 [INFO][4838] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="12131ebf0e29745f861892d752ae721083dd0bdbdc1354a5297f72c2c9a19c97" HandleID="k8s-pod-network.12131ebf0e29745f861892d752ae721083dd0bdbdc1354a5297f72c2c9a19c97" Workload="localhost-k8s-calico--kube--controllers--54f4f89fbf--94z7j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00052cf80), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-54f4f89fbf-94z7j", "timestamp":"2025-05-14 00:01:26.723978263 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 00:01:27.148840 containerd[1514]: 2025-05-14 00:01:26.748 [INFO][4838] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 00:01:27.148840 containerd[1514]: 2025-05-14 00:01:26.959 [INFO][4838] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 00:01:27.148840 containerd[1514]: 2025-05-14 00:01:26.959 [INFO][4838] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 00:01:27.148840 containerd[1514]: 2025-05-14 00:01:27.074 [INFO][4838] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.12131ebf0e29745f861892d752ae721083dd0bdbdc1354a5297f72c2c9a19c97" host="localhost" May 14 00:01:27.148840 containerd[1514]: 2025-05-14 00:01:27.084 [INFO][4838] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 00:01:27.148840 containerd[1514]: 2025-05-14 00:01:27.090 [INFO][4838] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 00:01:27.148840 containerd[1514]: 2025-05-14 00:01:27.092 [INFO][4838] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 00:01:27.148840 containerd[1514]: 2025-05-14 00:01:27.095 [INFO][4838] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 00:01:27.148840 containerd[1514]: 2025-05-14 00:01:27.095 [INFO][4838] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.12131ebf0e29745f861892d752ae721083dd0bdbdc1354a5297f72c2c9a19c97" host="localhost" May 14 00:01:27.148840 containerd[1514]: 2025-05-14 00:01:27.097 [INFO][4838] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.12131ebf0e29745f861892d752ae721083dd0bdbdc1354a5297f72c2c9a19c97 May 14 00:01:27.148840 containerd[1514]: 2025-05-14 00:01:27.103 [INFO][4838] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.12131ebf0e29745f861892d752ae721083dd0bdbdc1354a5297f72c2c9a19c97" host="localhost" May 14 00:01:27.148840 containerd[1514]: 2025-05-14 00:01:27.114 [INFO][4838] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.12131ebf0e29745f861892d752ae721083dd0bdbdc1354a5297f72c2c9a19c97" host="localhost" May 14 00:01:27.148840 containerd[1514]: 2025-05-14 00:01:27.114 [INFO][4838] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.12131ebf0e29745f861892d752ae721083dd0bdbdc1354a5297f72c2c9a19c97" host="localhost" May 14 00:01:27.148840 containerd[1514]: 2025-05-14 00:01:27.114 [INFO][4838] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 00:01:27.148840 containerd[1514]: 2025-05-14 00:01:27.114 [INFO][4838] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="12131ebf0e29745f861892d752ae721083dd0bdbdc1354a5297f72c2c9a19c97" HandleID="k8s-pod-network.12131ebf0e29745f861892d752ae721083dd0bdbdc1354a5297f72c2c9a19c97" Workload="localhost-k8s-calico--kube--controllers--54f4f89fbf--94z7j-eth0" May 14 00:01:27.149609 containerd[1514]: 2025-05-14 00:01:27.120 [INFO][4773] cni-plugin/k8s.go 386: Populated endpoint ContainerID="12131ebf0e29745f861892d752ae721083dd0bdbdc1354a5297f72c2c9a19c97" Namespace="calico-system" Pod="calico-kube-controllers-54f4f89fbf-94z7j" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54f4f89fbf--94z7j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--54f4f89fbf--94z7j-eth0", GenerateName:"calico-kube-controllers-54f4f89fbf-", Namespace:"calico-system", SelfLink:"", UID:"2df24035-39a4-4f27-b78b-89f954595966", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 0, 0, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54f4f89fbf", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-54f4f89fbf-94z7j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie3b9cb92f91", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:01:27.149609 containerd[1514]: 2025-05-14 00:01:27.120 [INFO][4773] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="12131ebf0e29745f861892d752ae721083dd0bdbdc1354a5297f72c2c9a19c97" Namespace="calico-system" Pod="calico-kube-controllers-54f4f89fbf-94z7j" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54f4f89fbf--94z7j-eth0" May 14 00:01:27.149609 containerd[1514]: 2025-05-14 00:01:27.120 [INFO][4773] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie3b9cb92f91 ContainerID="12131ebf0e29745f861892d752ae721083dd0bdbdc1354a5297f72c2c9a19c97" Namespace="calico-system" Pod="calico-kube-controllers-54f4f89fbf-94z7j" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54f4f89fbf--94z7j-eth0" May 14 00:01:27.149609 containerd[1514]: 2025-05-14 00:01:27.130 [INFO][4773] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="12131ebf0e29745f861892d752ae721083dd0bdbdc1354a5297f72c2c9a19c97" Namespace="calico-system" Pod="calico-kube-controllers-54f4f89fbf-94z7j" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54f4f89fbf--94z7j-eth0" May 14 00:01:27.149609 containerd[1514]: 2025-05-14 00:01:27.131 [INFO][4773] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="12131ebf0e29745f861892d752ae721083dd0bdbdc1354a5297f72c2c9a19c97" Namespace="calico-system" Pod="calico-kube-controllers-54f4f89fbf-94z7j" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54f4f89fbf--94z7j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--54f4f89fbf--94z7j-eth0", GenerateName:"calico-kube-controllers-54f4f89fbf-", Namespace:"calico-system", SelfLink:"", UID:"2df24035-39a4-4f27-b78b-89f954595966", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 0, 0, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54f4f89fbf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"12131ebf0e29745f861892d752ae721083dd0bdbdc1354a5297f72c2c9a19c97", Pod:"calico-kube-controllers-54f4f89fbf-94z7j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie3b9cb92f91", MAC:"ae:94:f4:3e:ee:70", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:01:27.149609 containerd[1514]: 2025-05-14 00:01:27.141 [INFO][4773] cni-plugin/k8s.go 500: Wrote updated 
endpoint to datastore ContainerID="12131ebf0e29745f861892d752ae721083dd0bdbdc1354a5297f72c2c9a19c97" Namespace="calico-system" Pod="calico-kube-controllers-54f4f89fbf-94z7j" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54f4f89fbf--94z7j-eth0" May 14 00:01:27.170061 containerd[1514]: time="2025-05-14T00:01:27.170014547Z" level=info msg="CreateContainer within sandbox \"0a753989b2d7a474fa4a26aa5920aca51882bda39a7ccf603ce650310a3bd9de\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"adf957e861060b227820357d1c8f2da72313df01a01e147aef9f3cc098d06811\"" May 14 00:01:27.171778 containerd[1514]: time="2025-05-14T00:01:27.171369414Z" level=info msg="StartContainer for \"adf957e861060b227820357d1c8f2da72313df01a01e147aef9f3cc098d06811\"" May 14 00:01:27.182938 systemd[1]: Started cri-containerd-c71a21cd7bc3da2756f19ba60adbf721b2265e6e1ab4f603acc6c06f02edbcda.scope - libcontainer container c71a21cd7bc3da2756f19ba60adbf721b2265e6e1ab4f603acc6c06f02edbcda. May 14 00:01:27.192694 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 00:01:27.192851 containerd[1514]: time="2025-05-14T00:01:27.192447740Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 00:01:27.192851 containerd[1514]: time="2025-05-14T00:01:27.192520977Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 00:01:27.192851 containerd[1514]: time="2025-05-14T00:01:27.192535115Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:01:27.192851 containerd[1514]: time="2025-05-14T00:01:27.192636779Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:01:27.221075 systemd-networkd[1443]: caliab3aafd4884: Link UP May 14 00:01:27.222863 systemd-networkd[1443]: caliab3aafd4884: Gained carrier May 14 00:01:27.231945 systemd[1]: Started cri-containerd-adf957e861060b227820357d1c8f2da72313df01a01e147aef9f3cc098d06811.scope - libcontainer container adf957e861060b227820357d1c8f2da72313df01a01e147aef9f3cc098d06811. May 14 00:01:27.257177 containerd[1514]: 2025-05-14 00:01:26.403 [INFO][4792] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 14 00:01:27.257177 containerd[1514]: 2025-05-14 00:01:26.570 [INFO][4792] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--897df95dc--b4gq4-eth0 calico-apiserver-897df95dc- calico-apiserver 1e7c405b-f2c2-4bf1-a1f9-17ef323d3d37 789 0 2025-05-14 00:00:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:897df95dc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-897df95dc-b4gq4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliab3aafd4884 [] []}} ContainerID="75f4cf74eba5e2b746bbd60f0622b505916f63c596cff04d3f57b03736cd1192" Namespace="calico-apiserver" Pod="calico-apiserver-897df95dc-b4gq4" WorkloadEndpoint="localhost-k8s-calico--apiserver--897df95dc--b4gq4-" May 14 00:01:27.257177 containerd[1514]: 2025-05-14 00:01:26.570 [INFO][4792] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="75f4cf74eba5e2b746bbd60f0622b505916f63c596cff04d3f57b03736cd1192" Namespace="calico-apiserver" Pod="calico-apiserver-897df95dc-b4gq4" WorkloadEndpoint="localhost-k8s-calico--apiserver--897df95dc--b4gq4-eth0" May 14 00:01:27.257177 containerd[1514]: 2025-05-14 00:01:26.724 [INFO][4836] 
ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="75f4cf74eba5e2b746bbd60f0622b505916f63c596cff04d3f57b03736cd1192" HandleID="k8s-pod-network.75f4cf74eba5e2b746bbd60f0622b505916f63c596cff04d3f57b03736cd1192" Workload="localhost-k8s-calico--apiserver--897df95dc--b4gq4-eth0" May 14 00:01:27.257177 containerd[1514]: 2025-05-14 00:01:26.749 [INFO][4836] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="75f4cf74eba5e2b746bbd60f0622b505916f63c596cff04d3f57b03736cd1192" HandleID="k8s-pod-network.75f4cf74eba5e2b746bbd60f0622b505916f63c596cff04d3f57b03736cd1192" Workload="localhost-k8s-calico--apiserver--897df95dc--b4gq4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004b28a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-897df95dc-b4gq4", "timestamp":"2025-05-14 00:01:26.724047021 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 00:01:27.257177 containerd[1514]: 2025-05-14 00:01:26.749 [INFO][4836] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 00:01:27.257177 containerd[1514]: 2025-05-14 00:01:27.114 [INFO][4836] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 00:01:27.257177 containerd[1514]: 2025-05-14 00:01:27.115 [INFO][4836] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 00:01:27.257177 containerd[1514]: 2025-05-14 00:01:27.119 [INFO][4836] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.75f4cf74eba5e2b746bbd60f0622b505916f63c596cff04d3f57b03736cd1192" host="localhost" May 14 00:01:27.257177 containerd[1514]: 2025-05-14 00:01:27.128 [INFO][4836] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 00:01:27.257177 containerd[1514]: 2025-05-14 00:01:27.136 [INFO][4836] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 00:01:27.257177 containerd[1514]: 2025-05-14 00:01:27.143 [INFO][4836] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 00:01:27.257177 containerd[1514]: 2025-05-14 00:01:27.149 [INFO][4836] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 00:01:27.257177 containerd[1514]: 2025-05-14 00:01:27.149 [INFO][4836] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.75f4cf74eba5e2b746bbd60f0622b505916f63c596cff04d3f57b03736cd1192" host="localhost" May 14 00:01:27.257177 containerd[1514]: 2025-05-14 00:01:27.152 [INFO][4836] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.75f4cf74eba5e2b746bbd60f0622b505916f63c596cff04d3f57b03736cd1192 May 14 00:01:27.257177 containerd[1514]: 2025-05-14 00:01:27.160 [INFO][4836] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.75f4cf74eba5e2b746bbd60f0622b505916f63c596cff04d3f57b03736cd1192" host="localhost" May 14 00:01:27.257177 containerd[1514]: 2025-05-14 00:01:27.178 [INFO][4836] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.75f4cf74eba5e2b746bbd60f0622b505916f63c596cff04d3f57b03736cd1192" host="localhost" May 14 00:01:27.257177 containerd[1514]: 2025-05-14 00:01:27.178 [INFO][4836] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.75f4cf74eba5e2b746bbd60f0622b505916f63c596cff04d3f57b03736cd1192" host="localhost" May 14 00:01:27.257177 containerd[1514]: 2025-05-14 00:01:27.178 [INFO][4836] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 00:01:27.257177 containerd[1514]: 2025-05-14 00:01:27.178 [INFO][4836] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="75f4cf74eba5e2b746bbd60f0622b505916f63c596cff04d3f57b03736cd1192" HandleID="k8s-pod-network.75f4cf74eba5e2b746bbd60f0622b505916f63c596cff04d3f57b03736cd1192" Workload="localhost-k8s-calico--apiserver--897df95dc--b4gq4-eth0" May 14 00:01:27.258648 containerd[1514]: 2025-05-14 00:01:27.182 [INFO][4792] cni-plugin/k8s.go 386: Populated endpoint ContainerID="75f4cf74eba5e2b746bbd60f0622b505916f63c596cff04d3f57b03736cd1192" Namespace="calico-apiserver" Pod="calico-apiserver-897df95dc-b4gq4" WorkloadEndpoint="localhost-k8s-calico--apiserver--897df95dc--b4gq4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--897df95dc--b4gq4-eth0", GenerateName:"calico-apiserver-897df95dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"1e7c405b-f2c2-4bf1-a1f9-17ef323d3d37", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 0, 0, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"897df95dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-897df95dc-b4gq4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliab3aafd4884", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:01:27.258648 containerd[1514]: 2025-05-14 00:01:27.182 [INFO][4792] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="75f4cf74eba5e2b746bbd60f0622b505916f63c596cff04d3f57b03736cd1192" Namespace="calico-apiserver" Pod="calico-apiserver-897df95dc-b4gq4" WorkloadEndpoint="localhost-k8s-calico--apiserver--897df95dc--b4gq4-eth0" May 14 00:01:27.258648 containerd[1514]: 2025-05-14 00:01:27.182 [INFO][4792] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliab3aafd4884 ContainerID="75f4cf74eba5e2b746bbd60f0622b505916f63c596cff04d3f57b03736cd1192" Namespace="calico-apiserver" Pod="calico-apiserver-897df95dc-b4gq4" WorkloadEndpoint="localhost-k8s-calico--apiserver--897df95dc--b4gq4-eth0" May 14 00:01:27.258648 containerd[1514]: 2025-05-14 00:01:27.223 [INFO][4792] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="75f4cf74eba5e2b746bbd60f0622b505916f63c596cff04d3f57b03736cd1192" Namespace="calico-apiserver" Pod="calico-apiserver-897df95dc-b4gq4" WorkloadEndpoint="localhost-k8s-calico--apiserver--897df95dc--b4gq4-eth0" May 14 00:01:27.258648 containerd[1514]: 2025-05-14 00:01:27.223 [INFO][4792] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="75f4cf74eba5e2b746bbd60f0622b505916f63c596cff04d3f57b03736cd1192" Namespace="calico-apiserver" Pod="calico-apiserver-897df95dc-b4gq4" WorkloadEndpoint="localhost-k8s-calico--apiserver--897df95dc--b4gq4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--897df95dc--b4gq4-eth0", GenerateName:"calico-apiserver-897df95dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"1e7c405b-f2c2-4bf1-a1f9-17ef323d3d37", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 0, 0, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"897df95dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"75f4cf74eba5e2b746bbd60f0622b505916f63c596cff04d3f57b03736cd1192", Pod:"calico-apiserver-897df95dc-b4gq4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliab3aafd4884", MAC:"96:12:2d:14:6e:db", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:01:27.258648 containerd[1514]: 2025-05-14 00:01:27.241 [INFO][4792] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="75f4cf74eba5e2b746bbd60f0622b505916f63c596cff04d3f57b03736cd1192" Namespace="calico-apiserver" 
Pod="calico-apiserver-897df95dc-b4gq4" WorkloadEndpoint="localhost-k8s-calico--apiserver--897df95dc--b4gq4-eth0" May 14 00:01:27.261593 containerd[1514]: time="2025-05-14T00:01:27.261558126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-897df95dc-k8qvz,Uid:2e76bb18-a688-4a74-8c25-969dbaf341ad,Namespace:calico-apiserver,Attempt:4,} returns sandbox id \"6fe181b207c502b6f36855a2d01f5325c58802c750ecee141d93c3a9b56350c1\"" May 14 00:01:27.264489 containerd[1514]: time="2025-05-14T00:01:27.264453761Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 14 00:01:27.271307 systemd[1]: Started cri-containerd-12131ebf0e29745f861892d752ae721083dd0bdbdc1354a5297f72c2c9a19c97.scope - libcontainer container 12131ebf0e29745f861892d752ae721083dd0bdbdc1354a5297f72c2c9a19c97. May 14 00:01:27.277025 systemd-networkd[1443]: cali3d0ecb5e387: Link UP May 14 00:01:27.279809 systemd-networkd[1443]: cali3d0ecb5e387: Gained carrier May 14 00:01:27.299745 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 00:01:27.314572 containerd[1514]: time="2025-05-14T00:01:27.314429287Z" level=info msg="StartContainer for \"c71a21cd7bc3da2756f19ba60adbf721b2265e6e1ab4f603acc6c06f02edbcda\" returns successfully" May 14 00:01:27.314572 containerd[1514]: time="2025-05-14T00:01:27.314472904Z" level=info msg="StartContainer for \"adf957e861060b227820357d1c8f2da72313df01a01e147aef9f3cc098d06811\" returns successfully" May 14 00:01:27.314780 containerd[1514]: 2025-05-14 00:01:26.286 [INFO][4756] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 14 00:01:27.314780 containerd[1514]: 2025-05-14 00:01:26.311 [INFO][4756] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--s8nw4-eth0 csi-node-driver- calico-system fff7752d-bd43-4c8e-a187-4d071bd5cd0b 617 0 2025-05-14 00:00:53 +0000 
UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-s8nw4 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3d0ecb5e387 [] []}} ContainerID="e9678490ac7db543ddd97c51f72b79bca0040f003f255ae33e9b8858f99f976e" Namespace="calico-system" Pod="csi-node-driver-s8nw4" WorkloadEndpoint="localhost-k8s-csi--node--driver--s8nw4-" May 14 00:01:27.314780 containerd[1514]: 2025-05-14 00:01:26.311 [INFO][4756] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e9678490ac7db543ddd97c51f72b79bca0040f003f255ae33e9b8858f99f976e" Namespace="calico-system" Pod="csi-node-driver-s8nw4" WorkloadEndpoint="localhost-k8s-csi--node--driver--s8nw4-eth0" May 14 00:01:27.314780 containerd[1514]: 2025-05-14 00:01:26.726 [INFO][4791] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e9678490ac7db543ddd97c51f72b79bca0040f003f255ae33e9b8858f99f976e" HandleID="k8s-pod-network.e9678490ac7db543ddd97c51f72b79bca0040f003f255ae33e9b8858f99f976e" Workload="localhost-k8s-csi--node--driver--s8nw4-eth0" May 14 00:01:27.314780 containerd[1514]: 2025-05-14 00:01:26.749 [INFO][4791] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e9678490ac7db543ddd97c51f72b79bca0040f003f255ae33e9b8858f99f976e" HandleID="k8s-pod-network.e9678490ac7db543ddd97c51f72b79bca0040f003f255ae33e9b8858f99f976e" Workload="localhost-k8s-csi--node--driver--s8nw4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f48f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-s8nw4", "timestamp":"2025-05-14 00:01:26.726260251 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 00:01:27.314780 containerd[1514]: 2025-05-14 00:01:26.749 [INFO][4791] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 00:01:27.314780 containerd[1514]: 2025-05-14 00:01:27.178 [INFO][4791] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 14 00:01:27.314780 containerd[1514]: 2025-05-14 00:01:27.178 [INFO][4791] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 00:01:27.314780 containerd[1514]: 2025-05-14 00:01:27.185 [INFO][4791] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e9678490ac7db543ddd97c51f72b79bca0040f003f255ae33e9b8858f99f976e" host="localhost" May 14 00:01:27.314780 containerd[1514]: 2025-05-14 00:01:27.215 [INFO][4791] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 00:01:27.314780 containerd[1514]: 2025-05-14 00:01:27.227 [INFO][4791] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 00:01:27.314780 containerd[1514]: 2025-05-14 00:01:27.231 [INFO][4791] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 00:01:27.314780 containerd[1514]: 2025-05-14 00:01:27.239 [INFO][4791] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 00:01:27.314780 containerd[1514]: 2025-05-14 00:01:27.239 [INFO][4791] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e9678490ac7db543ddd97c51f72b79bca0040f003f255ae33e9b8858f99f976e" host="localhost" May 14 00:01:27.314780 containerd[1514]: 2025-05-14 00:01:27.243 [INFO][4791] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e9678490ac7db543ddd97c51f72b79bca0040f003f255ae33e9b8858f99f976e May 14 00:01:27.314780 containerd[1514]: 2025-05-14 
00:01:27.252 [INFO][4791] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e9678490ac7db543ddd97c51f72b79bca0040f003f255ae33e9b8858f99f976e" host="localhost" May 14 00:01:27.314780 containerd[1514]: 2025-05-14 00:01:27.264 [INFO][4791] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.e9678490ac7db543ddd97c51f72b79bca0040f003f255ae33e9b8858f99f976e" host="localhost" May 14 00:01:27.314780 containerd[1514]: 2025-05-14 00:01:27.265 [INFO][4791] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.e9678490ac7db543ddd97c51f72b79bca0040f003f255ae33e9b8858f99f976e" host="localhost" May 14 00:01:27.314780 containerd[1514]: 2025-05-14 00:01:27.265 [INFO][4791] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 00:01:27.314780 containerd[1514]: 2025-05-14 00:01:27.265 [INFO][4791] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="e9678490ac7db543ddd97c51f72b79bca0040f003f255ae33e9b8858f99f976e" HandleID="k8s-pod-network.e9678490ac7db543ddd97c51f72b79bca0040f003f255ae33e9b8858f99f976e" Workload="localhost-k8s-csi--node--driver--s8nw4-eth0" May 14 00:01:27.315412 containerd[1514]: 2025-05-14 00:01:27.272 [INFO][4756] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e9678490ac7db543ddd97c51f72b79bca0040f003f255ae33e9b8858f99f976e" Namespace="calico-system" Pod="csi-node-driver-s8nw4" WorkloadEndpoint="localhost-k8s-csi--node--driver--s8nw4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--s8nw4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fff7752d-bd43-4c8e-a187-4d071bd5cd0b", ResourceVersion:"617", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 0, 0, 53, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-s8nw4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3d0ecb5e387", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:01:27.315412 containerd[1514]: 2025-05-14 00:01:27.272 [INFO][4756] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="e9678490ac7db543ddd97c51f72b79bca0040f003f255ae33e9b8858f99f976e" Namespace="calico-system" Pod="csi-node-driver-s8nw4" WorkloadEndpoint="localhost-k8s-csi--node--driver--s8nw4-eth0" May 14 00:01:27.315412 containerd[1514]: 2025-05-14 00:01:27.272 [INFO][4756] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3d0ecb5e387 ContainerID="e9678490ac7db543ddd97c51f72b79bca0040f003f255ae33e9b8858f99f976e" Namespace="calico-system" Pod="csi-node-driver-s8nw4" WorkloadEndpoint="localhost-k8s-csi--node--driver--s8nw4-eth0" May 14 00:01:27.315412 containerd[1514]: 2025-05-14 00:01:27.280 [INFO][4756] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e9678490ac7db543ddd97c51f72b79bca0040f003f255ae33e9b8858f99f976e" Namespace="calico-system" 
Pod="csi-node-driver-s8nw4" WorkloadEndpoint="localhost-k8s-csi--node--driver--s8nw4-eth0" May 14 00:01:27.315412 containerd[1514]: 2025-05-14 00:01:27.281 [INFO][4756] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e9678490ac7db543ddd97c51f72b79bca0040f003f255ae33e9b8858f99f976e" Namespace="calico-system" Pod="csi-node-driver-s8nw4" WorkloadEndpoint="localhost-k8s-csi--node--driver--s8nw4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--s8nw4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fff7752d-bd43-4c8e-a187-4d071bd5cd0b", ResourceVersion:"617", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 0, 0, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e9678490ac7db543ddd97c51f72b79bca0040f003f255ae33e9b8858f99f976e", Pod:"csi-node-driver-s8nw4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3d0ecb5e387", MAC:"da:d8:39:7b:42:2c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:01:27.315412 
containerd[1514]: 2025-05-14 00:01:27.307 [INFO][4756] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e9678490ac7db543ddd97c51f72b79bca0040f003f255ae33e9b8858f99f976e" Namespace="calico-system" Pod="csi-node-driver-s8nw4" WorkloadEndpoint="localhost-k8s-csi--node--driver--s8nw4-eth0" May 14 00:01:27.317768 containerd[1514]: time="2025-05-14T00:01:27.314789048Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 00:01:27.317768 containerd[1514]: time="2025-05-14T00:01:27.315661808Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 00:01:27.317768 containerd[1514]: time="2025-05-14T00:01:27.315697049Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:01:27.317768 containerd[1514]: time="2025-05-14T00:01:27.315782851Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:01:27.345985 systemd[1]: Started cri-containerd-75f4cf74eba5e2b746bbd60f0622b505916f63c596cff04d3f57b03736cd1192.scope - libcontainer container 75f4cf74eba5e2b746bbd60f0622b505916f63c596cff04d3f57b03736cd1192. May 14 00:01:27.347516 containerd[1514]: time="2025-05-14T00:01:27.347402408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54f4f89fbf-94z7j,Uid:2df24035-39a4-4f27-b78b-89f954595966,Namespace:calico-system,Attempt:4,} returns sandbox id \"12131ebf0e29745f861892d752ae721083dd0bdbdc1354a5297f72c2c9a19c97\"" May 14 00:01:27.361282 containerd[1514]: time="2025-05-14T00:01:27.361121862Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 00:01:27.361282 containerd[1514]: time="2025-05-14T00:01:27.361216242Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 00:01:27.361282 containerd[1514]: time="2025-05-14T00:01:27.361233135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:01:27.361612 containerd[1514]: time="2025-05-14T00:01:27.361339359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:01:27.376210 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 00:01:27.390241 systemd[1]: Started cri-containerd-e9678490ac7db543ddd97c51f72b79bca0040f003f255ae33e9b8858f99f976e.scope - libcontainer container e9678490ac7db543ddd97c51f72b79bca0040f003f255ae33e9b8858f99f976e. 
May 14 00:01:27.412146 containerd[1514]: time="2025-05-14T00:01:27.411905628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-897df95dc-b4gq4,Uid:1e7c405b-f2c2-4bf1-a1f9-17ef323d3d37,Namespace:calico-apiserver,Attempt:4,} returns sandbox id \"75f4cf74eba5e2b746bbd60f0622b505916f63c596cff04d3f57b03736cd1192\"" May 14 00:01:27.414294 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 00:01:27.427887 containerd[1514]: time="2025-05-14T00:01:27.427840253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s8nw4,Uid:fff7752d-bd43-4c8e-a187-4d071bd5cd0b,Namespace:calico-system,Attempt:4,} returns sandbox id \"e9678490ac7db543ddd97c51f72b79bca0040f003f255ae33e9b8858f99f976e\"" May 14 00:01:27.505766 kubelet[2707]: E0514 00:01:27.505648 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:01:27.510997 kubelet[2707]: E0514 00:01:27.510966 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:01:27.878274 systemd[1]: Started sshd@14-10.0.0.99:22-10.0.0.1:43174.service - OpenSSH per-connection server daemon (10.0.0.1:43174). 
May 14 00:01:28.018405 kubelet[2707]: I0514 00:01:28.017089 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-l7hft" podStartSLOduration=43.01706528 podStartE2EDuration="43.01706528s" podCreationTimestamp="2025-05-14 00:00:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:01:27.749468523 +0000 UTC m=+59.693699429" watchObservedRunningTime="2025-05-14 00:01:28.01706528 +0000 UTC m=+59.961296185" May 14 00:01:28.034389 sshd[5383]: Accepted publickey for core from 10.0.0.1 port 43174 ssh2: RSA SHA256:2Vys6akM3bwlRlykLnopippME/f1tLQVgpTw56u59EA May 14 00:01:28.036631 sshd-session[5383]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:01:28.046836 systemd-logind[1492]: New session 15 of user core. May 14 00:01:28.055348 systemd[1]: Started session-15.scope - Session 15 of User core. May 14 00:01:28.133816 containerd[1514]: time="2025-05-14T00:01:28.133499778Z" level=info msg="StopPodSandbox for \"3aa8ff18c3cdd64b9e0733f41c1104c6cff817e7c3439ca1faa6927859f06a4e\"" May 14 00:01:28.133816 containerd[1514]: time="2025-05-14T00:01:28.133694198Z" level=info msg="TearDown network for sandbox \"3aa8ff18c3cdd64b9e0733f41c1104c6cff817e7c3439ca1faa6927859f06a4e\" successfully" May 14 00:01:28.133816 containerd[1514]: time="2025-05-14T00:01:28.133710270Z" level=info msg="StopPodSandbox for \"3aa8ff18c3cdd64b9e0733f41c1104c6cff817e7c3439ca1faa6927859f06a4e\" returns successfully" May 14 00:01:28.154378 containerd[1514]: time="2025-05-14T00:01:28.154228331Z" level=info msg="RemovePodSandbox for \"3aa8ff18c3cdd64b9e0733f41c1104c6cff817e7c3439ca1faa6927859f06a4e\"" May 14 00:01:28.174372 containerd[1514]: time="2025-05-14T00:01:28.174302985Z" level=info msg="Forcibly stopping sandbox \"3aa8ff18c3cdd64b9e0733f41c1104c6cff817e7c3439ca1faa6927859f06a4e\"" May 14 00:01:28.174526 
containerd[1514]: time="2025-05-14T00:01:28.174437103Z" level=info msg="TearDown network for sandbox \"3aa8ff18c3cdd64b9e0733f41c1104c6cff817e7c3439ca1faa6927859f06a4e\" successfully" May 14 00:01:28.197880 systemd-networkd[1443]: calif221ba846b1: Gained IPv6LL May 14 00:01:28.198934 systemd-networkd[1443]: calie3b9cb92f91: Gained IPv6LL May 14 00:01:28.289995 sshd[5396]: Connection closed by 10.0.0.1 port 43174 May 14 00:01:28.290365 sshd-session[5383]: pam_unix(sshd:session): session closed for user core May 14 00:01:28.294797 systemd[1]: sshd@14-10.0.0.99:22-10.0.0.1:43174.service: Deactivated successfully. May 14 00:01:28.297412 systemd[1]: session-15.scope: Deactivated successfully. May 14 00:01:28.298146 systemd-logind[1492]: Session 15 logged out. Waiting for processes to exit. May 14 00:01:28.299101 systemd-logind[1492]: Removed session 15. May 14 00:01:28.316910 containerd[1514]: time="2025-05-14T00:01:28.316835455Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3aa8ff18c3cdd64b9e0733f41c1104c6cff817e7c3439ca1faa6927859f06a4e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 14 00:01:28.317070 containerd[1514]: time="2025-05-14T00:01:28.316939102Z" level=info msg="RemovePodSandbox \"3aa8ff18c3cdd64b9e0733f41c1104c6cff817e7c3439ca1faa6927859f06a4e\" returns successfully" May 14 00:01:28.317732 containerd[1514]: time="2025-05-14T00:01:28.317525006Z" level=info msg="StopPodSandbox for \"480ac1495213aa3d0a5a9da72d46e52d8a39d1b487d1435d2ee30cc88cc32f5b\"" May 14 00:01:28.317732 containerd[1514]: time="2025-05-14T00:01:28.317642933Z" level=info msg="TearDown network for sandbox \"480ac1495213aa3d0a5a9da72d46e52d8a39d1b487d1435d2ee30cc88cc32f5b\" successfully" May 14 00:01:28.317732 containerd[1514]: time="2025-05-14T00:01:28.317654074Z" level=info msg="StopPodSandbox for \"480ac1495213aa3d0a5a9da72d46e52d8a39d1b487d1435d2ee30cc88cc32f5b\" returns successfully" May 14 00:01:28.318111 containerd[1514]: time="2025-05-14T00:01:28.318075068Z" level=info msg="RemovePodSandbox for \"480ac1495213aa3d0a5a9da72d46e52d8a39d1b487d1435d2ee30cc88cc32f5b\"" May 14 00:01:28.318166 containerd[1514]: time="2025-05-14T00:01:28.318114908Z" level=info msg="Forcibly stopping sandbox \"480ac1495213aa3d0a5a9da72d46e52d8a39d1b487d1435d2ee30cc88cc32f5b\"" May 14 00:01:28.318306 containerd[1514]: time="2025-05-14T00:01:28.318217474Z" level=info msg="TearDown network for sandbox \"480ac1495213aa3d0a5a9da72d46e52d8a39d1b487d1435d2ee30cc88cc32f5b\" successfully" May 14 00:01:28.389871 systemd-networkd[1443]: calidf841addfdf: Gained IPv6LL May 14 00:01:28.396739 kernel: bpftool[5423]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 14 00:01:28.409357 containerd[1514]: time="2025-05-14T00:01:28.409241637Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"480ac1495213aa3d0a5a9da72d46e52d8a39d1b487d1435d2ee30cc88cc32f5b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 14 00:01:28.409539 containerd[1514]: time="2025-05-14T00:01:28.409376667Z" level=info msg="RemovePodSandbox \"480ac1495213aa3d0a5a9da72d46e52d8a39d1b487d1435d2ee30cc88cc32f5b\" returns successfully" May 14 00:01:28.410260 containerd[1514]: time="2025-05-14T00:01:28.410004627Z" level=info msg="StopPodSandbox for \"9b7b927985a68231fa7f967eb41bfb07ba34b0eebc0c0e8ac2fed750c334af33\"" May 14 00:01:28.410260 containerd[1514]: time="2025-05-14T00:01:28.410157573Z" level=info msg="TearDown network for sandbox \"9b7b927985a68231fa7f967eb41bfb07ba34b0eebc0c0e8ac2fed750c334af33\" successfully" May 14 00:01:28.410260 containerd[1514]: time="2025-05-14T00:01:28.410174146Z" level=info msg="StopPodSandbox for \"9b7b927985a68231fa7f967eb41bfb07ba34b0eebc0c0e8ac2fed750c334af33\" returns successfully" May 14 00:01:28.411035 containerd[1514]: time="2025-05-14T00:01:28.410650500Z" level=info msg="RemovePodSandbox for \"9b7b927985a68231fa7f967eb41bfb07ba34b0eebc0c0e8ac2fed750c334af33\"" May 14 00:01:28.411035 containerd[1514]: time="2025-05-14T00:01:28.410717254Z" level=info msg="Forcibly stopping sandbox \"9b7b927985a68231fa7f967eb41bfb07ba34b0eebc0c0e8ac2fed750c334af33\"" May 14 00:01:28.411035 containerd[1514]: time="2025-05-14T00:01:28.410837405Z" level=info msg="TearDown network for sandbox \"9b7b927985a68231fa7f967eb41bfb07ba34b0eebc0c0e8ac2fed750c334af33\" successfully" May 14 00:01:28.512013 containerd[1514]: time="2025-05-14T00:01:28.511940554Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9b7b927985a68231fa7f967eb41bfb07ba34b0eebc0c0e8ac2fed750c334af33\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 14 00:01:28.512187 containerd[1514]: time="2025-05-14T00:01:28.512061095Z" level=info msg="RemovePodSandbox \"9b7b927985a68231fa7f967eb41bfb07ba34b0eebc0c0e8ac2fed750c334af33\" returns successfully" May 14 00:01:28.512873 containerd[1514]: time="2025-05-14T00:01:28.512813332Z" level=info msg="StopPodSandbox for \"9c631a2707d847700b09bca982553cb1fefe477a398db7f976618c022fe024f2\"" May 14 00:01:28.512978 containerd[1514]: time="2025-05-14T00:01:28.512937070Z" level=info msg="TearDown network for sandbox \"9c631a2707d847700b09bca982553cb1fefe477a398db7f976618c022fe024f2\" successfully" May 14 00:01:28.512978 containerd[1514]: time="2025-05-14T00:01:28.512947341Z" level=info msg="StopPodSandbox for \"9c631a2707d847700b09bca982553cb1fefe477a398db7f976618c022fe024f2\" returns successfully" May 14 00:01:28.513227 containerd[1514]: time="2025-05-14T00:01:28.513130979Z" level=info msg="RemovePodSandbox for \"9c631a2707d847700b09bca982553cb1fefe477a398db7f976618c022fe024f2\"" May 14 00:01:28.513227 containerd[1514]: time="2025-05-14T00:01:28.513156480Z" level=info msg="Forcibly stopping sandbox \"9c631a2707d847700b09bca982553cb1fefe477a398db7f976618c022fe024f2\"" May 14 00:01:28.513351 containerd[1514]: time="2025-05-14T00:01:28.513220598Z" level=info msg="TearDown network for sandbox \"9c631a2707d847700b09bca982553cb1fefe477a398db7f976618c022fe024f2\" successfully" May 14 00:01:28.525484 kubelet[2707]: E0514 00:01:28.525439 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:01:28.525653 kubelet[2707]: E0514 00:01:28.525591 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:01:28.665395 kubelet[2707]: I0514 00:01:28.664092 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/coredns-7db6d8ff4d-s9vl9" podStartSLOduration=43.664072047 podStartE2EDuration="43.664072047s" podCreationTimestamp="2025-05-14 00:00:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:01:28.017850874 +0000 UTC m=+59.962081779" watchObservedRunningTime="2025-05-14 00:01:28.664072047 +0000 UTC m=+60.608302952" May 14 00:01:28.719758 systemd-networkd[1443]: vxlan.calico: Link UP May 14 00:01:28.719770 systemd-networkd[1443]: vxlan.calico: Gained carrier May 14 00:01:28.782158 containerd[1514]: time="2025-05-14T00:01:28.782060449Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9c631a2707d847700b09bca982553cb1fefe477a398db7f976618c022fe024f2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 14 00:01:28.782326 containerd[1514]: time="2025-05-14T00:01:28.782202483Z" level=info msg="RemovePodSandbox \"9c631a2707d847700b09bca982553cb1fefe477a398db7f976618c022fe024f2\" returns successfully" May 14 00:01:28.783442 containerd[1514]: time="2025-05-14T00:01:28.783149922Z" level=info msg="StopPodSandbox for \"6ccc14825bdaece09e2898a998b5b8f834aae17099141aec253fccec8d52283d\"" May 14 00:01:28.783442 containerd[1514]: time="2025-05-14T00:01:28.783320713Z" level=info msg="TearDown network for sandbox \"6ccc14825bdaece09e2898a998b5b8f834aae17099141aec253fccec8d52283d\" successfully" May 14 00:01:28.783442 containerd[1514]: time="2025-05-14T00:01:28.783336094Z" level=info msg="StopPodSandbox for \"6ccc14825bdaece09e2898a998b5b8f834aae17099141aec253fccec8d52283d\" returns successfully" May 14 00:01:28.784527 containerd[1514]: time="2025-05-14T00:01:28.783829823Z" level=info msg="RemovePodSandbox for \"6ccc14825bdaece09e2898a998b5b8f834aae17099141aec253fccec8d52283d\"" May 14 00:01:28.784527 containerd[1514]: time="2025-05-14T00:01:28.783884633Z" level=info msg="Forcibly 
stopping sandbox \"6ccc14825bdaece09e2898a998b5b8f834aae17099141aec253fccec8d52283d\"" May 14 00:01:28.784527 containerd[1514]: time="2025-05-14T00:01:28.784008932Z" level=info msg="TearDown network for sandbox \"6ccc14825bdaece09e2898a998b5b8f834aae17099141aec253fccec8d52283d\" successfully" May 14 00:01:28.838007 systemd-networkd[1443]: cali05ad13f76e8: Gained IPv6LL May 14 00:01:29.010446 containerd[1514]: time="2025-05-14T00:01:29.010066122Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6ccc14825bdaece09e2898a998b5b8f834aae17099141aec253fccec8d52283d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 14 00:01:29.010446 containerd[1514]: time="2025-05-14T00:01:29.010159399Z" level=info msg="RemovePodSandbox \"6ccc14825bdaece09e2898a998b5b8f834aae17099141aec253fccec8d52283d\" returns successfully" May 14 00:01:29.011629 containerd[1514]: time="2025-05-14T00:01:29.011120330Z" level=info msg="StopPodSandbox for \"0f935d78b3bce36fd01b8944613455235aa13fedad1474a1d590f477f4001369\"" May 14 00:01:29.011629 containerd[1514]: time="2025-05-14T00:01:29.011262485Z" level=info msg="TearDown network for sandbox \"0f935d78b3bce36fd01b8944613455235aa13fedad1474a1d590f477f4001369\" successfully" May 14 00:01:29.011629 containerd[1514]: time="2025-05-14T00:01:29.011363436Z" level=info msg="StopPodSandbox for \"0f935d78b3bce36fd01b8944613455235aa13fedad1474a1d590f477f4001369\" returns successfully" May 14 00:01:29.011838 containerd[1514]: time="2025-05-14T00:01:29.011791643Z" level=info msg="RemovePodSandbox for \"0f935d78b3bce36fd01b8944613455235aa13fedad1474a1d590f477f4001369\"" May 14 00:01:29.011894 containerd[1514]: time="2025-05-14T00:01:29.011842304Z" level=info msg="Forcibly stopping sandbox \"0f935d78b3bce36fd01b8944613455235aa13fedad1474a1d590f477f4001369\"" May 14 00:01:29.012006 containerd[1514]: time="2025-05-14T00:01:29.011945679Z" level=info msg="TearDown network for 
sandbox \"0f935d78b3bce36fd01b8944613455235aa13fedad1474a1d590f477f4001369\" successfully" May 14 00:01:29.031103 systemd-networkd[1443]: cali3d0ecb5e387: Gained IPv6LL May 14 00:01:29.160785 systemd-networkd[1443]: caliab3aafd4884: Gained IPv6LL May 14 00:01:29.528988 kubelet[2707]: E0514 00:01:29.528953 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:01:29.529483 kubelet[2707]: E0514 00:01:29.529259 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:01:29.877215 containerd[1514]: time="2025-05-14T00:01:29.877155779Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0f935d78b3bce36fd01b8944613455235aa13fedad1474a1d590f477f4001369\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 14 00:01:29.877721 containerd[1514]: time="2025-05-14T00:01:29.877240972Z" level=info msg="RemovePodSandbox \"0f935d78b3bce36fd01b8944613455235aa13fedad1474a1d590f477f4001369\" returns successfully" May 14 00:01:29.877760 containerd[1514]: time="2025-05-14T00:01:29.877724217Z" level=info msg="StopPodSandbox for \"a59363c1d3c7d2a1291c2094e94fec02764ae85f2b6b9ee946b3f91ebda7b84d\"" May 14 00:01:29.877862 containerd[1514]: time="2025-05-14T00:01:29.877839053Z" level=info msg="TearDown network for sandbox \"a59363c1d3c7d2a1291c2094e94fec02764ae85f2b6b9ee946b3f91ebda7b84d\" successfully" May 14 00:01:29.877862 containerd[1514]: time="2025-05-14T00:01:29.877852787Z" level=info msg="StopPodSandbox for \"a59363c1d3c7d2a1291c2094e94fec02764ae85f2b6b9ee946b3f91ebda7b84d\" returns successfully" May 14 00:01:29.878699 containerd[1514]: time="2025-05-14T00:01:29.878100201Z" level=info msg="RemovePodSandbox for \"a59363c1d3c7d2a1291c2094e94fec02764ae85f2b6b9ee946b3f91ebda7b84d\"" May 14 00:01:29.878699 containerd[1514]: time="2025-05-14T00:01:29.878121510Z" level=info msg="Forcibly stopping sandbox \"a59363c1d3c7d2a1291c2094e94fec02764ae85f2b6b9ee946b3f91ebda7b84d\"" May 14 00:01:29.878699 containerd[1514]: time="2025-05-14T00:01:29.878187147Z" level=info msg="TearDown network for sandbox \"a59363c1d3c7d2a1291c2094e94fec02764ae85f2b6b9ee946b3f91ebda7b84d\" successfully" May 14 00:01:30.182010 systemd-networkd[1443]: vxlan.calico: Gained IPv6LL May 14 00:01:30.433239 containerd[1514]: time="2025-05-14T00:01:30.432905458Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a59363c1d3c7d2a1291c2094e94fec02764ae85f2b6b9ee946b3f91ebda7b84d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 14 00:01:30.433896 containerd[1514]: time="2025-05-14T00:01:30.433559834Z" level=info msg="RemovePodSandbox \"a59363c1d3c7d2a1291c2094e94fec02764ae85f2b6b9ee946b3f91ebda7b84d\" returns successfully" May 14 00:01:30.434487 containerd[1514]: time="2025-05-14T00:01:30.434445515Z" level=info msg="StopPodSandbox for \"bc2a454568bf86469e931f59c54e7abd0fa1f150d50a56a12771400b2ec051dc\"" May 14 00:01:30.434699 containerd[1514]: time="2025-05-14T00:01:30.434568366Z" level=info msg="TearDown network for sandbox \"bc2a454568bf86469e931f59c54e7abd0fa1f150d50a56a12771400b2ec051dc\" successfully" May 14 00:01:30.434699 containerd[1514]: time="2025-05-14T00:01:30.434625389Z" level=info msg="StopPodSandbox for \"bc2a454568bf86469e931f59c54e7abd0fa1f150d50a56a12771400b2ec051dc\" returns successfully" May 14 00:01:30.435712 containerd[1514]: time="2025-05-14T00:01:30.435023714Z" level=info msg="RemovePodSandbox for \"bc2a454568bf86469e931f59c54e7abd0fa1f150d50a56a12771400b2ec051dc\"" May 14 00:01:30.435712 containerd[1514]: time="2025-05-14T00:01:30.435076539Z" level=info msg="Forcibly stopping sandbox \"bc2a454568bf86469e931f59c54e7abd0fa1f150d50a56a12771400b2ec051dc\"" May 14 00:01:30.435712 containerd[1514]: time="2025-05-14T00:01:30.435191977Z" level=info msg="TearDown network for sandbox \"bc2a454568bf86469e931f59c54e7abd0fa1f150d50a56a12771400b2ec051dc\" successfully" May 14 00:01:30.557230 containerd[1514]: time="2025-05-14T00:01:30.552361122Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bc2a454568bf86469e931f59c54e7abd0fa1f150d50a56a12771400b2ec051dc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 14 00:01:30.557230 containerd[1514]: time="2025-05-14T00:01:30.553388508Z" level=info msg="RemovePodSandbox \"bc2a454568bf86469e931f59c54e7abd0fa1f150d50a56a12771400b2ec051dc\" returns successfully" May 14 00:01:30.557230 containerd[1514]: time="2025-05-14T00:01:30.554946748Z" level=info msg="StopPodSandbox for \"a50e66f10e70d7e3b091714c2f9482e715bdd3eab0f8747ec9fb5bf1ceca9a22\"" May 14 00:01:30.557230 containerd[1514]: time="2025-05-14T00:01:30.555107938Z" level=info msg="TearDown network for sandbox \"a50e66f10e70d7e3b091714c2f9482e715bdd3eab0f8747ec9fb5bf1ceca9a22\" successfully" May 14 00:01:30.557230 containerd[1514]: time="2025-05-14T00:01:30.555122314Z" level=info msg="StopPodSandbox for \"a50e66f10e70d7e3b091714c2f9482e715bdd3eab0f8747ec9fb5bf1ceca9a22\" returns successfully" May 14 00:01:30.557230 containerd[1514]: time="2025-05-14T00:01:30.555441537Z" level=info msg="RemovePodSandbox for \"a50e66f10e70d7e3b091714c2f9482e715bdd3eab0f8747ec9fb5bf1ceca9a22\"" May 14 00:01:30.557230 containerd[1514]: time="2025-05-14T00:01:30.555479224Z" level=info msg="Forcibly stopping sandbox \"a50e66f10e70d7e3b091714c2f9482e715bdd3eab0f8747ec9fb5bf1ceca9a22\"" May 14 00:01:30.557230 containerd[1514]: time="2025-05-14T00:01:30.555561944Z" level=info msg="TearDown network for sandbox \"a50e66f10e70d7e3b091714c2f9482e715bdd3eab0f8747ec9fb5bf1ceca9a22\" successfully" May 14 00:01:30.943716 containerd[1514]: time="2025-05-14T00:01:30.943431061Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a50e66f10e70d7e3b091714c2f9482e715bdd3eab0f8747ec9fb5bf1ceca9a22\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 14 00:01:30.943716 containerd[1514]: time="2025-05-14T00:01:30.943536111Z" level=info msg="RemovePodSandbox \"a50e66f10e70d7e3b091714c2f9482e715bdd3eab0f8747ec9fb5bf1ceca9a22\" returns successfully" May 14 00:01:30.944292 containerd[1514]: time="2025-05-14T00:01:30.944232722Z" level=info msg="StopPodSandbox for \"5153e471c8ee61334cdd625b198c80fe354b8f46b0ecd03d8a954ed630e292dd\"" May 14 00:01:30.944457 containerd[1514]: time="2025-05-14T00:01:30.944404220Z" level=info msg="TearDown network for sandbox \"5153e471c8ee61334cdd625b198c80fe354b8f46b0ecd03d8a954ed630e292dd\" successfully" May 14 00:01:30.944457 containerd[1514]: time="2025-05-14T00:01:30.944428904Z" level=info msg="StopPodSandbox for \"5153e471c8ee61334cdd625b198c80fe354b8f46b0ecd03d8a954ed630e292dd\" returns successfully" May 14 00:01:30.948072 containerd[1514]: time="2025-05-14T00:01:30.948028442Z" level=info msg="RemovePodSandbox for \"5153e471c8ee61334cdd625b198c80fe354b8f46b0ecd03d8a954ed630e292dd\"" May 14 00:01:30.948072 containerd[1514]: time="2025-05-14T00:01:30.948064296Z" level=info msg="Forcibly stopping sandbox \"5153e471c8ee61334cdd625b198c80fe354b8f46b0ecd03d8a954ed630e292dd\"" May 14 00:01:30.948361 containerd[1514]: time="2025-05-14T00:01:30.948239010Z" level=info msg="TearDown network for sandbox \"5153e471c8ee61334cdd625b198c80fe354b8f46b0ecd03d8a954ed630e292dd\" successfully" May 14 00:01:30.964708 containerd[1514]: time="2025-05-14T00:01:30.964607749Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5153e471c8ee61334cdd625b198c80fe354b8f46b0ecd03d8a954ed630e292dd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 14 00:01:30.964917 containerd[1514]: time="2025-05-14T00:01:30.964728216Z" level=info msg="RemovePodSandbox \"5153e471c8ee61334cdd625b198c80fe354b8f46b0ecd03d8a954ed630e292dd\" returns successfully" May 14 00:01:30.965948 containerd[1514]: time="2025-05-14T00:01:30.965436208Z" level=info msg="StopPodSandbox for \"5ff8c9fd97c6518fdd08652e3ba460c6027a4eac04de742ce360c823f86d6834\"" May 14 00:01:30.965948 containerd[1514]: time="2025-05-14T00:01:30.965562284Z" level=info msg="TearDown network for sandbox \"5ff8c9fd97c6518fdd08652e3ba460c6027a4eac04de742ce360c823f86d6834\" successfully" May 14 00:01:30.965948 containerd[1514]: time="2025-05-14T00:01:30.965572883Z" level=info msg="StopPodSandbox for \"5ff8c9fd97c6518fdd08652e3ba460c6027a4eac04de742ce360c823f86d6834\" returns successfully" May 14 00:01:30.966222 containerd[1514]: time="2025-05-14T00:01:30.966167482Z" level=info msg="RemovePodSandbox for \"5ff8c9fd97c6518fdd08652e3ba460c6027a4eac04de742ce360c823f86d6834\"" May 14 00:01:30.966267 containerd[1514]: time="2025-05-14T00:01:30.966224284Z" level=info msg="Forcibly stopping sandbox \"5ff8c9fd97c6518fdd08652e3ba460c6027a4eac04de742ce360c823f86d6834\"" May 14 00:01:30.966405 containerd[1514]: time="2025-05-14T00:01:30.966344579Z" level=info msg="TearDown network for sandbox \"5ff8c9fd97c6518fdd08652e3ba460c6027a4eac04de742ce360c823f86d6834\" successfully" May 14 00:01:31.010546 containerd[1514]: time="2025-05-14T00:01:31.010449946Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5ff8c9fd97c6518fdd08652e3ba460c6027a4eac04de742ce360c823f86d6834\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 14 00:01:31.010546 containerd[1514]: time="2025-05-14T00:01:31.010533976Z" level=info msg="RemovePodSandbox \"5ff8c9fd97c6518fdd08652e3ba460c6027a4eac04de742ce360c823f86d6834\" returns successfully" May 14 00:01:31.011158 containerd[1514]: time="2025-05-14T00:01:31.011096120Z" level=info msg="StopPodSandbox for \"7ed749f47db0245d2e0a4f7af5389ddc9dda8bc8f2c01a832d32e4e23b2740d1\"" May 14 00:01:31.012754 containerd[1514]: time="2025-05-14T00:01:31.011247011Z" level=info msg="TearDown network for sandbox \"7ed749f47db0245d2e0a4f7af5389ddc9dda8bc8f2c01a832d32e4e23b2740d1\" successfully" May 14 00:01:31.012754 containerd[1514]: time="2025-05-14T00:01:31.011272918Z" level=info msg="StopPodSandbox for \"7ed749f47db0245d2e0a4f7af5389ddc9dda8bc8f2c01a832d32e4e23b2740d1\" returns successfully" May 14 00:01:31.012754 containerd[1514]: time="2025-05-14T00:01:31.011539078Z" level=info msg="RemovePodSandbox for \"7ed749f47db0245d2e0a4f7af5389ddc9dda8bc8f2c01a832d32e4e23b2740d1\"" May 14 00:01:31.012754 containerd[1514]: time="2025-05-14T00:01:31.011561949Z" level=info msg="Forcibly stopping sandbox \"7ed749f47db0245d2e0a4f7af5389ddc9dda8bc8f2c01a832d32e4e23b2740d1\"" May 14 00:01:31.012754 containerd[1514]: time="2025-05-14T00:01:31.011651891Z" level=info msg="TearDown network for sandbox \"7ed749f47db0245d2e0a4f7af5389ddc9dda8bc8f2c01a832d32e4e23b2740d1\" successfully" May 14 00:01:31.101304 containerd[1514]: time="2025-05-14T00:01:31.101081994Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7ed749f47db0245d2e0a4f7af5389ddc9dda8bc8f2c01a832d32e4e23b2740d1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 14 00:01:31.101304 containerd[1514]: time="2025-05-14T00:01:31.101173428Z" level=info msg="RemovePodSandbox \"7ed749f47db0245d2e0a4f7af5389ddc9dda8bc8f2c01a832d32e4e23b2740d1\" returns successfully" May 14 00:01:31.105411 containerd[1514]: time="2025-05-14T00:01:31.102801381Z" level=info msg="StopPodSandbox for \"e2092947f49d25df9acfe4e475034b82c94ff3818625b7411df3f93621b96e4f\"" May 14 00:01:31.105411 containerd[1514]: time="2025-05-14T00:01:31.102950981Z" level=info msg="TearDown network for sandbox \"e2092947f49d25df9acfe4e475034b82c94ff3818625b7411df3f93621b96e4f\" successfully" May 14 00:01:31.105411 containerd[1514]: time="2025-05-14T00:01:31.102963053Z" level=info msg="StopPodSandbox for \"e2092947f49d25df9acfe4e475034b82c94ff3818625b7411df3f93621b96e4f\" returns successfully" May 14 00:01:31.105411 containerd[1514]: time="2025-05-14T00:01:31.103335744Z" level=info msg="RemovePodSandbox for \"e2092947f49d25df9acfe4e475034b82c94ff3818625b7411df3f93621b96e4f\"" May 14 00:01:31.105411 containerd[1514]: time="2025-05-14T00:01:31.103354158Z" level=info msg="Forcibly stopping sandbox \"e2092947f49d25df9acfe4e475034b82c94ff3818625b7411df3f93621b96e4f\"" May 14 00:01:31.105411 containerd[1514]: time="2025-05-14T00:01:31.103440844Z" level=info msg="TearDown network for sandbox \"e2092947f49d25df9acfe4e475034b82c94ff3818625b7411df3f93621b96e4f\" successfully" May 14 00:01:31.121438 containerd[1514]: time="2025-05-14T00:01:31.121353633Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e2092947f49d25df9acfe4e475034b82c94ff3818625b7411df3f93621b96e4f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 14 00:01:31.121771 containerd[1514]: time="2025-05-14T00:01:31.121625211Z" level=info msg="RemovePodSandbox \"e2092947f49d25df9acfe4e475034b82c94ff3818625b7411df3f93621b96e4f\" returns successfully" May 14 00:01:31.122325 containerd[1514]: time="2025-05-14T00:01:31.122163191Z" level=info msg="StopPodSandbox for \"fded8c2ab41ef0ba621ee0a8f083bbb59adf971e6c7a0566ae097d148124b88d\"" May 14 00:01:31.122325 containerd[1514]: time="2025-05-14T00:01:31.122267338Z" level=info msg="TearDown network for sandbox \"fded8c2ab41ef0ba621ee0a8f083bbb59adf971e6c7a0566ae097d148124b88d\" successfully" May 14 00:01:31.122325 containerd[1514]: time="2025-05-14T00:01:31.122277618Z" level=info msg="StopPodSandbox for \"fded8c2ab41ef0ba621ee0a8f083bbb59adf971e6c7a0566ae097d148124b88d\" returns successfully" May 14 00:01:31.127829 containerd[1514]: time="2025-05-14T00:01:31.125167535Z" level=info msg="RemovePodSandbox for \"fded8c2ab41ef0ba621ee0a8f083bbb59adf971e6c7a0566ae097d148124b88d\"" May 14 00:01:31.127829 containerd[1514]: time="2025-05-14T00:01:31.125191468Z" level=info msg="Forcibly stopping sandbox \"fded8c2ab41ef0ba621ee0a8f083bbb59adf971e6c7a0566ae097d148124b88d\"" May 14 00:01:31.127829 containerd[1514]: time="2025-05-14T00:01:31.125262837Z" level=info msg="TearDown network for sandbox \"fded8c2ab41ef0ba621ee0a8f083bbb59adf971e6c7a0566ae097d148124b88d\" successfully" May 14 00:01:31.150980 containerd[1514]: time="2025-05-14T00:01:31.150893943Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fded8c2ab41ef0ba621ee0a8f083bbb59adf971e6c7a0566ae097d148124b88d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 14 00:01:31.151154 containerd[1514]: time="2025-05-14T00:01:31.151002940Z" level=info msg="RemovePodSandbox \"fded8c2ab41ef0ba621ee0a8f083bbb59adf971e6c7a0566ae097d148124b88d\" returns successfully" May 14 00:01:31.155547 containerd[1514]: time="2025-05-14T00:01:31.155512216Z" level=info msg="StopPodSandbox for \"3c87715a71350bd4a01f089067c08707893b6abc6c90a878d1ced434f2e7f0e1\"" May 14 00:01:31.155851 containerd[1514]: time="2025-05-14T00:01:31.155806626Z" level=info msg="TearDown network for sandbox \"3c87715a71350bd4a01f089067c08707893b6abc6c90a878d1ced434f2e7f0e1\" successfully" May 14 00:01:31.155851 containerd[1514]: time="2025-05-14T00:01:31.155825230Z" level=info msg="StopPodSandbox for \"3c87715a71350bd4a01f089067c08707893b6abc6c90a878d1ced434f2e7f0e1\" returns successfully" May 14 00:01:31.156151 containerd[1514]: time="2025-05-14T00:01:31.156113038Z" level=info msg="RemovePodSandbox for \"3c87715a71350bd4a01f089067c08707893b6abc6c90a878d1ced434f2e7f0e1\"" May 14 00:01:31.156151 containerd[1514]: time="2025-05-14T00:01:31.156140538Z" level=info msg="Forcibly stopping sandbox \"3c87715a71350bd4a01f089067c08707893b6abc6c90a878d1ced434f2e7f0e1\"" May 14 00:01:31.156272 containerd[1514]: time="2025-05-14T00:01:31.156213380Z" level=info msg="TearDown network for sandbox \"3c87715a71350bd4a01f089067c08707893b6abc6c90a878d1ced434f2e7f0e1\" successfully" May 14 00:01:31.169430 containerd[1514]: time="2025-05-14T00:01:31.169357756Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3c87715a71350bd4a01f089067c08707893b6abc6c90a878d1ced434f2e7f0e1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 14 00:01:31.169430 containerd[1514]: time="2025-05-14T00:01:31.169433191Z" level=info msg="RemovePodSandbox \"3c87715a71350bd4a01f089067c08707893b6abc6c90a878d1ced434f2e7f0e1\" returns successfully" May 14 00:01:31.170521 containerd[1514]: time="2025-05-14T00:01:31.170458378Z" level=info msg="StopPodSandbox for \"5f56112abc9fc3ca7322a0e768cadf4b3aef097ac529d407d43a9d9667691957\"" May 14 00:01:31.170807 containerd[1514]: time="2025-05-14T00:01:31.170572464Z" level=info msg="TearDown network for sandbox \"5f56112abc9fc3ca7322a0e768cadf4b3aef097ac529d407d43a9d9667691957\" successfully" May 14 00:01:31.170807 containerd[1514]: time="2025-05-14T00:01:31.170589535Z" level=info msg="StopPodSandbox for \"5f56112abc9fc3ca7322a0e768cadf4b3aef097ac529d407d43a9d9667691957\" returns successfully" May 14 00:01:31.170976 containerd[1514]: time="2025-05-14T00:01:31.170933876Z" level=info msg="RemovePodSandbox for \"5f56112abc9fc3ca7322a0e768cadf4b3aef097ac529d407d43a9d9667691957\"" May 14 00:01:31.170976 containerd[1514]: time="2025-05-14T00:01:31.170961445Z" level=info msg="Forcibly stopping sandbox \"5f56112abc9fc3ca7322a0e768cadf4b3aef097ac529d407d43a9d9667691957\"" May 14 00:01:31.171082 containerd[1514]: time="2025-05-14T00:01:31.171032173Z" level=info msg="TearDown network for sandbox \"5f56112abc9fc3ca7322a0e768cadf4b3aef097ac529d407d43a9d9667691957\" successfully" May 14 00:01:31.184602 containerd[1514]: time="2025-05-14T00:01:31.184312503Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5f56112abc9fc3ca7322a0e768cadf4b3aef097ac529d407d43a9d9667691957\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 14 00:01:31.184602 containerd[1514]: time="2025-05-14T00:01:31.184391035Z" level=info msg="RemovePodSandbox \"5f56112abc9fc3ca7322a0e768cadf4b3aef097ac529d407d43a9d9667691957\" returns successfully" May 14 00:01:31.184964 containerd[1514]: time="2025-05-14T00:01:31.184863737Z" level=info msg="StopPodSandbox for \"56eda22832f68724cb3dbe0858a58786d27cf26c5f9e8dafe8d887c586839ed7\"" May 14 00:01:31.185133 containerd[1514]: time="2025-05-14T00:01:31.184973124Z" level=info msg="TearDown network for sandbox \"56eda22832f68724cb3dbe0858a58786d27cf26c5f9e8dafe8d887c586839ed7\" successfully" May 14 00:01:31.185133 containerd[1514]: time="2025-05-14T00:01:31.184983422Z" level=info msg="StopPodSandbox for \"56eda22832f68724cb3dbe0858a58786d27cf26c5f9e8dafe8d887c586839ed7\" returns successfully" May 14 00:01:31.185231 containerd[1514]: time="2025-05-14T00:01:31.185212545Z" level=info msg="RemovePodSandbox for \"56eda22832f68724cb3dbe0858a58786d27cf26c5f9e8dafe8d887c586839ed7\"" May 14 00:01:31.185267 containerd[1514]: time="2025-05-14T00:01:31.185230338Z" level=info msg="Forcibly stopping sandbox \"56eda22832f68724cb3dbe0858a58786d27cf26c5f9e8dafe8d887c586839ed7\"" May 14 00:01:31.185368 containerd[1514]: time="2025-05-14T00:01:31.185319398Z" level=info msg="TearDown network for sandbox \"56eda22832f68724cb3dbe0858a58786d27cf26c5f9e8dafe8d887c586839ed7\" successfully" May 14 00:01:31.201669 containerd[1514]: time="2025-05-14T00:01:31.201535129Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"56eda22832f68724cb3dbe0858a58786d27cf26c5f9e8dafe8d887c586839ed7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 14 00:01:31.202320 containerd[1514]: time="2025-05-14T00:01:31.201940450Z" level=info msg="RemovePodSandbox \"56eda22832f68724cb3dbe0858a58786d27cf26c5f9e8dafe8d887c586839ed7\" returns successfully" May 14 00:01:31.202504 containerd[1514]: time="2025-05-14T00:01:31.202485912Z" level=info msg="StopPodSandbox for \"9d75d1c4e109dd828996b475a7d4dc5d2cc445035adf6750cc05772548793cfd\"" May 14 00:01:31.202772 containerd[1514]: time="2025-05-14T00:01:31.202755929Z" level=info msg="TearDown network for sandbox \"9d75d1c4e109dd828996b475a7d4dc5d2cc445035adf6750cc05772548793cfd\" successfully" May 14 00:01:31.202947 containerd[1514]: time="2025-05-14T00:01:31.202913613Z" level=info msg="StopPodSandbox for \"9d75d1c4e109dd828996b475a7d4dc5d2cc445035adf6750cc05772548793cfd\" returns successfully" May 14 00:01:31.209593 containerd[1514]: time="2025-05-14T00:01:31.203340442Z" level=info msg="RemovePodSandbox for \"9d75d1c4e109dd828996b475a7d4dc5d2cc445035adf6750cc05772548793cfd\"" May 14 00:01:31.209593 containerd[1514]: time="2025-05-14T00:01:31.203362272Z" level=info msg="Forcibly stopping sandbox \"9d75d1c4e109dd828996b475a7d4dc5d2cc445035adf6750cc05772548793cfd\"" May 14 00:01:31.209593 containerd[1514]: time="2025-05-14T00:01:31.203439731Z" level=info msg="TearDown network for sandbox \"9d75d1c4e109dd828996b475a7d4dc5d2cc445035adf6750cc05772548793cfd\" successfully" May 14 00:01:31.237804 containerd[1514]: time="2025-05-14T00:01:31.233219804Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9d75d1c4e109dd828996b475a7d4dc5d2cc445035adf6750cc05772548793cfd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 14 00:01:31.237804 containerd[1514]: time="2025-05-14T00:01:31.233292795Z" level=info msg="RemovePodSandbox \"9d75d1c4e109dd828996b475a7d4dc5d2cc445035adf6750cc05772548793cfd\" returns successfully" May 14 00:01:31.237804 containerd[1514]: time="2025-05-14T00:01:31.233971218Z" level=info msg="StopPodSandbox for \"b4380fe2399a0ffdd3b39405d8f44d44bd1b9a4ee632f479c3958de4626fb705\"" May 14 00:01:31.237804 containerd[1514]: time="2025-05-14T00:01:31.234142537Z" level=info msg="TearDown network for sandbox \"b4380fe2399a0ffdd3b39405d8f44d44bd1b9a4ee632f479c3958de4626fb705\" successfully" May 14 00:01:31.237804 containerd[1514]: time="2025-05-14T00:01:31.234156753Z" level=info msg="StopPodSandbox for \"b4380fe2399a0ffdd3b39405d8f44d44bd1b9a4ee632f479c3958de4626fb705\" returns successfully" May 14 00:01:31.237804 containerd[1514]: time="2025-05-14T00:01:31.234506773Z" level=info msg="RemovePodSandbox for \"b4380fe2399a0ffdd3b39405d8f44d44bd1b9a4ee632f479c3958de4626fb705\"" May 14 00:01:31.237804 containerd[1514]: time="2025-05-14T00:01:31.234527841Z" level=info msg="Forcibly stopping sandbox \"b4380fe2399a0ffdd3b39405d8f44d44bd1b9a4ee632f479c3958de4626fb705\"" May 14 00:01:31.237804 containerd[1514]: time="2025-05-14T00:01:31.234609188Z" level=info msg="TearDown network for sandbox \"b4380fe2399a0ffdd3b39405d8f44d44bd1b9a4ee632f479c3958de4626fb705\" successfully" May 14 00:01:31.261411 containerd[1514]: time="2025-05-14T00:01:31.261172094Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b4380fe2399a0ffdd3b39405d8f44d44bd1b9a4ee632f479c3958de4626fb705\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 14 00:01:31.261411 containerd[1514]: time="2025-05-14T00:01:31.261264660Z" level=info msg="RemovePodSandbox \"b4380fe2399a0ffdd3b39405d8f44d44bd1b9a4ee632f479c3958de4626fb705\" returns successfully" May 14 00:01:31.263192 containerd[1514]: time="2025-05-14T00:01:31.262973831Z" level=info msg="StopPodSandbox for \"537e48c11bd0c7df0af2d565c33274a5d1005847da8bc6c2a9ca15755d31570d\"" May 14 00:01:31.263685 containerd[1514]: time="2025-05-14T00:01:31.263275263Z" level=info msg="TearDown network for sandbox \"537e48c11bd0c7df0af2d565c33274a5d1005847da8bc6c2a9ca15755d31570d\" successfully" May 14 00:01:31.263685 containerd[1514]: time="2025-05-14T00:01:31.263302232Z" level=info msg="StopPodSandbox for \"537e48c11bd0c7df0af2d565c33274a5d1005847da8bc6c2a9ca15755d31570d\" returns successfully" May 14 00:01:31.265431 containerd[1514]: time="2025-05-14T00:01:31.264579985Z" level=info msg="RemovePodSandbox for \"537e48c11bd0c7df0af2d565c33274a5d1005847da8bc6c2a9ca15755d31570d\"" May 14 00:01:31.265431 containerd[1514]: time="2025-05-14T00:01:31.264648849Z" level=info msg="Forcibly stopping sandbox \"537e48c11bd0c7df0af2d565c33274a5d1005847da8bc6c2a9ca15755d31570d\"" May 14 00:01:31.265539 containerd[1514]: time="2025-05-14T00:01:31.265410451Z" level=info msg="TearDown network for sandbox \"537e48c11bd0c7df0af2d565c33274a5d1005847da8bc6c2a9ca15755d31570d\" successfully" May 14 00:01:31.296179 containerd[1514]: time="2025-05-14T00:01:31.293149286Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"537e48c11bd0c7df0af2d565c33274a5d1005847da8bc6c2a9ca15755d31570d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 14 00:01:31.296179 containerd[1514]: time="2025-05-14T00:01:31.293479442Z" level=info msg="RemovePodSandbox \"537e48c11bd0c7df0af2d565c33274a5d1005847da8bc6c2a9ca15755d31570d\" returns successfully" May 14 00:01:31.296179 containerd[1514]: time="2025-05-14T00:01:31.294336887Z" level=info msg="StopPodSandbox for \"55ae23add9ba91189a4e1a09b1d3bbff71ced02668bd6504d560d8b5ba83ee60\"" May 14 00:01:31.296179 containerd[1514]: time="2025-05-14T00:01:31.294485955Z" level=info msg="TearDown network for sandbox \"55ae23add9ba91189a4e1a09b1d3bbff71ced02668bd6504d560d8b5ba83ee60\" successfully" May 14 00:01:31.296179 containerd[1514]: time="2025-05-14T00:01:31.294504608Z" level=info msg="StopPodSandbox for \"55ae23add9ba91189a4e1a09b1d3bbff71ced02668bd6504d560d8b5ba83ee60\" returns successfully" May 14 00:01:31.299772 containerd[1514]: time="2025-05-14T00:01:31.299684894Z" level=info msg="RemovePodSandbox for \"55ae23add9ba91189a4e1a09b1d3bbff71ced02668bd6504d560d8b5ba83ee60\"" May 14 00:01:31.299772 containerd[1514]: time="2025-05-14T00:01:31.299757365Z" level=info msg="Forcibly stopping sandbox \"55ae23add9ba91189a4e1a09b1d3bbff71ced02668bd6504d560d8b5ba83ee60\"" May 14 00:01:31.303770 containerd[1514]: time="2025-05-14T00:01:31.302561628Z" level=info msg="TearDown network for sandbox \"55ae23add9ba91189a4e1a09b1d3bbff71ced02668bd6504d560d8b5ba83ee60\" successfully" May 14 00:01:31.330996 containerd[1514]: time="2025-05-14T00:01:31.330888973Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"55ae23add9ba91189a4e1a09b1d3bbff71ced02668bd6504d560d8b5ba83ee60\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 14 00:01:31.330996 containerd[1514]: time="2025-05-14T00:01:31.330989744Z" level=info msg="RemovePodSandbox \"55ae23add9ba91189a4e1a09b1d3bbff71ced02668bd6504d560d8b5ba83ee60\" returns successfully" May 14 00:01:31.340315 containerd[1514]: time="2025-05-14T00:01:31.339987128Z" level=info msg="StopPodSandbox for \"5c690af5ccbc924073b6ba5b55538367b893aa0ed7d5016fd2ea824053f8d1f4\"" May 14 00:01:31.340315 containerd[1514]: time="2025-05-14T00:01:31.340170348Z" level=info msg="TearDown network for sandbox \"5c690af5ccbc924073b6ba5b55538367b893aa0ed7d5016fd2ea824053f8d1f4\" successfully" May 14 00:01:31.340315 containerd[1514]: time="2025-05-14T00:01:31.340231759Z" level=info msg="StopPodSandbox for \"5c690af5ccbc924073b6ba5b55538367b893aa0ed7d5016fd2ea824053f8d1f4\" returns successfully" May 14 00:01:31.341911 containerd[1514]: time="2025-05-14T00:01:31.340755532Z" level=info msg="RemovePodSandbox for \"5c690af5ccbc924073b6ba5b55538367b893aa0ed7d5016fd2ea824053f8d1f4\"" May 14 00:01:31.341911 containerd[1514]: time="2025-05-14T00:01:31.340795444Z" level=info msg="Forcibly stopping sandbox \"5c690af5ccbc924073b6ba5b55538367b893aa0ed7d5016fd2ea824053f8d1f4\"" May 14 00:01:31.341911 containerd[1514]: time="2025-05-14T00:01:31.340907587Z" level=info msg="TearDown network for sandbox \"5c690af5ccbc924073b6ba5b55538367b893aa0ed7d5016fd2ea824053f8d1f4\" successfully" May 14 00:01:31.461731 containerd[1514]: time="2025-05-14T00:01:31.461545846Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5c690af5ccbc924073b6ba5b55538367b893aa0ed7d5016fd2ea824053f8d1f4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 14 00:01:31.461956 containerd[1514]: time="2025-05-14T00:01:31.461931520Z" level=info msg="RemovePodSandbox \"5c690af5ccbc924073b6ba5b55538367b893aa0ed7d5016fd2ea824053f8d1f4\" returns successfully" May 14 00:01:31.462522 containerd[1514]: time="2025-05-14T00:01:31.462499214Z" level=info msg="StopPodSandbox for \"6eef35ed3243a5fb634cedb659eea2e85da49d8b06e02941ec9a0b24d1b66d9e\"" May 14 00:01:31.462739 containerd[1514]: time="2025-05-14T00:01:31.462719701Z" level=info msg="TearDown network for sandbox \"6eef35ed3243a5fb634cedb659eea2e85da49d8b06e02941ec9a0b24d1b66d9e\" successfully" May 14 00:01:31.462809 containerd[1514]: time="2025-05-14T00:01:31.462793103Z" level=info msg="StopPodSandbox for \"6eef35ed3243a5fb634cedb659eea2e85da49d8b06e02941ec9a0b24d1b66d9e\" returns successfully" May 14 00:01:31.463105 containerd[1514]: time="2025-05-14T00:01:31.463083586Z" level=info msg="RemovePodSandbox for \"6eef35ed3243a5fb634cedb659eea2e85da49d8b06e02941ec9a0b24d1b66d9e\"" May 14 00:01:31.463196 containerd[1514]: time="2025-05-14T00:01:31.463180902Z" level=info msg="Forcibly stopping sandbox \"6eef35ed3243a5fb634cedb659eea2e85da49d8b06e02941ec9a0b24d1b66d9e\"" May 14 00:01:31.463358 containerd[1514]: time="2025-05-14T00:01:31.463314793Z" level=info msg="TearDown network for sandbox \"6eef35ed3243a5fb634cedb659eea2e85da49d8b06e02941ec9a0b24d1b66d9e\" successfully" May 14 00:01:31.812925 containerd[1514]: time="2025-05-14T00:01:31.809556758Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6eef35ed3243a5fb634cedb659eea2e85da49d8b06e02941ec9a0b24d1b66d9e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 14 00:01:31.812925 containerd[1514]: time="2025-05-14T00:01:31.809649846Z" level=info msg="RemovePodSandbox \"6eef35ed3243a5fb634cedb659eea2e85da49d8b06e02941ec9a0b24d1b66d9e\" returns successfully" May 14 00:01:31.812925 containerd[1514]: time="2025-05-14T00:01:31.810196451Z" level=info msg="StopPodSandbox for \"95b7549369ae5eb8d47ce2e4bb960e50049a0147b94d0bab48064d7dba2418a3\"" May 14 00:01:31.812925 containerd[1514]: time="2025-05-14T00:01:31.810322207Z" level=info msg="TearDown network for sandbox \"95b7549369ae5eb8d47ce2e4bb960e50049a0147b94d0bab48064d7dba2418a3\" successfully" May 14 00:01:31.812925 containerd[1514]: time="2025-05-14T00:01:31.810334179Z" level=info msg="StopPodSandbox for \"95b7549369ae5eb8d47ce2e4bb960e50049a0147b94d0bab48064d7dba2418a3\" returns successfully" May 14 00:01:31.812925 containerd[1514]: time="2025-05-14T00:01:31.810646051Z" level=info msg="RemovePodSandbox for \"95b7549369ae5eb8d47ce2e4bb960e50049a0147b94d0bab48064d7dba2418a3\"" May 14 00:01:31.812925 containerd[1514]: time="2025-05-14T00:01:31.810666758Z" level=info msg="Forcibly stopping sandbox \"95b7549369ae5eb8d47ce2e4bb960e50049a0147b94d0bab48064d7dba2418a3\"" May 14 00:01:31.812925 containerd[1514]: time="2025-05-14T00:01:31.810749788Z" level=info msg="TearDown network for sandbox \"95b7549369ae5eb8d47ce2e4bb960e50049a0147b94d0bab48064d7dba2418a3\" successfully" May 14 00:01:31.859666 containerd[1514]: time="2025-05-14T00:01:31.858145595Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"95b7549369ae5eb8d47ce2e4bb960e50049a0147b94d0bab48064d7dba2418a3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 14 00:01:31.859666 containerd[1514]: time="2025-05-14T00:01:31.858274928Z" level=info msg="RemovePodSandbox \"95b7549369ae5eb8d47ce2e4bb960e50049a0147b94d0bab48064d7dba2418a3\" returns successfully" May 14 00:01:31.859666 containerd[1514]: time="2025-05-14T00:01:31.858803902Z" level=info msg="StopPodSandbox for \"350207304c817727e4a46a415bd97b5570f1190c746ee0b5a03aeaeb3ba5b6e1\"" May 14 00:01:31.859666 containerd[1514]: time="2025-05-14T00:01:31.859541741Z" level=info msg="TearDown network for sandbox \"350207304c817727e4a46a415bd97b5570f1190c746ee0b5a03aeaeb3ba5b6e1\" successfully" May 14 00:01:31.859666 containerd[1514]: time="2025-05-14T00:01:31.859586852Z" level=info msg="StopPodSandbox for \"350207304c817727e4a46a415bd97b5570f1190c746ee0b5a03aeaeb3ba5b6e1\" returns successfully" May 14 00:01:31.860024 containerd[1514]: time="2025-05-14T00:01:31.859941301Z" level=info msg="RemovePodSandbox for \"350207304c817727e4a46a415bd97b5570f1190c746ee0b5a03aeaeb3ba5b6e1\"" May 14 00:01:31.860024 containerd[1514]: time="2025-05-14T00:01:31.859977917Z" level=info msg="Forcibly stopping sandbox \"350207304c817727e4a46a415bd97b5570f1190c746ee0b5a03aeaeb3ba5b6e1\"" May 14 00:01:31.860923 containerd[1514]: time="2025-05-14T00:01:31.860078768Z" level=info msg="TearDown network for sandbox \"350207304c817727e4a46a415bd97b5570f1190c746ee0b5a03aeaeb3ba5b6e1\" successfully" May 14 00:01:31.883568 containerd[1514]: time="2025-05-14T00:01:31.882841918Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"350207304c817727e4a46a415bd97b5570f1190c746ee0b5a03aeaeb3ba5b6e1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 14 00:01:31.883568 containerd[1514]: time="2025-05-14T00:01:31.882920208Z" level=info msg="RemovePodSandbox \"350207304c817727e4a46a415bd97b5570f1190c746ee0b5a03aeaeb3ba5b6e1\" returns successfully" May 14 00:01:31.886456 containerd[1514]: time="2025-05-14T00:01:31.883904903Z" level=info msg="StopPodSandbox for \"204aca7e8811caa25cbc0e4ca9484f9e06527871260baa5fc5a231f03375f72a\"" May 14 00:01:31.886456 containerd[1514]: time="2025-05-14T00:01:31.884041328Z" level=info msg="TearDown network for sandbox \"204aca7e8811caa25cbc0e4ca9484f9e06527871260baa5fc5a231f03375f72a\" successfully" May 14 00:01:31.886456 containerd[1514]: time="2025-05-14T00:01:31.884055253Z" level=info msg="StopPodSandbox for \"204aca7e8811caa25cbc0e4ca9484f9e06527871260baa5fc5a231f03375f72a\" returns successfully" May 14 00:01:31.886456 containerd[1514]: time="2025-05-14T00:01:31.884282844Z" level=info msg="RemovePodSandbox for \"204aca7e8811caa25cbc0e4ca9484f9e06527871260baa5fc5a231f03375f72a\"" May 14 00:01:31.886456 containerd[1514]: time="2025-05-14T00:01:31.884300566Z" level=info msg="Forcibly stopping sandbox \"204aca7e8811caa25cbc0e4ca9484f9e06527871260baa5fc5a231f03375f72a\"" May 14 00:01:31.886456 containerd[1514]: time="2025-05-14T00:01:31.884378376Z" level=info msg="TearDown network for sandbox \"204aca7e8811caa25cbc0e4ca9484f9e06527871260baa5fc5a231f03375f72a\" successfully" May 14 00:01:31.900386 containerd[1514]: time="2025-05-14T00:01:31.898310171Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"204aca7e8811caa25cbc0e4ca9484f9e06527871260baa5fc5a231f03375f72a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 14 00:01:31.900386 containerd[1514]: time="2025-05-14T00:01:31.898376560Z" level=info msg="RemovePodSandbox \"204aca7e8811caa25cbc0e4ca9484f9e06527871260baa5fc5a231f03375f72a\" returns successfully" May 14 00:01:33.314178 systemd[1]: Started sshd@15-10.0.0.99:22-10.0.0.1:43180.service - OpenSSH per-connection server daemon (10.0.0.1:43180). May 14 00:01:33.377107 sshd[5518]: Accepted publickey for core from 10.0.0.1 port 43180 ssh2: RSA SHA256:2Vys6akM3bwlRlykLnopippME/f1tLQVgpTw56u59EA May 14 00:01:33.379459 sshd-session[5518]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:01:33.386294 systemd-logind[1492]: New session 16 of user core. May 14 00:01:33.397100 systemd[1]: Started session-16.scope - Session 16 of User core. May 14 00:01:33.537332 sshd[5520]: Connection closed by 10.0.0.1 port 43180 May 14 00:01:33.537757 sshd-session[5518]: pam_unix(sshd:session): session closed for user core May 14 00:01:33.542438 systemd[1]: sshd@15-10.0.0.99:22-10.0.0.1:43180.service: Deactivated successfully. May 14 00:01:33.544640 systemd[1]: session-16.scope: Deactivated successfully. May 14 00:01:33.545823 systemd-logind[1492]: Session 16 logged out. Waiting for processes to exit. May 14 00:01:33.547977 systemd-logind[1492]: Removed session 16. 
May 14 00:01:33.830630 containerd[1514]: time="2025-05-14T00:01:33.830558159Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:33.931338 containerd[1514]: time="2025-05-14T00:01:33.931235808Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=43021437" May 14 00:01:33.946171 containerd[1514]: time="2025-05-14T00:01:33.946117044Z" level=info msg="ImageCreate event name:\"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:33.963811 containerd[1514]: time="2025-05-14T00:01:33.963729221Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:33.964503 containerd[1514]: time="2025-05-14T00:01:33.964452300Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 6.699957188s" May 14 00:01:33.964503 containerd[1514]: time="2025-05-14T00:01:33.964496470Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 14 00:01:33.966344 containerd[1514]: time="2025-05-14T00:01:33.966287336Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 14 00:01:33.967189 containerd[1514]: time="2025-05-14T00:01:33.967140482Z" level=info msg="CreateContainer within sandbox 
\"6fe181b207c502b6f36855a2d01f5325c58802c750ecee141d93c3a9b56350c1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 14 00:01:34.235600 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3775445776.mount: Deactivated successfully. May 14 00:01:34.511353 containerd[1514]: time="2025-05-14T00:01:34.511187136Z" level=info msg="CreateContainer within sandbox \"6fe181b207c502b6f36855a2d01f5325c58802c750ecee141d93c3a9b56350c1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7868b2762acc7ce76f1a462bc8deb5890e11105dd11e5515e43e312f4c3d0df2\"" May 14 00:01:34.512103 containerd[1514]: time="2025-05-14T00:01:34.512072935Z" level=info msg="StartContainer for \"7868b2762acc7ce76f1a462bc8deb5890e11105dd11e5515e43e312f4c3d0df2\"" May 14 00:01:34.545973 systemd[1]: Started cri-containerd-7868b2762acc7ce76f1a462bc8deb5890e11105dd11e5515e43e312f4c3d0df2.scope - libcontainer container 7868b2762acc7ce76f1a462bc8deb5890e11105dd11e5515e43e312f4c3d0df2. 
May 14 00:01:35.431352 containerd[1514]: time="2025-05-14T00:01:35.431250421Z" level=info msg="StartContainer for \"7868b2762acc7ce76f1a462bc8deb5890e11105dd11e5515e43e312f4c3d0df2\" returns successfully" May 14 00:01:35.591135 kubelet[2707]: I0514 00:01:35.590810 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-897df95dc-k8qvz" podStartSLOduration=36.889132687 podStartE2EDuration="43.590786562s" podCreationTimestamp="2025-05-14 00:00:52 +0000 UTC" firstStartedPulling="2025-05-14 00:01:27.26395456 +0000 UTC m=+59.208185465" lastFinishedPulling="2025-05-14 00:01:33.965608425 +0000 UTC m=+65.909839340" observedRunningTime="2025-05-14 00:01:35.590224428 +0000 UTC m=+67.534455333" watchObservedRunningTime="2025-05-14 00:01:35.590786562 +0000 UTC m=+67.535017467" May 14 00:01:36.579812 kubelet[2707]: I0514 00:01:36.579777 2707 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 00:01:37.485474 containerd[1514]: time="2025-05-14T00:01:37.485383200Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:37.638337 containerd[1514]: time="2025-05-14T00:01:37.638230070Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=34789138" May 14 00:01:37.728008 containerd[1514]: time="2025-05-14T00:01:37.727937882Z" level=info msg="ImageCreate event name:\"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:37.746868 containerd[1514]: time="2025-05-14T00:01:37.746720322Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:37.747721 containerd[1514]: 
time="2025-05-14T00:01:37.747651428Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"36281728\" in 3.781320211s" May 14 00:01:37.747721 containerd[1514]: time="2025-05-14T00:01:37.747719072Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\"" May 14 00:01:37.748912 containerd[1514]: time="2025-05-14T00:01:37.748862214Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 14 00:01:37.758695 containerd[1514]: time="2025-05-14T00:01:37.758616328Z" level=info msg="CreateContainer within sandbox \"12131ebf0e29745f861892d752ae721083dd0bdbdc1354a5297f72c2c9a19c97\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 14 00:01:37.782428 containerd[1514]: time="2025-05-14T00:01:37.782372235Z" level=info msg="CreateContainer within sandbox \"12131ebf0e29745f861892d752ae721083dd0bdbdc1354a5297f72c2c9a19c97\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a3d2ed861a2ed6adf0feb6e71d1d515a351f722f2870039d1049a63c66dbc600\"" May 14 00:01:37.783036 containerd[1514]: time="2025-05-14T00:01:37.783004562Z" level=info msg="StartContainer for \"a3d2ed861a2ed6adf0feb6e71d1d515a351f722f2870039d1049a63c66dbc600\"" May 14 00:01:37.816024 systemd[1]: Started cri-containerd-a3d2ed861a2ed6adf0feb6e71d1d515a351f722f2870039d1049a63c66dbc600.scope - libcontainer container a3d2ed861a2ed6adf0feb6e71d1d515a351f722f2870039d1049a63c66dbc600. 
May 14 00:01:37.873644 containerd[1514]: time="2025-05-14T00:01:37.873568000Z" level=info msg="StartContainer for \"a3d2ed861a2ed6adf0feb6e71d1d515a351f722f2870039d1049a63c66dbc600\" returns successfully" May 14 00:01:38.155421 containerd[1514]: time="2025-05-14T00:01:38.155360582Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:38.161429 containerd[1514]: time="2025-05-14T00:01:38.160744509Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" May 14 00:01:38.164555 containerd[1514]: time="2025-05-14T00:01:38.164489511Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 415.584078ms" May 14 00:01:38.164555 containerd[1514]: time="2025-05-14T00:01:38.164547868Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 14 00:01:38.166166 containerd[1514]: time="2025-05-14T00:01:38.166144016Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 14 00:01:38.166996 containerd[1514]: time="2025-05-14T00:01:38.166945537Z" level=info msg="CreateContainer within sandbox \"75f4cf74eba5e2b746bbd60f0622b505916f63c596cff04d3f57b03736cd1192\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 14 00:01:38.192848 containerd[1514]: time="2025-05-14T00:01:38.192779506Z" level=info msg="CreateContainer within sandbox \"75f4cf74eba5e2b746bbd60f0622b505916f63c596cff04d3f57b03736cd1192\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns 
container id \"8453a54ce7ace56cffd03a59e29cbfda3aa197131f9138dfa63bd2e39fa12d49\"" May 14 00:01:38.193500 containerd[1514]: time="2025-05-14T00:01:38.193449165Z" level=info msg="StartContainer for \"8453a54ce7ace56cffd03a59e29cbfda3aa197131f9138dfa63bd2e39fa12d49\"" May 14 00:01:38.227979 systemd[1]: Started cri-containerd-8453a54ce7ace56cffd03a59e29cbfda3aa197131f9138dfa63bd2e39fa12d49.scope - libcontainer container 8453a54ce7ace56cffd03a59e29cbfda3aa197131f9138dfa63bd2e39fa12d49. May 14 00:01:38.375599 containerd[1514]: time="2025-05-14T00:01:38.375543335Z" level=info msg="StartContainer for \"8453a54ce7ace56cffd03a59e29cbfda3aa197131f9138dfa63bd2e39fa12d49\" returns successfully" May 14 00:01:38.561054 systemd[1]: Started sshd@16-10.0.0.99:22-10.0.0.1:52142.service - OpenSSH per-connection server daemon (10.0.0.1:52142). May 14 00:01:38.617918 sshd[5668]: Accepted publickey for core from 10.0.0.1 port 52142 ssh2: RSA SHA256:2Vys6akM3bwlRlykLnopippME/f1tLQVgpTw56u59EA May 14 00:01:38.619273 sshd-session[5668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:01:38.623813 kubelet[2707]: I0514 00:01:38.623686 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-897df95dc-b4gq4" podStartSLOduration=35.871979242 podStartE2EDuration="46.623646636s" podCreationTimestamp="2025-05-14 00:00:52 +0000 UTC" firstStartedPulling="2025-05-14 00:01:27.413711939 +0000 UTC m=+59.357942844" lastFinishedPulling="2025-05-14 00:01:38.165379323 +0000 UTC m=+70.109610238" observedRunningTime="2025-05-14 00:01:38.600546443 +0000 UTC m=+70.544777348" watchObservedRunningTime="2025-05-14 00:01:38.623646636 +0000 UTC m=+70.567877541" May 14 00:01:38.640386 systemd[1]: Started session-17.scope - Session 17 of User core. May 14 00:01:38.641061 systemd-logind[1492]: New session 17 of user core. 
May 14 00:01:38.675399 kubelet[2707]: I0514 00:01:38.675317 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-54f4f89fbf-94z7j" podStartSLOduration=35.275870202 podStartE2EDuration="45.67529639s" podCreationTimestamp="2025-05-14 00:00:53 +0000 UTC" firstStartedPulling="2025-05-14 00:01:27.349208178 +0000 UTC m=+59.293439083" lastFinishedPulling="2025-05-14 00:01:37.748634366 +0000 UTC m=+69.692865271" observedRunningTime="2025-05-14 00:01:38.624805211 +0000 UTC m=+70.569036117" watchObservedRunningTime="2025-05-14 00:01:38.67529639 +0000 UTC m=+70.619527296" May 14 00:01:38.798874 sshd[5688]: Connection closed by 10.0.0.1 port 52142 May 14 00:01:38.799359 sshd-session[5668]: pam_unix(sshd:session): session closed for user core May 14 00:01:38.804707 systemd[1]: sshd@16-10.0.0.99:22-10.0.0.1:52142.service: Deactivated successfully. May 14 00:01:38.807437 systemd[1]: session-17.scope: Deactivated successfully. May 14 00:01:38.808287 systemd-logind[1492]: Session 17 logged out. Waiting for processes to exit. May 14 00:01:38.809364 systemd-logind[1492]: Removed session 17. 
May 14 00:01:41.565533 containerd[1514]: time="2025-05-14T00:01:41.565380302Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:41.567300 containerd[1514]: time="2025-05-14T00:01:41.567223717Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898" May 14 00:01:41.569187 containerd[1514]: time="2025-05-14T00:01:41.569131921Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:41.572550 containerd[1514]: time="2025-05-14T00:01:41.572469965Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:41.573218 containerd[1514]: time="2025-05-14T00:01:41.573167784Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 3.406882198s" May 14 00:01:41.573218 containerd[1514]: time="2025-05-14T00:01:41.573211134Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" May 14 00:01:41.582302 containerd[1514]: time="2025-05-14T00:01:41.582206847Z" level=info msg="CreateContainer within sandbox \"e9678490ac7db543ddd97c51f72b79bca0040f003f255ae33e9b8858f99f976e\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 14 00:01:41.615743 containerd[1514]: time="2025-05-14T00:01:41.615655657Z" level=info msg="CreateContainer within 
sandbox \"e9678490ac7db543ddd97c51f72b79bca0040f003f255ae33e9b8858f99f976e\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"c2e11df986e40e78ec4ef919efeeb608cfbb0d0bac21237bb9d353642452497a\"" May 14 00:01:41.616454 containerd[1514]: time="2025-05-14T00:01:41.616402477Z" level=info msg="StartContainer for \"c2e11df986e40e78ec4ef919efeeb608cfbb0d0bac21237bb9d353642452497a\"" May 14 00:01:41.648112 systemd[1]: run-containerd-runc-k8s.io-c2e11df986e40e78ec4ef919efeeb608cfbb0d0bac21237bb9d353642452497a-runc.Mqxw52.mount: Deactivated successfully. May 14 00:01:41.660893 systemd[1]: Started cri-containerd-c2e11df986e40e78ec4ef919efeeb608cfbb0d0bac21237bb9d353642452497a.scope - libcontainer container c2e11df986e40e78ec4ef919efeeb608cfbb0d0bac21237bb9d353642452497a. May 14 00:01:43.066560 containerd[1514]: time="2025-05-14T00:01:43.066485755Z" level=info msg="StartContainer for \"c2e11df986e40e78ec4ef919efeeb608cfbb0d0bac21237bb9d353642452497a\" returns successfully" May 14 00:01:43.067616 containerd[1514]: time="2025-05-14T00:01:43.067589312Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 14 00:01:43.812831 systemd[1]: Started sshd@17-10.0.0.99:22-10.0.0.1:52158.service - OpenSSH per-connection server daemon (10.0.0.1:52158). May 14 00:01:43.979269 sshd[5774]: Accepted publickey for core from 10.0.0.1 port 52158 ssh2: RSA SHA256:2Vys6akM3bwlRlykLnopippME/f1tLQVgpTw56u59EA May 14 00:01:43.981021 sshd-session[5774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:01:43.985282 systemd-logind[1492]: New session 18 of user core. May 14 00:01:43.992842 systemd[1]: Started session-18.scope - Session 18 of User core. 
May 14 00:01:44.133235 sshd[5776]: Connection closed by 10.0.0.1 port 52158 May 14 00:01:44.133657 sshd-session[5774]: pam_unix(sshd:session): session closed for user core May 14 00:01:44.138583 systemd[1]: sshd@17-10.0.0.99:22-10.0.0.1:52158.service: Deactivated successfully. May 14 00:01:44.141077 systemd[1]: session-18.scope: Deactivated successfully. May 14 00:01:44.141848 systemd-logind[1492]: Session 18 logged out. Waiting for processes to exit. May 14 00:01:44.143242 systemd-logind[1492]: Removed session 18. May 14 00:01:46.081785 containerd[1514]: time="2025-05-14T00:01:46.081668348Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:46.083244 containerd[1514]: time="2025-05-14T00:01:46.083092272Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773" May 14 00:01:46.085462 containerd[1514]: time="2025-05-14T00:01:46.085379947Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:46.088531 containerd[1514]: time="2025-05-14T00:01:46.088462865Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:01:46.089389 containerd[1514]: time="2025-05-14T00:01:46.089329310Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 
3.021699193s" May 14 00:01:46.089389 containerd[1514]: time="2025-05-14T00:01:46.089384744Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\"" May 14 00:01:46.092133 containerd[1514]: time="2025-05-14T00:01:46.092072785Z" level=info msg="CreateContainer within sandbox \"e9678490ac7db543ddd97c51f72b79bca0040f003f255ae33e9b8858f99f976e\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 14 00:01:46.113189 containerd[1514]: time="2025-05-14T00:01:46.113111508Z" level=info msg="CreateContainer within sandbox \"e9678490ac7db543ddd97c51f72b79bca0040f003f255ae33e9b8858f99f976e\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"ee9d84b16a83bedbd49ce42c46f2fd2119682ac62bfe1a9c8158b70618f577a3\"" May 14 00:01:46.113847 containerd[1514]: time="2025-05-14T00:01:46.113800993Z" level=info msg="StartContainer for \"ee9d84b16a83bedbd49ce42c46f2fd2119682ac62bfe1a9c8158b70618f577a3\"" May 14 00:01:46.153013 systemd[1]: Started cri-containerd-ee9d84b16a83bedbd49ce42c46f2fd2119682ac62bfe1a9c8158b70618f577a3.scope - libcontainer container ee9d84b16a83bedbd49ce42c46f2fd2119682ac62bfe1a9c8158b70618f577a3. 
May 14 00:01:46.198883 containerd[1514]: time="2025-05-14T00:01:46.198821577Z" level=info msg="StartContainer for \"ee9d84b16a83bedbd49ce42c46f2fd2119682ac62bfe1a9c8158b70618f577a3\" returns successfully" May 14 00:01:46.262989 kubelet[2707]: I0514 00:01:46.262929 2707 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 14 00:01:46.262989 kubelet[2707]: I0514 00:01:46.262974 2707 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 14 00:01:49.146604 systemd[1]: Started sshd@18-10.0.0.99:22-10.0.0.1:53852.service - OpenSSH per-connection server daemon (10.0.0.1:53852). May 14 00:01:49.192435 sshd[5833]: Accepted publickey for core from 10.0.0.1 port 53852 ssh2: RSA SHA256:2Vys6akM3bwlRlykLnopippME/f1tLQVgpTw56u59EA May 14 00:01:49.195047 sshd-session[5833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:01:49.199343 systemd-logind[1492]: New session 19 of user core. May 14 00:01:49.206877 systemd[1]: Started session-19.scope - Session 19 of User core. May 14 00:01:49.461217 sshd[5837]: Connection closed by 10.0.0.1 port 53852 May 14 00:01:49.461502 sshd-session[5833]: pam_unix(sshd:session): session closed for user core May 14 00:01:49.465535 systemd[1]: sshd@18-10.0.0.99:22-10.0.0.1:53852.service: Deactivated successfully. May 14 00:01:49.467773 systemd[1]: session-19.scope: Deactivated successfully. May 14 00:01:49.468877 systemd-logind[1492]: Session 19 logged out. Waiting for processes to exit. May 14 00:01:49.469975 systemd-logind[1492]: Removed session 19. 
May 14 00:01:52.206178 kubelet[2707]: I0514 00:01:52.205200 2707 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 14 00:01:52.264751 kubelet[2707]: I0514 00:01:52.263776 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-s8nw4" podStartSLOduration=40.603073247 podStartE2EDuration="59.263753952s" podCreationTimestamp="2025-05-14 00:00:53 +0000 UTC" firstStartedPulling="2025-05-14 00:01:27.429584149 +0000 UTC m=+59.373815054" lastFinishedPulling="2025-05-14 00:01:46.090264844 +0000 UTC m=+78.034495759" observedRunningTime="2025-05-14 00:01:47.406119265 +0000 UTC m=+79.350350180" watchObservedRunningTime="2025-05-14 00:01:52.263753952 +0000 UTC m=+84.207984867"
May 14 00:01:52.940283 kubelet[2707]: E0514 00:01:52.940235 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:01:54.485229 systemd[1]: Started sshd@19-10.0.0.99:22-10.0.0.1:53862.service - OpenSSH per-connection server daemon (10.0.0.1:53862).
May 14 00:01:54.532491 sshd[5880]: Accepted publickey for core from 10.0.0.1 port 53862 ssh2: RSA SHA256:2Vys6akM3bwlRlykLnopippME/f1tLQVgpTw56u59EA
May 14 00:01:54.534266 sshd-session[5880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:01:54.539193 systemd-logind[1492]: New session 20 of user core.
May 14 00:01:54.547902 systemd[1]: Started session-20.scope - Session 20 of User core.
May 14 00:01:54.686026 sshd[5882]: Connection closed by 10.0.0.1 port 53862
May 14 00:01:54.686606 sshd-session[5880]: pam_unix(sshd:session): session closed for user core
May 14 00:01:54.696395 systemd[1]: sshd@19-10.0.0.99:22-10.0.0.1:53862.service: Deactivated successfully.
May 14 00:01:54.698553 systemd[1]: session-20.scope: Deactivated successfully.
May 14 00:01:54.700427 systemd-logind[1492]: Session 20 logged out. Waiting for processes to exit.
May 14 00:01:54.707028 systemd[1]: Started sshd@20-10.0.0.99:22-10.0.0.1:53870.service - OpenSSH per-connection server daemon (10.0.0.1:53870).
May 14 00:01:54.708526 systemd-logind[1492]: Removed session 20.
May 14 00:01:54.746882 sshd[5895]: Accepted publickey for core from 10.0.0.1 port 53870 ssh2: RSA SHA256:2Vys6akM3bwlRlykLnopippME/f1tLQVgpTw56u59EA
May 14 00:01:54.748528 sshd-session[5895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:01:54.754224 systemd-logind[1492]: New session 21 of user core.
May 14 00:01:54.764030 systemd[1]: Started session-21.scope - Session 21 of User core.
May 14 00:01:55.202204 sshd[5898]: Connection closed by 10.0.0.1 port 53870
May 14 00:01:55.204204 sshd-session[5895]: pam_unix(sshd:session): session closed for user core
May 14 00:01:55.214615 systemd[1]: sshd@20-10.0.0.99:22-10.0.0.1:53870.service: Deactivated successfully.
May 14 00:01:55.218089 systemd[1]: session-21.scope: Deactivated successfully.
May 14 00:01:55.219661 systemd-logind[1492]: Session 21 logged out. Waiting for processes to exit.
May 14 00:01:55.227394 systemd[1]: Started sshd@21-10.0.0.99:22-10.0.0.1:53874.service - OpenSSH per-connection server daemon (10.0.0.1:53874).
May 14 00:01:55.228419 systemd-logind[1492]: Removed session 21.
May 14 00:01:55.286185 sshd[5908]: Accepted publickey for core from 10.0.0.1 port 53874 ssh2: RSA SHA256:2Vys6akM3bwlRlykLnopippME/f1tLQVgpTw56u59EA
May 14 00:01:55.288943 sshd-session[5908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:01:55.299572 systemd-logind[1492]: New session 22 of user core.
May 14 00:01:55.307277 systemd[1]: Started session-22.scope - Session 22 of User core.
May 14 00:02:00.149302 kubelet[2707]: E0514 00:02:00.147332 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:02:00.149302 kubelet[2707]: E0514 00:02:00.147558 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:02:00.717418 sshd[5911]: Connection closed by 10.0.0.1 port 53874
May 14 00:02:00.718466 sshd-session[5908]: pam_unix(sshd:session): session closed for user core
May 14 00:02:00.727292 systemd[1]: sshd@21-10.0.0.99:22-10.0.0.1:53874.service: Deactivated successfully.
May 14 00:02:00.729309 systemd[1]: session-22.scope: Deactivated successfully.
May 14 00:02:00.729540 systemd[1]: session-22.scope: Consumed 888ms CPU time, 70.9M memory peak.
May 14 00:02:00.730099 systemd-logind[1492]: Session 22 logged out. Waiting for processes to exit.
May 14 00:02:00.738404 systemd[1]: Started sshd@22-10.0.0.99:22-10.0.0.1:56644.service - OpenSSH per-connection server daemon (10.0.0.1:56644).
May 14 00:02:00.741569 systemd-logind[1492]: Removed session 22.
May 14 00:02:00.805730 sshd[5950]: Accepted publickey for core from 10.0.0.1 port 56644 ssh2: RSA SHA256:2Vys6akM3bwlRlykLnopippME/f1tLQVgpTw56u59EA
May 14 00:02:00.808071 sshd-session[5950]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:02:00.815244 systemd-logind[1492]: New session 23 of user core.
May 14 00:02:00.825046 systemd[1]: Started session-23.scope - Session 23 of User core.
May 14 00:02:01.147422 kubelet[2707]: E0514 00:02:01.147370 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:02:01.148566 kubelet[2707]: E0514 00:02:01.148510 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 00:02:01.280378 sshd[5953]: Connection closed by 10.0.0.1 port 56644
May 14 00:02:01.281452 sshd-session[5950]: pam_unix(sshd:session): session closed for user core
May 14 00:02:01.295286 systemd[1]: sshd@22-10.0.0.99:22-10.0.0.1:56644.service: Deactivated successfully.
May 14 00:02:01.297514 systemd[1]: session-23.scope: Deactivated successfully.
May 14 00:02:01.298294 systemd-logind[1492]: Session 23 logged out. Waiting for processes to exit.
May 14 00:02:01.309528 systemd[1]: Started sshd@23-10.0.0.99:22-10.0.0.1:56656.service - OpenSSH per-connection server daemon (10.0.0.1:56656).
May 14 00:02:01.311201 systemd-logind[1492]: Removed session 23.
May 14 00:02:01.356350 sshd[5963]: Accepted publickey for core from 10.0.0.1 port 56656 ssh2: RSA SHA256:2Vys6akM3bwlRlykLnopippME/f1tLQVgpTw56u59EA
May 14 00:02:01.358144 sshd-session[5963]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:02:01.363861 systemd-logind[1492]: New session 24 of user core.
May 14 00:02:01.370864 systemd[1]: Started session-24.scope - Session 24 of User core.
May 14 00:02:01.492718 sshd[5966]: Connection closed by 10.0.0.1 port 56656
May 14 00:02:01.494544 sshd-session[5963]: pam_unix(sshd:session): session closed for user core
May 14 00:02:01.499390 systemd[1]: sshd@23-10.0.0.99:22-10.0.0.1:56656.service: Deactivated successfully.
May 14 00:02:01.502066 systemd[1]: session-24.scope: Deactivated successfully.
May 14 00:02:01.502944 systemd-logind[1492]: Session 24 logged out. Waiting for processes to exit.
May 14 00:02:01.504026 systemd-logind[1492]: Removed session 24.
May 14 00:02:06.515241 systemd[1]: Started sshd@24-10.0.0.99:22-10.0.0.1:53618.service - OpenSSH per-connection server daemon (10.0.0.1:53618).
May 14 00:02:06.568298 sshd[5980]: Accepted publickey for core from 10.0.0.1 port 53618 ssh2: RSA SHA256:2Vys6akM3bwlRlykLnopippME/f1tLQVgpTw56u59EA
May 14 00:02:06.571142 sshd-session[5980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:02:06.577378 systemd-logind[1492]: New session 25 of user core.
May 14 00:02:06.587964 systemd[1]: Started session-25.scope - Session 25 of User core.
May 14 00:02:06.722158 sshd[5982]: Connection closed by 10.0.0.1 port 53618
May 14 00:02:06.722597 sshd-session[5980]: pam_unix(sshd:session): session closed for user core
May 14 00:02:06.728189 systemd[1]: sshd@24-10.0.0.99:22-10.0.0.1:53618.service: Deactivated successfully.
May 14 00:02:06.730558 systemd[1]: session-25.scope: Deactivated successfully.
May 14 00:02:06.731334 systemd-logind[1492]: Session 25 logged out. Waiting for processes to exit.
May 14 00:02:06.732402 systemd-logind[1492]: Removed session 25.
May 14 00:02:11.741663 systemd[1]: Started sshd@25-10.0.0.99:22-10.0.0.1:53634.service - OpenSSH per-connection server daemon (10.0.0.1:53634).
May 14 00:02:11.784072 sshd[6025]: Accepted publickey for core from 10.0.0.1 port 53634 ssh2: RSA SHA256:2Vys6akM3bwlRlykLnopippME/f1tLQVgpTw56u59EA
May 14 00:02:11.785878 sshd-session[6025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:02:11.791269 systemd-logind[1492]: New session 26 of user core.
May 14 00:02:11.796975 systemd[1]: Started session-26.scope - Session 26 of User core.
May 14 00:02:11.936804 sshd[6027]: Connection closed by 10.0.0.1 port 53634
May 14 00:02:11.937297 sshd-session[6025]: pam_unix(sshd:session): session closed for user core
May 14 00:02:11.942004 systemd[1]: sshd@25-10.0.0.99:22-10.0.0.1:53634.service: Deactivated successfully.
May 14 00:02:11.944592 systemd[1]: session-26.scope: Deactivated successfully.
May 14 00:02:11.945564 systemd-logind[1492]: Session 26 logged out. Waiting for processes to exit.
May 14 00:02:11.947184 systemd-logind[1492]: Removed session 26.
May 14 00:02:16.951628 systemd[1]: Started sshd@26-10.0.0.99:22-10.0.0.1:40988.service - OpenSSH per-connection server daemon (10.0.0.1:40988).
May 14 00:02:17.004867 sshd[6043]: Accepted publickey for core from 10.0.0.1 port 40988 ssh2: RSA SHA256:2Vys6akM3bwlRlykLnopippME/f1tLQVgpTw56u59EA
May 14 00:02:17.007100 sshd-session[6043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:02:17.013654 systemd-logind[1492]: New session 27 of user core.
May 14 00:02:17.020956 systemd[1]: Started session-27.scope - Session 27 of User core.
May 14 00:02:17.157330 sshd[6045]: Connection closed by 10.0.0.1 port 40988
May 14 00:02:17.157825 sshd-session[6043]: pam_unix(sshd:session): session closed for user core
May 14 00:02:17.164030 systemd[1]: sshd@26-10.0.0.99:22-10.0.0.1:40988.service: Deactivated successfully.
May 14 00:02:17.166806 systemd[1]: session-27.scope: Deactivated successfully.
May 14 00:02:17.167887 systemd-logind[1492]: Session 27 logged out. Waiting for processes to exit.
May 14 00:02:17.170059 systemd-logind[1492]: Removed session 27.
May 14 00:02:22.171480 systemd[1]: Started sshd@27-10.0.0.99:22-10.0.0.1:40998.service - OpenSSH per-connection server daemon (10.0.0.1:40998).
May 14 00:02:22.214474 sshd[6058]: Accepted publickey for core from 10.0.0.1 port 40998 ssh2: RSA SHA256:2Vys6akM3bwlRlykLnopippME/f1tLQVgpTw56u59EA
May 14 00:02:22.216628 sshd-session[6058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:02:22.222150 systemd-logind[1492]: New session 28 of user core.
May 14 00:02:22.225816 systemd[1]: Started session-28.scope - Session 28 of User core.
May 14 00:02:22.353626 sshd[6061]: Connection closed by 10.0.0.1 port 40998
May 14 00:02:22.354057 sshd-session[6058]: pam_unix(sshd:session): session closed for user core
May 14 00:02:22.358753 systemd[1]: sshd@27-10.0.0.99:22-10.0.0.1:40998.service: Deactivated successfully.
May 14 00:02:22.361579 systemd[1]: session-28.scope: Deactivated successfully.
May 14 00:02:22.362484 systemd-logind[1492]: Session 28 logged out. Waiting for processes to exit.
May 14 00:02:22.363494 systemd-logind[1492]: Removed session 28.
May 14 00:02:27.378030 systemd[1]: Started sshd@28-10.0.0.99:22-10.0.0.1:38584.service - OpenSSH per-connection server daemon (10.0.0.1:38584).
May 14 00:02:27.424875 sshd[6095]: Accepted publickey for core from 10.0.0.1 port 38584 ssh2: RSA SHA256:2Vys6akM3bwlRlykLnopippME/f1tLQVgpTw56u59EA
May 14 00:02:27.426802 sshd-session[6095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:02:27.432635 systemd-logind[1492]: New session 29 of user core.
May 14 00:02:27.437965 systemd[1]: Started session-29.scope - Session 29 of User core.
May 14 00:02:27.560818 sshd[6099]: Connection closed by 10.0.0.1 port 38584
May 14 00:02:27.561201 sshd-session[6095]: pam_unix(sshd:session): session closed for user core
May 14 00:02:27.565639 systemd[1]: sshd@28-10.0.0.99:22-10.0.0.1:38584.service: Deactivated successfully.
May 14 00:02:27.568022 systemd[1]: session-29.scope: Deactivated successfully.
May 14 00:02:27.568831 systemd-logind[1492]: Session 29 logged out. Waiting for processes to exit.
May 14 00:02:27.570222 systemd-logind[1492]: Removed session 29.
May 14 00:02:29.147210 kubelet[2707]: E0514 00:02:29.147160 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"