May 14 05:09:46.825254 kernel: Linux version 6.12.20-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed May 14 03:42:56 -00 2025
May 14 05:09:46.825275 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bd5d20a479abde3485dc2e7b97a54e804895b9926289ae86f84794bef32a40f3
May 14 05:09:46.825286 kernel: BIOS-provided physical RAM map:
May 14 05:09:46.825292 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
May 14 05:09:46.825299 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
May 14 05:09:46.825305 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
May 14 05:09:46.825328 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
May 14 05:09:46.825335 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
May 14 05:09:46.825344 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
May 14 05:09:46.825350 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
May 14 05:09:46.825356 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
May 14 05:09:46.825363 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
May 14 05:09:46.825369 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
May 14 05:09:46.825376 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
May 14 05:09:46.825386 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
May 14 05:09:46.825393 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
May 14 05:09:46.825400 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
May 14 05:09:46.825407 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
May 14 05:09:46.825414 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
May 14 05:09:46.825421 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
May 14 05:09:46.825428 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
May 14 05:09:46.825435 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
May 14 05:09:46.825442 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
May 14 05:09:46.825448 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 14 05:09:46.825455 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
May 14 05:09:46.825464 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 14 05:09:46.825471 kernel: NX (Execute Disable) protection: active
May 14 05:09:46.825478 kernel: APIC: Static calls initialized
May 14 05:09:46.825485 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
May 14 05:09:46.825492 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
May 14 05:09:46.825499 kernel: extended physical RAM map:
May 14 05:09:46.825506 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
May 14 05:09:46.825513 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
May 14 05:09:46.825520 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
May 14 05:09:46.825527 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
May 14 05:09:46.825534 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
May 14 05:09:46.825543 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
May 14 05:09:46.825550 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
May 14 05:09:46.825557 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
May 14 05:09:46.825564 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
May 14 05:09:46.825574 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
May 14 05:09:46.825581 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
May 14 05:09:46.825590 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
May 14 05:09:46.825598 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
May 14 05:09:46.825605 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
May 14 05:09:46.825612 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
May 14 05:09:46.825619 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
May 14 05:09:46.825627 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
May 14 05:09:46.825634 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
May 14 05:09:46.825641 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
May 14 05:09:46.825648 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
May 14 05:09:46.825657 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
May 14 05:09:46.825664 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
May 14 05:09:46.825671 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
May 14 05:09:46.825678 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
May 14 05:09:46.825686 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 14 05:09:46.825693 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
May 14 05:09:46.825700 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 14 05:09:46.825707 kernel: efi: EFI v2.7 by EDK II
May 14 05:09:46.825714 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
May 14 05:09:46.825721 kernel: random: crng init done
May 14 05:09:46.825729 kernel: efi: Remove mem149: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
May 14 05:09:46.825736 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
May 14 05:09:46.825745 kernel: secureboot: Secure boot disabled
May 14 05:09:46.825752 kernel: SMBIOS 2.8 present.
May 14 05:09:46.825760 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
May 14 05:09:46.825767 kernel: DMI: Memory slots populated: 1/1
May 14 05:09:46.825774 kernel: Hypervisor detected: KVM
May 14 05:09:46.825781 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 14 05:09:46.825788 kernel: kvm-clock: using sched offset of 3576378374 cycles
May 14 05:09:46.825795 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 14 05:09:46.825803 kernel: tsc: Detected 2794.748 MHz processor
May 14 05:09:46.825811 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 14 05:09:46.825818 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 14 05:09:46.825828 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
May 14 05:09:46.825835 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
May 14 05:09:46.825843 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 14 05:09:46.825850 kernel: Using GB pages for direct mapping
May 14 05:09:46.825857 kernel: ACPI: Early table checksum verification disabled
May 14 05:09:46.825865 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
May 14 05:09:46.825872 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
May 14 05:09:46.825880 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 14 05:09:46.825887 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 05:09:46.825897 kernel: ACPI: FACS 0x000000009CBDD000 000040
May 14 05:09:46.825904 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 05:09:46.825911 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 05:09:46.825919 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 05:09:46.825926 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 05:09:46.825933 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
May 14 05:09:46.825941 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
May 14 05:09:46.825948 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
May 14 05:09:46.825957 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
May 14 05:09:46.825964 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
May 14 05:09:46.825971 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
May 14 05:09:46.825979 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
May 14 05:09:46.825986 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
May 14 05:09:46.825993 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
May 14 05:09:46.826000 kernel: No NUMA configuration found
May 14 05:09:46.826008 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
May 14 05:09:46.826015 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
May 14 05:09:46.826025 kernel: Zone ranges:
May 14 05:09:46.826032 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 14 05:09:46.826046 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
May 14 05:09:46.826053 kernel: Normal empty
May 14 05:09:46.826060 kernel: Device empty
May 14 05:09:46.826068 kernel: Movable zone start for each node
May 14 05:09:46.826075 kernel: Early memory node ranges
May 14 05:09:46.826082 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
May 14 05:09:46.826090 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
May 14 05:09:46.826097 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
May 14 05:09:46.826107 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
May 14 05:09:46.826114 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
May 14 05:09:46.826121 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
May 14 05:09:46.826128 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
May 14 05:09:46.826136 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
May 14 05:09:46.826143 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
May 14 05:09:46.826150 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 14 05:09:46.826159 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
May 14 05:09:46.826176 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
May 14 05:09:46.826184 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 14 05:09:46.826191 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
May 14 05:09:46.826199 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
May 14 05:09:46.826209 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
May 14 05:09:46.826216 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
May 14 05:09:46.826224 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
May 14 05:09:46.826231 kernel: ACPI: PM-Timer IO Port: 0x608
May 14 05:09:46.826239 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 14 05:09:46.826249 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 14 05:09:46.826257 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 14 05:09:46.826264 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 14 05:09:46.826272 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 14 05:09:46.826279 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 14 05:09:46.826287 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 14 05:09:46.826294 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 14 05:09:46.826302 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 14 05:09:46.826309 kernel: TSC deadline timer available
May 14 05:09:46.826341 kernel: CPU topo: Max. logical packages: 1
May 14 05:09:46.826348 kernel: CPU topo: Max. logical dies: 1
May 14 05:09:46.826356 kernel: CPU topo: Max. dies per package: 1
May 14 05:09:46.826363 kernel: CPU topo: Max. threads per core: 1
May 14 05:09:46.826371 kernel: CPU topo: Num. cores per package: 4
May 14 05:09:46.826378 kernel: CPU topo: Num. threads per package: 4
May 14 05:09:46.826386 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
May 14 05:09:46.826393 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 14 05:09:46.826401 kernel: kvm-guest: KVM setup pv remote TLB flush
May 14 05:09:46.826408 kernel: kvm-guest: setup PV sched yield
May 14 05:09:46.826418 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
May 14 05:09:46.826426 kernel: Booting paravirtualized kernel on KVM
May 14 05:09:46.826434 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 14 05:09:46.826442 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
May 14 05:09:46.826449 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
May 14 05:09:46.826457 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
May 14 05:09:46.826465 kernel: pcpu-alloc: [0] 0 1 2 3
May 14 05:09:46.826472 kernel: kvm-guest: PV spinlocks enabled
May 14 05:09:46.826480 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 14 05:09:46.826491 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bd5d20a479abde3485dc2e7b97a54e804895b9926289ae86f84794bef32a40f3
May 14 05:09:46.826499 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 14 05:09:46.826506 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 14 05:09:46.826514 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 14 05:09:46.826522 kernel: Fallback order for Node 0: 0
May 14 05:09:46.826529 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
May 14 05:09:46.826537 kernel: Policy zone: DMA32
May 14 05:09:46.826544 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 14 05:09:46.826554 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 14 05:09:46.826562 kernel: ftrace: allocating 40065 entries in 157 pages
May 14 05:09:46.826569 kernel: ftrace: allocated 157 pages with 5 groups
May 14 05:09:46.826577 kernel: Dynamic Preempt: voluntary
May 14 05:09:46.826585 kernel: rcu: Preemptible hierarchical RCU implementation.
May 14 05:09:46.826593 kernel: rcu: RCU event tracing is enabled.
May 14 05:09:46.826601 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 14 05:09:46.826609 kernel: Trampoline variant of Tasks RCU enabled.
May 14 05:09:46.826616 kernel: Rude variant of Tasks RCU enabled.
May 14 05:09:46.826626 kernel: Tracing variant of Tasks RCU enabled.
May 14 05:09:46.826634 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 14 05:09:46.826642 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 14 05:09:46.826650 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 14 05:09:46.826657 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 14 05:09:46.826665 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 14 05:09:46.826673 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 14 05:09:46.826681 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 14 05:09:46.826688 kernel: Console: colour dummy device 80x25
May 14 05:09:46.826698 kernel: printk: legacy console [ttyS0] enabled
May 14 05:09:46.826706 kernel: ACPI: Core revision 20240827
May 14 05:09:46.826714 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 14 05:09:46.826721 kernel: APIC: Switch to symmetric I/O mode setup
May 14 05:09:46.826729 kernel: x2apic enabled
May 14 05:09:46.826736 kernel: APIC: Switched APIC routing to: physical x2apic
May 14 05:09:46.826744 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 14 05:09:46.826752 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 14 05:09:46.826759 kernel: kvm-guest: setup PV IPIs
May 14 05:09:46.826767 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 14 05:09:46.826777 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
May 14 05:09:46.826785 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
May 14 05:09:46.826793 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 14 05:09:46.826800 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 14 05:09:46.826808 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 14 05:09:46.826815 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 14 05:09:46.826823 kernel: Spectre V2 : Mitigation: Retpolines
May 14 05:09:46.826830 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
May 14 05:09:46.826840 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
May 14 05:09:46.826848 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 14 05:09:46.826856 kernel: RETBleed: Mitigation: untrained return thunk
May 14 05:09:46.826863 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 14 05:09:46.826871 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 14 05:09:46.826879 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 14 05:09:46.826887 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 14 05:09:46.826894 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 14 05:09:46.826902 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 14 05:09:46.826912 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 14 05:09:46.826920 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 14 05:09:46.826927 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 14 05:09:46.826935 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 14 05:09:46.826943 kernel: Freeing SMP alternatives memory: 32K
May 14 05:09:46.826950 kernel: pid_max: default: 32768 minimum: 301
May 14 05:09:46.826958 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 14 05:09:46.826965 kernel: landlock: Up and running.
May 14 05:09:46.826986 kernel: SELinux: Initializing.
May 14 05:09:46.827002 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 14 05:09:46.827010 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 14 05:09:46.827017 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 14 05:09:46.827025 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 14 05:09:46.827033 kernel: ... version: 0
May 14 05:09:46.827045 kernel: ... bit width: 48
May 14 05:09:46.827053 kernel: ... generic registers: 6
May 14 05:09:46.827061 kernel: ... value mask: 0000ffffffffffff
May 14 05:09:46.827068 kernel: ... max period: 00007fffffffffff
May 14 05:09:46.827079 kernel: ... fixed-purpose events: 0
May 14 05:09:46.827087 kernel: ... event mask: 000000000000003f
May 14 05:09:46.827095 kernel: signal: max sigframe size: 1776
May 14 05:09:46.827102 kernel: rcu: Hierarchical SRCU implementation.
May 14 05:09:46.827110 kernel: rcu: Max phase no-delay instances is 400.
May 14 05:09:46.827118 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 14 05:09:46.827126 kernel: smp: Bringing up secondary CPUs ...
May 14 05:09:46.827133 kernel: smpboot: x86: Booting SMP configuration:
May 14 05:09:46.827141 kernel: .... node #0, CPUs: #1 #2 #3
May 14 05:09:46.827151 kernel: smp: Brought up 1 node, 4 CPUs
May 14 05:09:46.827158 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
May 14 05:09:46.827166 kernel: Memory: 2422668K/2565800K available (14336K kernel code, 2438K rwdata, 9944K rodata, 54416K init, 2544K bss, 137196K reserved, 0K cma-reserved)
May 14 05:09:46.827174 kernel: devtmpfs: initialized
May 14 05:09:46.827182 kernel: x86/mm: Memory block size: 128MB
May 14 05:09:46.827189 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
May 14 05:09:46.827197 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
May 14 05:09:46.827205 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
May 14 05:09:46.827212 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
May 14 05:09:46.827222 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
May 14 05:09:46.827230 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
May 14 05:09:46.827238 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 14 05:09:46.827246 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 14 05:09:46.827253 kernel: pinctrl core: initialized pinctrl subsystem
May 14 05:09:46.827261 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 14 05:09:46.827268 kernel: audit: initializing netlink subsys (disabled)
May 14 05:09:46.827276 kernel: audit: type=2000 audit(1747199385.548:1): state=initialized audit_enabled=0 res=1
May 14 05:09:46.827286 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 14 05:09:46.827294 kernel: thermal_sys: Registered thermal governor 'user_space'
May 14 05:09:46.827301 kernel: cpuidle: using governor menu
May 14 05:09:46.827309 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 14 05:09:46.827330 kernel: dca service started, version 1.12.1
May 14 05:09:46.827338 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
May 14 05:09:46.827346 kernel: PCI: Using configuration type 1 for base access
May 14 05:09:46.827353 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 14 05:09:46.827361 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 14 05:09:46.827371 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 14 05:09:46.827379 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 14 05:09:46.827386 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 14 05:09:46.827394 kernel: ACPI: Added _OSI(Module Device)
May 14 05:09:46.827401 kernel: ACPI: Added _OSI(Processor Device)
May 14 05:09:46.827409 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 14 05:09:46.827416 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 14 05:09:46.827424 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 14 05:09:46.827432 kernel: ACPI: Interpreter enabled
May 14 05:09:46.827441 kernel: ACPI: PM: (supports S0 S3 S5)
May 14 05:09:46.827449 kernel: ACPI: Using IOAPIC for interrupt routing
May 14 05:09:46.827456 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 14 05:09:46.827464 kernel: PCI: Using E820 reservations for host bridge windows
May 14 05:09:46.827472 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 14 05:09:46.827479 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 14 05:09:46.827648 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 14 05:09:46.827767 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 14 05:09:46.827884 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 14 05:09:46.827895 kernel: PCI host bridge to bus 0000:00
May 14 05:09:46.828014 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 14 05:09:46.828129 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 14 05:09:46.828247 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 14 05:09:46.828401 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
May 14 05:09:46.828509 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
May 14 05:09:46.828617 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
May 14 05:09:46.828722 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 14 05:09:46.828853 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
May 14 05:09:46.828984 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
May 14 05:09:46.829110 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
May 14 05:09:46.829272 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
May 14 05:09:46.829415 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
May 14 05:09:46.829530 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 14 05:09:46.829657 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
May 14 05:09:46.829773 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
May 14 05:09:46.829889 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
May 14 05:09:46.830003 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
May 14 05:09:46.830139 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
May 14 05:09:46.830260 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
May 14 05:09:46.830392 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
May 14 05:09:46.830508 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
May 14 05:09:46.830631 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
May 14 05:09:46.830744 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
May 14 05:09:46.830858 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
May 14 05:09:46.830971 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
May 14 05:09:46.831099 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
May 14 05:09:46.831221 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
May 14 05:09:46.831349 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 14 05:09:46.831475 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
May 14 05:09:46.831589 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
May 14 05:09:46.831704 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
May 14 05:09:46.831829 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
May 14 05:09:46.831948 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
May 14 05:09:46.831959 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 14 05:09:46.831967 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 14 05:09:46.831975 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 14 05:09:46.831982 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 14 05:09:46.831990 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 14 05:09:46.831997 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 14 05:09:46.832005 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 14 05:09:46.832016 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 14 05:09:46.832023 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 14 05:09:46.832031 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 14 05:09:46.832038 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 14 05:09:46.832054 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 14 05:09:46.832062 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 14 05:09:46.832070 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 14 05:09:46.832078 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 14 05:09:46.832085 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 14 05:09:46.832096 kernel: iommu: Default domain type: Translated
May 14 05:09:46.832103 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 14 05:09:46.832111 kernel: efivars: Registered efivars operations
May 14 05:09:46.832119 kernel: PCI: Using ACPI for IRQ routing
May 14 05:09:46.832126 kernel: PCI: pci_cache_line_size set to 64 bytes
May 14 05:09:46.832134 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
May 14 05:09:46.832141 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
May 14 05:09:46.832149 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
May 14 05:09:46.832156 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
May 14 05:09:46.832166 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
May 14 05:09:46.832174 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
May 14 05:09:46.832182 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
May 14 05:09:46.832189 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
May 14 05:09:46.832304 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 14 05:09:46.832444 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 14 05:09:46.832577 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 14 05:09:46.832592 kernel: vgaarb: loaded
May 14 05:09:46.832601 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 14 05:09:46.832608 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 14 05:09:46.832618 kernel: clocksource: Switched to clocksource kvm-clock
May 14 05:09:46.832626 kernel: VFS: Disk quotas dquot_6.6.0
May 14 05:09:46.832636 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 14 05:09:46.832644 kernel: pnp: PnP ACPI init
May 14 05:09:46.832791 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
May 14 05:09:46.832805 kernel: pnp: PnP ACPI: found 6 devices
May 14 05:09:46.832816 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 14 05:09:46.832824 kernel: NET: Registered PF_INET protocol family
May 14 05:09:46.832832 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 14 05:09:46.832840 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 14 05:09:46.832848 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 14 05:09:46.832856 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 14 05:09:46.832864 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 14 05:09:46.832872 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 14 05:09:46.832882 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 14 05:09:46.832890 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 14 05:09:46.832898 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 14 05:09:46.832906 kernel: NET: Registered PF_XDP protocol family
May 14 05:09:46.833023 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
May 14 05:09:46.833148 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
May 14 05:09:46.833253 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 14 05:09:46.833375 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 14 05:09:46.833485 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 14 05:09:46.833589 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
May 14 05:09:46.833697 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
May 14 05:09:46.833800 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
May 14 05:09:46.833810 kernel: PCI: CLS 0 bytes, default 64
May 14 05:09:46.833819 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
May 14 05:09:46.833827 kernel: Initialise system trusted keyrings
May 14 05:09:46.833838 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 14 05:09:46.833846 kernel: Key type asymmetric registered
May 14 05:09:46.833854 kernel: Asymmetric key parser 'x509' registered
May 14 05:09:46.833862 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 14 05:09:46.833870 kernel: io scheduler mq-deadline registered
May 14 05:09:46.833878 kernel: io scheduler kyber registered
May 14 05:09:46.833886 kernel: io scheduler bfq registered
May 14 05:09:46.833894 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 14 05:09:46.833904 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 14 05:09:46.833912 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 14 05:09:46.833920 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 14 05:09:46.833928 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 14 05:09:46.833937 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 14 05:09:46.833945 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 14 05:09:46.833953 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 14 05:09:46.833961 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 14 05:09:46.834086 kernel: rtc_cmos 00:04: RTC can wake from S4
May 14 05:09:46.834104 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 14 05:09:46.834212 kernel: rtc_cmos 00:04: registered as rtc0
May 14 05:09:46.834346 kernel: rtc_cmos 00:04: setting system clock to 2025-05-14T05:09:46 UTC (1747199386)
May 14 05:09:46.834457 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
May 14 05:09:46.834468 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 14 05:09:46.834476 kernel: efifb: probing for efifb
May 14 05:09:46.834484 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
May 14 05:09:46.834496 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
May 14 05:09:46.834504 kernel: efifb: scrolling: redraw
May 14 05:09:46.834511 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
May 14 05:09:46.834519 kernel: Console: switching to colour frame buffer device 160x50
May 14 05:09:46.834527 kernel: fb0: EFI VGA frame buffer device
May 14 05:09:46.834535 kernel: pstore: Using crash dump compression: deflate
May 14 05:09:46.834543 kernel: pstore: Registered efi_pstore as persistent store backend
May 14 05:09:46.834551 kernel: NET: Registered PF_INET6 protocol family
May 14 05:09:46.834559 kernel: Segment Routing with IPv6
May 14 05:09:46.834567 kernel: In-situ OAM (IOAM) with IPv6
May 14 05:09:46.834577 kernel: NET: Registered PF_PACKET protocol family
May 14 05:09:46.834585 kernel: Key type dns_resolver registered
May 14 05:09:46.834592 kernel: IPI shorthand broadcast: enabled
May 14 05:09:46.834600 kernel: sched_clock: Marking stable (2771002099, 158878735)->(2944011604, -14130770)
May 14 05:09:46.834608 kernel: registered taskstats version 1
May 14 05:09:46.834616 kernel: Loading compiled-in X.509 certificates
May 14 05:09:46.834624 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.20-flatcar: de56839f264dfa1264ece2be0efda2f53967cc2a'
May 14 05:09:46.834632 kernel: Demotion targets for Node 0: null
May 14 05:09:46.834640 kernel: Key 
type .fscrypt registered May 14 05:09:46.834650 kernel: Key type fscrypt-provisioning registered May 14 05:09:46.834658 kernel: ima: No TPM chip found, activating TPM-bypass! May 14 05:09:46.834666 kernel: ima: Allocated hash algorithm: sha1 May 14 05:09:46.834674 kernel: ima: No architecture policies found May 14 05:09:46.834682 kernel: clk: Disabling unused clocks May 14 05:09:46.834690 kernel: Warning: unable to open an initial console. May 14 05:09:46.834698 kernel: Freeing unused kernel image (initmem) memory: 54416K May 14 05:09:46.834706 kernel: Write protecting the kernel read-only data: 24576k May 14 05:09:46.834716 kernel: Freeing unused kernel image (rodata/data gap) memory: 296K May 14 05:09:46.834724 kernel: Run /init as init process May 14 05:09:46.834731 kernel: with arguments: May 14 05:09:46.834739 kernel: /init May 14 05:09:46.834747 kernel: with environment: May 14 05:09:46.834755 kernel: HOME=/ May 14 05:09:46.834762 kernel: TERM=linux May 14 05:09:46.834770 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 14 05:09:46.834779 systemd[1]: Successfully made /usr/ read-only. May 14 05:09:46.834792 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 14 05:09:46.834802 systemd[1]: Detected virtualization kvm. May 14 05:09:46.834810 systemd[1]: Detected architecture x86-64. May 14 05:09:46.834818 systemd[1]: Running in initrd. May 14 05:09:46.834826 systemd[1]: No hostname configured, using default hostname. May 14 05:09:46.834835 systemd[1]: Hostname set to . May 14 05:09:46.834843 systemd[1]: Initializing machine ID from VM UUID. May 14 05:09:46.834854 systemd[1]: Queued start job for default target initrd.target. 
May 14 05:09:46.834863 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 05:09:46.834871 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 14 05:09:46.834880 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 14 05:09:46.834889 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 14 05:09:46.834897 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 14 05:09:46.834907 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 14 05:09:46.834919 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 14 05:09:46.834928 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 14 05:09:46.834936 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 05:09:46.834945 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 14 05:09:46.834953 systemd[1]: Reached target paths.target - Path Units.
May 14 05:09:46.834961 systemd[1]: Reached target slices.target - Slice Units.
May 14 05:09:46.834970 systemd[1]: Reached target swap.target - Swaps.
May 14 05:09:46.834978 systemd[1]: Reached target timers.target - Timer Units.
May 14 05:09:46.834987 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 14 05:09:46.834997 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 14 05:09:46.835006 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 14 05:09:46.835014 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 14 05:09:46.835023 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 14 05:09:46.835031 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 14 05:09:46.835048 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 05:09:46.835056 systemd[1]: Reached target sockets.target - Socket Units.
May 14 05:09:46.835065 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 14 05:09:46.835076 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 14 05:09:46.835085 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 14 05:09:46.835094 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 14 05:09:46.835102 systemd[1]: Starting systemd-fsck-usr.service...
May 14 05:09:46.835111 systemd[1]: Starting systemd-journald.service - Journal Service...
May 14 05:09:46.835119 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 14 05:09:46.835128 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 05:09:46.835136 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 14 05:09:46.835149 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 05:09:46.835157 systemd[1]: Finished systemd-fsck-usr.service.
May 14 05:09:46.835185 systemd-journald[220]: Collecting audit messages is disabled.
May 14 05:09:46.835207 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 14 05:09:46.835216 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 14 05:09:46.835225 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 14 05:09:46.835233 systemd-journald[220]: Journal started
May 14 05:09:46.835254 systemd-journald[220]: Runtime Journal (/run/log/journal/0ba2adc9566e4d3db1bfed1d43acbe2f) is 6M, max 48.5M, 42.4M free.
May 14 05:09:46.827055 systemd-modules-load[222]: Inserted module 'overlay'
May 14 05:09:46.838563 systemd[1]: Started systemd-journald.service - Journal Service.
May 14 05:09:46.836861 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 05:09:46.851529 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 14 05:09:46.854165 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 14 05:09:46.856951 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 14 05:09:46.860146 kernel: Bridge firewalling registered
May 14 05:09:46.859297 systemd-modules-load[222]: Inserted module 'br_netfilter'
May 14 05:09:46.862540 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 14 05:09:46.862797 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 05:09:46.865047 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 14 05:09:46.871397 systemd-tmpfiles[238]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
May 14 05:09:46.876181 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 05:09:46.876824 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 14 05:09:46.879213 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 14 05:09:46.883202 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 05:09:46.889937 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 14 05:09:46.906949 dracut-cmdline[263]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bd5d20a479abde3485dc2e7b97a54e804895b9926289ae86f84794bef32a40f3
May 14 05:09:46.924452 systemd-resolved[259]: Positive Trust Anchors:
May 14 05:09:46.924468 systemd-resolved[259]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 14 05:09:46.924498 systemd-resolved[259]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 14 05:09:46.926914 systemd-resolved[259]: Defaulting to hostname 'linux'.
May 14 05:09:46.927979 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 14 05:09:46.935913 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 14 05:09:47.011345 kernel: SCSI subsystem initialized
May 14 05:09:47.020349 kernel: Loading iSCSI transport class v2.0-870.
May 14 05:09:47.031348 kernel: iscsi: registered transport (tcp)
May 14 05:09:47.051640 kernel: iscsi: registered transport (qla4xxx)
May 14 05:09:47.051668 kernel: QLogic iSCSI HBA Driver
May 14 05:09:47.071557 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 14 05:09:47.100240 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 14 05:09:47.101312 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 14 05:09:47.155960 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 14 05:09:47.159330 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 14 05:09:47.213342 kernel: raid6: avx2x4 gen() 30215 MB/s
May 14 05:09:47.230348 kernel: raid6: avx2x2 gen() 31218 MB/s
May 14 05:09:47.247419 kernel: raid6: avx2x1 gen() 26021 MB/s
May 14 05:09:47.247443 kernel: raid6: using algorithm avx2x2 gen() 31218 MB/s
May 14 05:09:47.265424 kernel: raid6: .... xor() 19977 MB/s, rmw enabled
May 14 05:09:47.265452 kernel: raid6: using avx2x2 recovery algorithm
May 14 05:09:47.285339 kernel: xor: automatically using best checksumming function avx
May 14 05:09:47.447354 kernel: Btrfs loaded, zoned=no, fsverity=no
May 14 05:09:47.455453 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 14 05:09:47.457199 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 05:09:47.487334 systemd-udevd[472]: Using default interface naming scheme 'v255'.
May 14 05:09:47.492493 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 05:09:47.493365 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 14 05:09:47.516660 dracut-pre-trigger[476]: rd.md=0: removing MD RAID activation
May 14 05:09:47.544911 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 14 05:09:47.548584 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 14 05:09:47.623749 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 05:09:47.627717 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 14 05:09:47.655342 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
May 14 05:09:47.680257 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 14 05:09:47.681454 kernel: cryptd: max_cpu_qlen set to 1000
May 14 05:09:47.681467 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
May 14 05:09:47.681477 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 14 05:09:47.681487 kernel: GPT:9289727 != 19775487
May 14 05:09:47.681497 kernel: GPT:Alternate GPT header not at the end of the disk.
May 14 05:09:47.681506 kernel: GPT:9289727 != 19775487
May 14 05:09:47.681516 kernel: GPT: Use GNU Parted to correct GPT errors.
May 14 05:09:47.681526 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 14 05:09:47.685339 kernel: AES CTR mode by8 optimization enabled
May 14 05:09:47.686348 kernel: libata version 3.00 loaded.
May 14 05:09:47.707724 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 14 05:09:47.707980 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 14 05:09:47.712701 kernel: ahci 0000:00:1f.2: version 3.0
May 14 05:09:47.736062 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 14 05:09:47.736077 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
May 14 05:09:47.736221 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
May 14 05:09:47.736370 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 14 05:09:47.736499 kernel: scsi host0: ahci
May 14 05:09:47.736647 kernel: scsi host1: ahci
May 14 05:09:47.736782 kernel: scsi host2: ahci
May 14 05:09:47.736935 kernel: scsi host3: ahci
May 14 05:09:47.737092 kernel: scsi host4: ahci
May 14 05:09:47.737231 kernel: scsi host5: ahci
May 14 05:09:47.737389 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 0
May 14 05:09:47.737401 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 0
May 14 05:09:47.737411 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 0
May 14 05:09:47.737421 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 0
May 14 05:09:47.737435 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 0
May 14 05:09:47.737445 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 0
May 14 05:09:47.713444 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 14 05:09:47.718872 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 05:09:47.746487 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 14 05:09:47.749421 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 05:09:47.762752 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 14 05:09:47.773552 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 14 05:09:47.794976 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 14 05:09:47.797559 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 14 05:09:47.802897 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 14 05:09:47.803006 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 14 05:09:47.803068 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 14 05:09:47.807371 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 14 05:09:47.815867 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 05:09:47.817289 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 14 05:09:47.825248 disk-uuid[633]: Primary Header is updated.
May 14 05:09:47.825248 disk-uuid[633]: Secondary Entries is updated.
May 14 05:09:47.825248 disk-uuid[633]: Secondary Header is updated.
May 14 05:09:47.830341 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 14 05:09:47.835338 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 14 05:09:47.835640 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 05:09:48.040361 kernel: ata1: SATA link down (SStatus 0 SControl 300)
May 14 05:09:48.040428 kernel: ata2: SATA link down (SStatus 0 SControl 300)
May 14 05:09:48.048344 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 14 05:09:48.048377 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 14 05:09:48.048388 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 14 05:09:48.049349 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
May 14 05:09:48.050528 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
May 14 05:09:48.050541 kernel: ata3.00: applying bridge limits
May 14 05:09:48.051551 kernel: ata3.00: configured for UDMA/100
May 14 05:09:48.052351 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
May 14 05:09:48.103851 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
May 14 05:09:48.123891 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 14 05:09:48.123904 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
May 14 05:09:48.478133 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 14 05:09:48.479908 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 14 05:09:48.481613 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 14 05:09:48.484017 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 14 05:09:48.487118 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 14 05:09:48.522068 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 14 05:09:48.835340 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 14 05:09:48.835743 disk-uuid[636]: The operation has completed successfully.
May 14 05:09:48.865876 systemd[1]: disk-uuid.service: Deactivated successfully.
May 14 05:09:48.865995 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 14 05:09:48.897953 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 14 05:09:48.922442 sh[667]: Success
May 14 05:09:48.941237 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 14 05:09:48.941271 kernel: device-mapper: uevent: version 1.0.3
May 14 05:09:48.941289 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
May 14 05:09:48.950453 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
May 14 05:09:48.981094 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 14 05:09:48.983097 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 14 05:09:49.000703 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 14 05:09:49.006438 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
May 14 05:09:49.006472 kernel: BTRFS: device fsid 522ba959-9153-4a92-926e-3277bc1060e7 devid 1 transid 40 /dev/mapper/usr (253:0) scanned by mount (679)
May 14 05:09:49.007718 kernel: BTRFS info (device dm-0): first mount of filesystem 522ba959-9153-4a92-926e-3277bc1060e7
May 14 05:09:49.008590 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 14 05:09:49.008602 kernel: BTRFS info (device dm-0): using free-space-tree
May 14 05:09:49.013033 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 14 05:09:49.015184 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
May 14 05:09:49.017413 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 14 05:09:49.019995 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 14 05:09:49.022484 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 14 05:09:49.047371 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (712)
May 14 05:09:49.047432 kernel: BTRFS info (device vda6): first mount of filesystem 27ac52bc-c86c-4e09-9b91-c3f9e8d3f2a0
May 14 05:09:49.047443 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 14 05:09:49.048822 kernel: BTRFS info (device vda6): using free-space-tree
May 14 05:09:49.055360 kernel: BTRFS info (device vda6): last unmount of filesystem 27ac52bc-c86c-4e09-9b91-c3f9e8d3f2a0
May 14 05:09:49.056141 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 14 05:09:49.058465 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 14 05:09:49.136386 ignition[753]: Ignition 2.21.0
May 14 05:09:49.136400 ignition[753]: Stage: fetch-offline
May 14 05:09:49.136434 ignition[753]: no configs at "/usr/lib/ignition/base.d"
May 14 05:09:49.138605 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 14 05:09:49.136443 ignition[753]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 05:09:49.136523 ignition[753]: parsed url from cmdline: ""
May 14 05:09:49.142895 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 14 05:09:49.136527 ignition[753]: no config URL provided
May 14 05:09:49.136532 ignition[753]: reading system config file "/usr/lib/ignition/user.ign"
May 14 05:09:49.136539 ignition[753]: no config at "/usr/lib/ignition/user.ign"
May 14 05:09:49.136563 ignition[753]: op(1): [started] loading QEMU firmware config module
May 14 05:09:49.136569 ignition[753]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 14 05:09:49.146883 ignition[753]: op(1): [finished] loading QEMU firmware config module
May 14 05:09:49.181796 systemd-networkd[856]: lo: Link UP
May 14 05:09:49.181805 systemd-networkd[856]: lo: Gained carrier
May 14 05:09:49.183228 systemd-networkd[856]: Enumeration completed
May 14 05:09:49.183801 systemd-networkd[856]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 05:09:49.183805 systemd-networkd[856]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 14 05:09:49.184663 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 14 05:09:49.185554 systemd-networkd[856]: eth0: Link UP
May 14 05:09:49.185557 systemd-networkd[856]: eth0: Gained carrier
May 14 05:09:49.185565 systemd-networkd[856]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 05:09:49.186314 systemd[1]: Reached target network.target - Network.
May 14 05:09:49.200730 ignition[753]: parsing config with SHA512: 2603f2fcd0c6a0560aa6082e8ad5a1d11f4b6ca38087c6847727b37da70af4fb58b83b1a139b9834b4cac82e4201ed82ad4059baeebe05f08b258b73035e17b6
May 14 05:09:49.203358 systemd-networkd[856]: eth0: DHCPv4 address 10.0.0.84/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 14 05:09:49.204492 unknown[753]: fetched base config from "system"
May 14 05:09:49.204499 unknown[753]: fetched user config from "qemu"
May 14 05:09:49.204856 ignition[753]: fetch-offline: fetch-offline passed
May 14 05:09:49.204925 ignition[753]: Ignition finished successfully
May 14 05:09:49.209311 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 14 05:09:49.210726 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 14 05:09:49.211836 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 14 05:09:49.260124 ignition[862]: Ignition 2.21.0
May 14 05:09:49.260136 ignition[862]: Stage: kargs
May 14 05:09:49.260266 ignition[862]: no configs at "/usr/lib/ignition/base.d"
May 14 05:09:49.260278 ignition[862]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 05:09:49.262107 ignition[862]: kargs: kargs passed
May 14 05:09:49.262204 ignition[862]: Ignition finished successfully
May 14 05:09:49.266667 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 14 05:09:49.268660 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 14 05:09:49.291399 ignition[871]: Ignition 2.21.0
May 14 05:09:49.291411 ignition[871]: Stage: disks
May 14 05:09:49.291533 ignition[871]: no configs at "/usr/lib/ignition/base.d"
May 14 05:09:49.291544 ignition[871]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 05:09:49.293847 ignition[871]: disks: disks passed
May 14 05:09:49.293908 ignition[871]: Ignition finished successfully
May 14 05:09:49.297724 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 14 05:09:49.299212 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 14 05:09:49.299873 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 14 05:09:49.301931 systemd[1]: Reached target local-fs.target - Local File Systems.
May 14 05:09:49.304208 systemd[1]: Reached target sysinit.target - System Initialization.
May 14 05:09:49.306977 systemd[1]: Reached target basic.target - Basic System.
May 14 05:09:49.308188 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 14 05:09:49.332706 systemd-resolved[259]: Detected conflict on linux IN A 10.0.0.84
May 14 05:09:49.332720 systemd-resolved[259]: Hostname conflict, changing published hostname from 'linux' to 'linux5'.
May 14 05:09:49.335918 systemd-fsck[881]: ROOT: clean, 15/553520 files, 52789/553472 blocks
May 14 05:09:49.344053 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 14 05:09:49.346568 systemd[1]: Mounting sysroot.mount - /sysroot...
May 14 05:09:49.455349 kernel: EXT4-fs (vda9): mounted filesystem 7fda6268-ffdc-406a-8662-dffb0e9a24fa r/w with ordered data mode. Quota mode: none.
May 14 05:09:49.456072 systemd[1]: Mounted sysroot.mount - /sysroot.
May 14 05:09:49.456602 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 14 05:09:49.460088 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 14 05:09:49.460985 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 14 05:09:49.462800 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 14 05:09:49.462840 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 14 05:09:49.462862 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 14 05:09:49.478302 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 14 05:09:49.481201 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 14 05:09:49.485342 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (889)
May 14 05:09:49.488208 kernel: BTRFS info (device vda6): first mount of filesystem 27ac52bc-c86c-4e09-9b91-c3f9e8d3f2a0
May 14 05:09:49.488237 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 14 05:09:49.488248 kernel: BTRFS info (device vda6): using free-space-tree
May 14 05:09:49.493017 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 14 05:09:49.516094 initrd-setup-root[914]: cut: /sysroot/etc/passwd: No such file or directory
May 14 05:09:49.520977 initrd-setup-root[921]: cut: /sysroot/etc/group: No such file or directory
May 14 05:09:49.524356 initrd-setup-root[928]: cut: /sysroot/etc/shadow: No such file or directory
May 14 05:09:49.528763 initrd-setup-root[935]: cut: /sysroot/etc/gshadow: No such file or directory
May 14 05:09:49.612247 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 14 05:09:49.614506 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 14 05:09:49.615216 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 14 05:09:49.637365 kernel: BTRFS info (device vda6): last unmount of filesystem 27ac52bc-c86c-4e09-9b91-c3f9e8d3f2a0 May 14 05:09:49.648905 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 14 05:09:49.663716 ignition[1006]: INFO : Ignition 2.21.0 May 14 05:09:49.663716 ignition[1006]: INFO : Stage: mount May 14 05:09:49.666193 ignition[1006]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 05:09:49.666193 ignition[1006]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 05:09:49.668502 ignition[1006]: INFO : mount: mount passed May 14 05:09:49.668502 ignition[1006]: INFO : Ignition finished successfully May 14 05:09:49.670166 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 14 05:09:49.672198 systemd[1]: Starting ignition-files.service - Ignition (files)... May 14 05:09:50.005754 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 14 05:09:50.007176 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 14 05:09:50.034689 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (1018) May 14 05:09:50.034736 kernel: BTRFS info (device vda6): first mount of filesystem 27ac52bc-c86c-4e09-9b91-c3f9e8d3f2a0 May 14 05:09:50.034758 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 14 05:09:50.035553 kernel: BTRFS info (device vda6): using free-space-tree May 14 05:09:50.039050 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 14 05:09:50.065180 ignition[1035]: INFO : Ignition 2.21.0 May 14 05:09:50.065180 ignition[1035]: INFO : Stage: files May 14 05:09:50.066905 ignition[1035]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 05:09:50.066905 ignition[1035]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 05:09:50.069240 ignition[1035]: DEBUG : files: compiled without relabeling support, skipping May 14 05:09:50.069240 ignition[1035]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 14 05:09:50.069240 ignition[1035]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 14 05:09:50.073409 ignition[1035]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 14 05:09:50.073409 ignition[1035]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 14 05:09:50.073409 ignition[1035]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 14 05:09:50.072509 unknown[1035]: wrote ssh authorized keys file for user: core May 14 05:09:50.078819 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 14 05:09:50.078819 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 14 05:09:50.849475 systemd-networkd[856]: eth0: Gained IPv6LL May 14 05:09:50.861789 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 14 05:09:51.050407 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 14 05:09:51.052443 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 14 05:09:51.054511 ignition[1035]: INFO : files: createFilesystemsFiles: 
createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 14 05:09:51.056413 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 14 05:09:51.058404 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 14 05:09:51.060308 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 14 05:09:51.062281 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 14 05:09:51.064218 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 14 05:09:51.066220 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 14 05:09:51.071997 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 14 05:09:51.074036 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 14 05:09:51.076133 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 14 05:09:51.078901 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 14 05:09:51.078901 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 14 05:09:51.078901 ignition[1035]: INFO : files: 
createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 May 14 05:09:51.575379 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 14 05:09:51.999582 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 14 05:09:51.999582 ignition[1035]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 14 05:09:52.003282 ignition[1035]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 14 05:09:52.007080 ignition[1035]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 14 05:09:52.007080 ignition[1035]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 14 05:09:52.007080 ignition[1035]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" May 14 05:09:52.011832 ignition[1035]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 14 05:09:52.011832 ignition[1035]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 14 05:09:52.011832 ignition[1035]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" May 14 05:09:52.011832 ignition[1035]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" May 14 05:09:52.025041 ignition[1035]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" May 14 05:09:52.029194 ignition[1035]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 14 
05:09:52.030741 ignition[1035]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" May 14 05:09:52.030741 ignition[1035]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" May 14 05:09:52.030741 ignition[1035]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" May 14 05:09:52.030741 ignition[1035]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" May 14 05:09:52.030741 ignition[1035]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" May 14 05:09:52.030741 ignition[1035]: INFO : files: files passed May 14 05:09:52.030741 ignition[1035]: INFO : Ignition finished successfully May 14 05:09:52.039367 systemd[1]: Finished ignition-files.service - Ignition (files). May 14 05:09:52.042580 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 14 05:09:52.044927 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 14 05:09:52.061292 systemd[1]: ignition-quench.service: Deactivated successfully. May 14 05:09:52.061443 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 14 05:09:52.064669 initrd-setup-root-after-ignition[1064]: grep: /sysroot/oem/oem-release: No such file or directory May 14 05:09:52.066828 initrd-setup-root-after-ignition[1070]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 14 05:09:52.068538 initrd-setup-root-after-ignition[1066]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 14 05:09:52.068538 initrd-setup-root-after-ignition[1066]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 14 05:09:52.071658 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. 
May 14 05:09:52.073190 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 14 05:09:52.075009 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 14 05:09:52.123392 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 14 05:09:52.124506 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 14 05:09:52.127075 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 14 05:09:52.127149 systemd[1]: Reached target initrd.target - Initrd Default Target. May 14 05:09:52.129126 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 14 05:09:52.131030 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 14 05:09:52.159754 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 14 05:09:52.162268 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 14 05:09:52.186553 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 14 05:09:52.188902 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 14 05:09:52.189068 systemd[1]: Stopped target timers.target - Timer Units. May 14 05:09:52.191255 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 14 05:09:52.191407 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 14 05:09:52.196005 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 14 05:09:52.196137 systemd[1]: Stopped target basic.target - Basic System. May 14 05:09:52.198059 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 14 05:09:52.198389 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 14 05:09:52.198878 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
May 14 05:09:52.199215 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. May 14 05:09:52.199719 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 14 05:09:52.200061 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 14 05:09:52.200420 systemd[1]: Stopped target sysinit.target - System Initialization. May 14 05:09:52.200741 systemd[1]: Stopped target local-fs.target - Local File Systems. May 14 05:09:52.214541 systemd[1]: Stopped target swap.target - Swaps. May 14 05:09:52.215476 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 14 05:09:52.215622 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 14 05:09:52.217600 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 14 05:09:52.217973 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 14 05:09:52.218263 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 14 05:09:52.223845 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 14 05:09:52.227392 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 14 05:09:52.227555 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 14 05:09:52.230585 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 14 05:09:52.230697 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 14 05:09:52.231818 systemd[1]: Stopped target paths.target - Path Units. May 14 05:09:52.233893 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 14 05:09:52.239423 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 14 05:09:52.240773 systemd[1]: Stopped target slices.target - Slice Units. May 14 05:09:52.243116 systemd[1]: Stopped target sockets.target - Socket Units. 
May 14 05:09:52.244011 systemd[1]: iscsid.socket: Deactivated successfully. May 14 05:09:52.244115 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 14 05:09:52.245742 systemd[1]: iscsiuio.socket: Deactivated successfully. May 14 05:09:52.245825 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 14 05:09:52.247452 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 14 05:09:52.247562 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 14 05:09:52.247921 systemd[1]: ignition-files.service: Deactivated successfully. May 14 05:09:52.248018 systemd[1]: Stopped ignition-files.service - Ignition (files). May 14 05:09:52.252854 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 14 05:09:52.256164 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 14 05:09:52.257873 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 14 05:09:52.257997 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 14 05:09:52.260115 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 14 05:09:52.260252 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 14 05:09:52.266389 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 14 05:09:52.266497 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 14 05:09:52.280500 ignition[1091]: INFO : Ignition 2.21.0 May 14 05:09:52.280500 ignition[1091]: INFO : Stage: umount May 14 05:09:52.282374 ignition[1091]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 05:09:52.282374 ignition[1091]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 05:09:52.282374 ignition[1091]: INFO : umount: umount passed May 14 05:09:52.282374 ignition[1091]: INFO : Ignition finished successfully May 14 05:09:52.284722 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
May 14 05:09:52.285303 systemd[1]: ignition-mount.service: Deactivated successfully. May 14 05:09:52.285430 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 14 05:09:52.285757 systemd[1]: Stopped target network.target - Network. May 14 05:09:52.289632 systemd[1]: ignition-disks.service: Deactivated successfully. May 14 05:09:52.289693 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 14 05:09:52.290839 systemd[1]: ignition-kargs.service: Deactivated successfully. May 14 05:09:52.290921 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 14 05:09:52.293133 systemd[1]: ignition-setup.service: Deactivated successfully. May 14 05:09:52.293192 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 14 05:09:52.293724 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 14 05:09:52.293768 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 14 05:09:52.294205 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 14 05:09:52.294706 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 14 05:09:52.311916 systemd[1]: systemd-resolved.service: Deactivated successfully. May 14 05:09:52.312058 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 14 05:09:52.316570 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 14 05:09:52.316850 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 14 05:09:52.316904 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 14 05:09:52.321268 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 14 05:09:52.323724 systemd[1]: systemd-networkd.service: Deactivated successfully. May 14 05:09:52.323850 systemd[1]: Stopped systemd-networkd.service - Network Configuration. 
May 14 05:09:52.327683 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 14 05:09:52.327860 systemd[1]: Stopped target network-pre.target - Preparation for Network. May 14 05:09:52.331032 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 14 05:09:52.331079 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 14 05:09:52.334298 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 14 05:09:52.334379 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 14 05:09:52.334428 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 14 05:09:52.334759 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 14 05:09:52.334798 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 14 05:09:52.340353 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 14 05:09:52.340400 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 14 05:09:52.340724 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 14 05:09:52.342460 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 14 05:09:52.360824 systemd[1]: systemd-udevd.service: Deactivated successfully. May 14 05:09:52.361071 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 14 05:09:52.362769 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 14 05:09:52.362855 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 14 05:09:52.364492 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 14 05:09:52.364538 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 14 05:09:52.366616 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
May 14 05:09:52.366676 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 14 05:09:52.367312 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 14 05:09:52.367458 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 14 05:09:52.368138 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 14 05:09:52.368191 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 14 05:09:52.369825 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 14 05:09:52.377382 systemd[1]: systemd-network-generator.service: Deactivated successfully. May 14 05:09:52.377437 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. May 14 05:09:52.383052 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 14 05:09:52.383111 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 14 05:09:52.385601 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 14 05:09:52.385648 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 14 05:09:52.390477 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 14 05:09:52.390549 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 14 05:09:52.393072 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 14 05:09:52.393123 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 14 05:09:52.395966 systemd[1]: network-cleanup.service: Deactivated successfully. May 14 05:09:52.396073 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 14 05:09:52.397812 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. 
May 14 05:09:52.397921 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 14 05:09:52.435306 systemd[1]: sysroot-boot.service: Deactivated successfully. May 14 05:09:52.435439 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 14 05:09:52.437511 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 14 05:09:52.438294 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 14 05:09:52.438363 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 14 05:09:52.444195 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 14 05:09:52.470616 systemd[1]: Switching root. May 14 05:09:52.517693 systemd-journald[220]: Journal stopped May 14 05:09:53.639682 systemd-journald[220]: Received SIGTERM from PID 1 (systemd). May 14 05:09:53.639757 kernel: SELinux: policy capability network_peer_controls=1 May 14 05:09:53.639771 kernel: SELinux: policy capability open_perms=1 May 14 05:09:53.639785 kernel: SELinux: policy capability extended_socket_class=1 May 14 05:09:53.639796 kernel: SELinux: policy capability always_check_network=0 May 14 05:09:53.639808 kernel: SELinux: policy capability cgroup_seclabel=1 May 14 05:09:53.639824 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 14 05:09:53.639836 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 14 05:09:53.639857 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 14 05:09:53.639868 kernel: SELinux: policy capability userspace_initial_context=0 May 14 05:09:53.639886 kernel: audit: type=1403 audit(1747199392.872:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 14 05:09:53.639898 systemd[1]: Successfully loaded SELinux policy in 47.018ms. May 14 05:09:53.639924 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.976ms. 
May 14 05:09:53.639937 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 14 05:09:53.639950 systemd[1]: Detected virtualization kvm. May 14 05:09:53.639961 systemd[1]: Detected architecture x86-64. May 14 05:09:53.639974 systemd[1]: Detected first boot. May 14 05:09:53.639985 systemd[1]: Initializing machine ID from VM UUID. May 14 05:09:53.639997 zram_generator::config[1136]: No configuration found. May 14 05:09:53.640010 kernel: Guest personality initialized and is inactive May 14 05:09:53.640023 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 14 05:09:53.640035 kernel: Initialized host personality May 14 05:09:53.640046 kernel: NET: Registered PF_VSOCK protocol family May 14 05:09:53.640057 systemd[1]: Populated /etc with preset unit settings. May 14 05:09:53.640070 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 14 05:09:53.640082 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 14 05:09:53.640094 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 14 05:09:53.640105 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 14 05:09:53.640118 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 14 05:09:53.640134 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 14 05:09:53.640146 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 14 05:09:53.640158 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 14 05:09:53.640170 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. 
May 14 05:09:53.640183 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 14 05:09:53.640195 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 14 05:09:53.640207 systemd[1]: Created slice user.slice - User and Session Slice. May 14 05:09:53.640219 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 14 05:09:53.640231 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 14 05:09:53.640245 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 14 05:09:53.640257 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 14 05:09:53.640270 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 14 05:09:53.640283 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 14 05:09:53.640295 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 14 05:09:53.640307 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 14 05:09:53.640379 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 14 05:09:53.640395 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 14 05:09:53.640407 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 14 05:09:53.640425 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 14 05:09:53.640436 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 14 05:09:53.640448 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 14 05:09:53.640462 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
May 14 05:09:53.640474 systemd[1]: Reached target slices.target - Slice Units. May 14 05:09:53.640486 systemd[1]: Reached target swap.target - Swaps. May 14 05:09:53.640498 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 14 05:09:53.640510 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 14 05:09:53.640524 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 14 05:09:53.640537 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 14 05:09:53.640549 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 14 05:09:53.640561 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 14 05:09:53.640573 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 14 05:09:53.640585 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 14 05:09:53.640597 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 14 05:09:53.640609 systemd[1]: Mounting media.mount - External Media Directory... May 14 05:09:53.640621 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 05:09:53.640635 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 14 05:09:53.640647 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 14 05:09:53.640659 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 14 05:09:53.640671 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 14 05:09:53.640683 systemd[1]: Reached target machines.target - Containers. May 14 05:09:53.640695 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
May 14 05:09:53.640708 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 14 05:09:53.640721 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 14 05:09:53.640736 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 14 05:09:53.640748 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 14 05:09:53.640760 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 14 05:09:53.640772 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 14 05:09:53.640784 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 14 05:09:53.640796 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 14 05:09:53.640809 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 14 05:09:53.640821 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 14 05:09:53.640835 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 14 05:09:53.640855 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 14 05:09:53.640868 systemd[1]: Stopped systemd-fsck-usr.service. May 14 05:09:53.640881 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 05:09:53.640893 systemd[1]: Starting systemd-journald.service - Journal Service... May 14 05:09:53.640905 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 14 05:09:53.640917 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
May 14 05:09:53.640930 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 14 05:09:53.640942 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 14 05:09:53.640956 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 14 05:09:53.640968 systemd[1]: verity-setup.service: Deactivated successfully. May 14 05:09:53.640980 kernel: ACPI: bus type drm_connector registered May 14 05:09:53.640996 systemd[1]: Stopped verity-setup.service. May 14 05:09:53.641009 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 05:09:53.641021 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 14 05:09:53.641032 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 14 05:09:53.641044 kernel: fuse: init (API version 7.41) May 14 05:09:53.641056 systemd[1]: Mounted media.mount - External Media Directory. May 14 05:09:53.641068 kernel: loop: module loaded May 14 05:09:53.641081 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 14 05:09:53.641094 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 14 05:09:53.641106 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 14 05:09:53.641118 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 14 05:09:53.641149 systemd-journald[1211]: Collecting audit messages is disabled. May 14 05:09:53.641171 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 14 05:09:53.641184 systemd-journald[1211]: Journal started May 14 05:09:53.641208 systemd-journald[1211]: Runtime Journal (/run/log/journal/0ba2adc9566e4d3db1bfed1d43acbe2f) is 6M, max 48.5M, 42.4M free. May 14 05:09:53.382234 systemd[1]: Queued start job for default target multi-user.target. 
May 14 05:09:53.402147 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 14 05:09:53.402591 systemd[1]: systemd-journald.service: Deactivated successfully.
May 14 05:09:53.642338 systemd[1]: Started systemd-journald.service - Journal Service.
May 14 05:09:53.644061 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 14 05:09:53.644294 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 14 05:09:53.645781 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 05:09:53.645999 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 05:09:53.647533 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 14 05:09:53.647739 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 14 05:09:53.649072 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 05:09:53.649287 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 05:09:53.650835 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 14 05:09:53.651056 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 14 05:09:53.652454 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 05:09:53.652679 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 05:09:53.654074 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 14 05:09:53.655519 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 14 05:09:53.657058 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 14 05:09:53.658599 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 14 05:09:53.672803 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 14 05:09:53.675543 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 14 05:09:53.677967 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 14 05:09:53.679198 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 14 05:09:53.679285 systemd[1]: Reached target local-fs.target - Local File Systems.
May 14 05:09:53.681400 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 14 05:09:53.689924 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 14 05:09:53.691076 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 05:09:53.693920 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 14 05:09:53.697604 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 14 05:09:53.698946 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 14 05:09:53.700809 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 14 05:09:53.702142 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 14 05:09:53.705137 systemd-journald[1211]: Time spent on flushing to /var/log/journal/0ba2adc9566e4d3db1bfed1d43acbe2f is 22.738ms for 1066 entries.
May 14 05:09:53.705137 systemd-journald[1211]: System Journal (/var/log/journal/0ba2adc9566e4d3db1bfed1d43acbe2f) is 8M, max 195.6M, 187.6M free.
May 14 05:09:53.747576 systemd-journald[1211]: Received client request to flush runtime journal.
May 14 05:09:53.747633 kernel: loop0: detected capacity change from 0 to 113872
May 14 05:09:53.705513 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 14 05:09:53.708310 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 14 05:09:53.711426 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 14 05:09:53.716487 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 05:09:53.718680 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 14 05:09:53.720216 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 14 05:09:53.726278 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 14 05:09:53.727744 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 14 05:09:53.733158 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 14 05:09:53.750451 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 14 05:09:53.752441 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 14 05:09:53.757795 systemd-tmpfiles[1256]: ACLs are not supported, ignoring.
May 14 05:09:53.757814 systemd-tmpfiles[1256]: ACLs are not supported, ignoring.
May 14 05:09:53.766079 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 14 05:09:53.765623 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 14 05:09:53.767706 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 14 05:09:53.772951 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 14 05:09:53.784691 kernel: loop1: detected capacity change from 0 to 146240
May 14 05:09:53.811508 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 14 05:09:53.814377 kernel: loop2: detected capacity change from 0 to 210664
May 14 05:09:53.815039 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 14 05:09:53.843083 systemd-tmpfiles[1276]: ACLs are not supported, ignoring.
May 14 05:09:53.843407 systemd-tmpfiles[1276]: ACLs are not supported, ignoring.
May 14 05:09:53.848388 kernel: loop3: detected capacity change from 0 to 113872
May 14 05:09:53.848565 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 05:09:53.857360 kernel: loop4: detected capacity change from 0 to 146240
May 14 05:09:53.872342 kernel: loop5: detected capacity change from 0 to 210664
May 14 05:09:53.878490 (sd-merge)[1279]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 14 05:09:53.879028 (sd-merge)[1279]: Merged extensions into '/usr'.
May 14 05:09:53.884613 systemd[1]: Reload requested from client PID 1255 ('systemd-sysext') (unit systemd-sysext.service)...
May 14 05:09:53.884726 systemd[1]: Reloading...
May 14 05:09:53.944518 zram_generator::config[1305]: No configuration found.
May 14 05:09:54.016371 ldconfig[1250]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 14 05:09:54.049717 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 05:09:54.135107 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 14 05:09:54.135936 systemd[1]: Reloading finished in 250 ms.
May 14 05:09:54.169999 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 14 05:09:54.171549 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 14 05:09:54.192731 systemd[1]: Starting ensure-sysext.service...
May 14 05:09:54.194889 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 14 05:09:54.205639 systemd[1]: Reload requested from client PID 1343 ('systemctl') (unit ensure-sysext.service)...
May 14 05:09:54.205659 systemd[1]: Reloading...
May 14 05:09:54.219421 systemd-tmpfiles[1344]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
May 14 05:09:54.219480 systemd-tmpfiles[1344]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
May 14 05:09:54.219827 systemd-tmpfiles[1344]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 14 05:09:54.220109 systemd-tmpfiles[1344]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 14 05:09:54.220999 systemd-tmpfiles[1344]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 14 05:09:54.221396 systemd-tmpfiles[1344]: ACLs are not supported, ignoring.
May 14 05:09:54.221467 systemd-tmpfiles[1344]: ACLs are not supported, ignoring.
May 14 05:09:54.225429 systemd-tmpfiles[1344]: Detected autofs mount point /boot during canonicalization of boot.
May 14 05:09:54.225516 systemd-tmpfiles[1344]: Skipping /boot
May 14 05:09:54.240064 systemd-tmpfiles[1344]: Detected autofs mount point /boot during canonicalization of boot.
May 14 05:09:54.240145 systemd-tmpfiles[1344]: Skipping /boot
May 14 05:09:54.260347 zram_generator::config[1370]: No configuration found.
May 14 05:09:54.352402 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 05:09:54.431331 systemd[1]: Reloading finished in 225 ms.
May 14 05:09:54.454982 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 14 05:09:54.482243 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 05:09:54.491844 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 14 05:09:54.494273 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 14 05:09:54.511684 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 14 05:09:54.515524 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 14 05:09:54.520430 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 05:09:54.523335 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 14 05:09:54.529460 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 05:09:54.529635 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 05:09:54.535709 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 05:09:54.537845 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 05:09:54.540749 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 05:09:54.541912 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 05:09:54.542020 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 05:09:54.548504 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 14 05:09:54.549793 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 05:09:54.551249 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 14 05:09:54.553141 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 05:09:54.553385 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 05:09:54.555257 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 05:09:54.555568 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 05:09:54.557611 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 05:09:54.557878 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 05:09:54.567190 augenrules[1443]: No rules
May 14 05:09:54.570219 systemd[1]: audit-rules.service: Deactivated successfully.
May 14 05:09:54.570696 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 14 05:09:54.573102 systemd-udevd[1415]: Using default interface naming scheme 'v255'.
May 14 05:09:54.574310 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 14 05:09:54.578980 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 05:09:54.579240 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 05:09:54.581679 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 05:09:54.586704 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 05:09:54.591711 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 05:09:54.592834 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 05:09:54.592949 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 05:09:54.594807 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 14 05:09:54.596375 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 05:09:54.597459 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 14 05:09:54.598006 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 14 05:09:54.601207 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 05:09:54.601492 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 05:09:54.603172 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 05:09:54.604272 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 05:09:54.605993 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 05:09:54.606260 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 05:09:54.611989 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 05:09:54.618074 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 14 05:09:54.632713 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 05:09:54.635421 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 14 05:09:54.636517 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 05:09:54.638564 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 05:09:54.640507 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 14 05:09:54.643455 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 05:09:54.654675 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 05:09:54.655831 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 05:09:54.655903 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 05:09:54.658825 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 14 05:09:54.659851 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 14 05:09:54.659877 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 05:09:54.660523 systemd[1]: Finished ensure-sysext.service.
May 14 05:09:54.661977 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 05:09:54.662195 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 05:09:54.663703 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 05:09:54.663911 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 05:09:54.670972 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 14 05:09:54.674587 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 14 05:09:54.676415 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 14 05:09:54.676621 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 14 05:09:54.678117 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 05:09:54.678983 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 05:09:54.683891 augenrules[1489]: /sbin/augenrules: No change
May 14 05:09:54.684057 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 14 05:09:54.695372 systemd-resolved[1413]: Positive Trust Anchors:
May 14 05:09:54.695389 systemd-resolved[1413]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 14 05:09:54.696464 augenrules[1522]: No rules
May 14 05:09:54.695421 systemd-resolved[1413]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 14 05:09:54.696557 systemd[1]: audit-rules.service: Deactivated successfully.
May 14 05:09:54.696875 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 14 05:09:54.705119 systemd-resolved[1413]: Defaulting to hostname 'linux'.
May 14 05:09:54.708069 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 14 05:09:54.709408 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 14 05:09:54.723805 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 14 05:09:54.752939 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 14 05:09:54.755550 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 14 05:09:54.772351 kernel: mousedev: PS/2 mouse device common for all mice
May 14 05:09:54.786336 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
May 14 05:09:54.789641 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 14 05:09:54.791377 kernel: ACPI: button: Power Button [PWRF]
May 14 05:09:54.799965 systemd-networkd[1497]: lo: Link UP
May 14 05:09:54.799979 systemd-networkd[1497]: lo: Gained carrier
May 14 05:09:54.801685 systemd-networkd[1497]: Enumeration completed
May 14 05:09:54.801793 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 14 05:09:54.802249 systemd-networkd[1497]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 05:09:54.802261 systemd-networkd[1497]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 14 05:09:54.802779 systemd-networkd[1497]: eth0: Link UP
May 14 05:09:54.803038 systemd-networkd[1497]: eth0: Gained carrier
May 14 05:09:54.803059 systemd-networkd[1497]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 05:09:54.803213 systemd[1]: Reached target network.target - Network.
May 14 05:09:54.808536 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 14 05:09:54.811701 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 14 05:09:54.812360 systemd-networkd[1497]: eth0: DHCPv4 address 10.0.0.84/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 14 05:09:54.823591 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
May 14 05:09:54.823846 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
May 14 05:09:54.824010 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 14 05:09:54.842376 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 14 05:09:54.843891 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 14 05:09:56.699965 systemd-resolved[1413]: Clock change detected. Flushing caches.
May 14 05:09:56.700003 systemd-timesyncd[1508]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 14 05:09:56.700047 systemd-timesyncd[1508]: Initial clock synchronization to Wed 2025-05-14 05:09:56.699925 UTC.
May 14 05:09:56.702038 systemd[1]: Reached target sysinit.target - System Initialization.
May 14 05:09:56.703297 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 14 05:09:56.704553 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 14 05:09:56.705794 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
May 14 05:09:56.706930 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 14 05:09:56.708165 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 14 05:09:56.708263 systemd[1]: Reached target paths.target - Path Units.
May 14 05:09:56.709156 systemd[1]: Reached target time-set.target - System Time Set.
May 14 05:09:56.710325 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 14 05:09:56.711532 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 14 05:09:56.712858 systemd[1]: Reached target timers.target - Timer Units.
May 14 05:09:56.715959 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 14 05:09:56.719535 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 14 05:09:56.724595 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 14 05:09:56.728256 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 14 05:09:56.729544 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 14 05:09:56.739646 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 14 05:09:56.741462 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 14 05:09:56.743604 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 14 05:09:56.756194 systemd[1]: Reached target sockets.target - Socket Units.
May 14 05:09:56.757183 systemd[1]: Reached target basic.target - Basic System.
May 14 05:09:56.758155 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 14 05:09:56.758184 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 14 05:09:56.759310 systemd[1]: Starting containerd.service - containerd container runtime...
May 14 05:09:56.761957 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 14 05:09:56.764912 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 14 05:09:56.782234 jq[1559]: false
May 14 05:09:56.805411 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 14 05:09:56.809838 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 14 05:09:56.810860 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 14 05:09:56.812839 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
May 14 05:09:56.814904 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 14 05:09:56.816848 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 14 05:09:56.818796 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 14 05:09:56.821929 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 14 05:09:56.831506 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Refreshing passwd entry cache
May 14 05:09:56.831518 oslogin_cache_refresh[1566]: Refreshing passwd entry cache
May 14 05:09:56.832681 systemd[1]: Starting systemd-logind.service - User Login Management...
May 14 05:09:56.834638 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 14 05:09:56.835219 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 14 05:09:56.836933 systemd[1]: Starting update-engine.service - Update Engine...
May 14 05:09:56.838840 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 14 05:09:56.841717 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 14 05:09:56.843327 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 14 05:09:56.843615 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 14 05:09:56.843948 extend-filesystems[1564]: Found loop3
May 14 05:09:56.845177 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 14 05:09:56.845348 extend-filesystems[1564]: Found loop4
May 14 05:09:56.847803 extend-filesystems[1564]: Found loop5
May 14 05:09:56.847803 extend-filesystems[1564]: Found sr0
May 14 05:09:56.847803 extend-filesystems[1564]: Found vda
May 14 05:09:56.847803 extend-filesystems[1564]: Found vda1
May 14 05:09:56.847803 extend-filesystems[1564]: Found vda2
May 14 05:09:56.847803 extend-filesystems[1564]: Found vda3
May 14 05:09:56.847803 extend-filesystems[1564]: Found usr
May 14 05:09:56.847803 extend-filesystems[1564]: Found vda4
May 14 05:09:56.847803 extend-filesystems[1564]: Found vda6
May 14 05:09:56.847803 extend-filesystems[1564]: Found vda7
May 14 05:09:56.847803 extend-filesystems[1564]: Found vda9
May 14 05:09:56.847803 extend-filesystems[1564]: Checking size of /dev/vda9
May 14 05:09:56.845418 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 14 05:09:56.861039 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Failure getting users, quitting
May 14 05:09:56.861039 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 14 05:09:56.861039 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Refreshing group entry cache
May 14 05:09:56.861039 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Failure getting groups, quitting
May 14 05:09:56.861039 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 14 05:09:56.848101 oslogin_cache_refresh[1566]: Failure getting users, quitting
May 14 05:09:56.858462 systemd[1]: motdgen.service: Deactivated successfully.
May 14 05:09:56.848124 oslogin_cache_refresh[1566]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 14 05:09:56.861100 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 14 05:09:56.848181 oslogin_cache_refresh[1566]: Refreshing group entry cache
May 14 05:09:56.857640 oslogin_cache_refresh[1566]: Failure getting groups, quitting
May 14 05:09:56.857651 oslogin_cache_refresh[1566]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 14 05:09:56.863770 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
May 14 05:09:56.865519 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
May 14 05:09:56.865662 jq[1580]: true
May 14 05:09:56.883112 update_engine[1577]: I20250514 05:09:56.883044 1577 main.cc:92] Flatcar Update Engine starting
May 14 05:09:56.891522 tar[1582]: linux-amd64/helm
May 14 05:09:56.891474 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 05:09:56.892337 extend-filesystems[1564]: Resized partition /dev/vda9
May 14 05:09:56.903337 jq[1592]: true
May 14 05:09:56.903570 (ntainerd)[1598]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 14 05:09:56.906994 extend-filesystems[1604]: resize2fs 1.47.2 (1-Jan-2025)
May 14 05:09:56.942350 kernel: kvm_amd: TSC scaling supported
May 14 05:09:56.942476 kernel: kvm_amd: Nested Virtualization enabled
May 14 05:09:56.942556 kernel: kvm_amd: Nested Paging enabled
May 14 05:09:56.942586 kernel: kvm_amd: LBR virtualization supported
May 14 05:09:56.942613 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
May 14 05:09:56.942635 kernel: kvm_amd: Virtual GIF supported
May 14 05:09:56.957787 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 14 05:09:56.966483 dbus-daemon[1557]: [system] SELinux support is enabled
May 14 05:09:56.967924 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 14 05:09:56.973978 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 14 05:09:56.974409 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 14 05:09:56.974496 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 14 05:09:56.974510 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 14 05:09:57.001861 systemd-logind[1573]: Watching system buttons on /dev/input/event2 (Power Button)
May 14 05:09:57.001890 systemd-logind[1573]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 14 05:09:57.005583 systemd[1]: Started update-engine.service - Update Engine.
May 14 05:09:57.005786 systemd-logind[1573]: New seat seat0.
May 14 05:09:57.009846 update_engine[1577]: I20250514 05:09:57.009459 1577 update_check_scheduler.cc:74] Next update check in 7m59s
May 14 05:09:57.017783 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 14 05:09:57.011879 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 14 05:09:57.013838 systemd[1]: Started systemd-logind.service - User Login Management.
May 14 05:09:57.038943 kernel: EDAC MC: Ver: 3.0.0
May 14 05:09:57.039336 extend-filesystems[1604]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 14 05:09:57.039336 extend-filesystems[1604]: old_desc_blocks = 1, new_desc_blocks = 1
May 14 05:09:57.039336 extend-filesystems[1604]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 14 05:09:57.052131 extend-filesystems[1564]: Resized filesystem in /dev/vda9
May 14 05:09:57.042972 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 14 05:09:57.054641 bash[1621]: Updated "/home/core/.ssh/authorized_keys"
May 14 05:09:57.049234 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 14 05:09:57.064321 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 14 05:09:57.082481 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 05:09:57.101215 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 14 05:09:57.121995 locksmithd[1625]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 14 05:09:57.152089 containerd[1598]: time="2025-05-14T05:09:57Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 14 05:09:57.156721 containerd[1598]: time="2025-05-14T05:09:57.155722383Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 14 05:09:57.164580 containerd[1598]: time="2025-05-14T05:09:57.164556113Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.068µs" May 14 05:09:57.164640 containerd[1598]: time="2025-05-14T05:09:57.164625863Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 14 05:09:57.164689 containerd[1598]: time="2025-05-14T05:09:57.164677410Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 14 05:09:57.164912 containerd[1598]: time="2025-05-14T05:09:57.164895710Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 14 05:09:57.164967 containerd[1598]: time="2025-05-14T05:09:57.164956153Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 14 05:09:57.165032 containerd[1598]: time="2025-05-14T05:09:57.165020664Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 14 05:09:57.165135 containerd[1598]: time="2025-05-14T05:09:57.165120501Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 14 05:09:57.165198 containerd[1598]: time="2025-05-14T05:09:57.165177157Z" 
level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 14 05:09:57.165485 containerd[1598]: time="2025-05-14T05:09:57.165463645Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 14 05:09:57.165537 containerd[1598]: time="2025-05-14T05:09:57.165525290Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 14 05:09:57.165581 containerd[1598]: time="2025-05-14T05:09:57.165569553Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 14 05:09:57.165630 containerd[1598]: time="2025-05-14T05:09:57.165618485Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 14 05:09:57.165780 containerd[1598]: time="2025-05-14T05:09:57.165765511Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 14 05:09:57.166057 containerd[1598]: time="2025-05-14T05:09:57.166037601Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 14 05:09:57.166137 containerd[1598]: time="2025-05-14T05:09:57.166123753Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 14 05:09:57.166199 containerd[1598]: time="2025-05-14T05:09:57.166178536Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 14 05:09:57.166280 containerd[1598]: time="2025-05-14T05:09:57.166266631Z" level=info msg="loading plugin" 
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 14 05:09:57.166685 containerd[1598]: time="2025-05-14T05:09:57.166668875Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 14 05:09:57.166806 containerd[1598]: time="2025-05-14T05:09:57.166793329Z" level=info msg="metadata content store policy set" policy=shared May 14 05:09:57.173385 containerd[1598]: time="2025-05-14T05:09:57.173191781Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 14 05:09:57.173385 containerd[1598]: time="2025-05-14T05:09:57.173229682Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 14 05:09:57.173385 containerd[1598]: time="2025-05-14T05:09:57.173244019Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 14 05:09:57.173385 containerd[1598]: time="2025-05-14T05:09:57.173255620Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 14 05:09:57.173385 containerd[1598]: time="2025-05-14T05:09:57.173266260Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 14 05:09:57.173385 containerd[1598]: time="2025-05-14T05:09:57.173287340Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 14 05:09:57.173385 containerd[1598]: time="2025-05-14T05:09:57.173306085Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 14 05:09:57.173385 containerd[1598]: time="2025-05-14T05:09:57.173323007Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 14 05:09:57.173385 containerd[1598]: time="2025-05-14T05:09:57.173333156Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 14 05:09:57.173385 containerd[1598]: time="2025-05-14T05:09:57.173344287Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 14 05:09:57.173385 containerd[1598]: time="2025-05-14T05:09:57.173353354Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 14 05:09:57.173912 containerd[1598]: time="2025-05-14T05:09:57.173891964Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 14 05:09:57.174664 sshd_keygen[1599]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 14 05:09:57.174902 containerd[1598]: time="2025-05-14T05:09:57.174831105Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 14 05:09:57.174902 containerd[1598]: time="2025-05-14T05:09:57.174888193Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 14 05:09:57.174954 containerd[1598]: time="2025-05-14T05:09:57.174907449Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 14 05:09:57.174954 containerd[1598]: time="2025-05-14T05:09:57.174920373Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 14 05:09:57.174954 containerd[1598]: time="2025-05-14T05:09:57.174934900Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 14 05:09:57.174954 containerd[1598]: time="2025-05-14T05:09:57.174948896Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 14 05:09:57.175032 containerd[1598]: time="2025-05-14T05:09:57.174963173Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 14 05:09:57.175032 containerd[1598]: time="2025-05-14T05:09:57.174974374Z" 
level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 14 05:09:57.175032 containerd[1598]: time="2025-05-14T05:09:57.174989252Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 14 05:09:57.175032 containerd[1598]: time="2025-05-14T05:09:57.175008749Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 14 05:09:57.175032 containerd[1598]: time="2025-05-14T05:09:57.175022324Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 14 05:09:57.175146 containerd[1598]: time="2025-05-14T05:09:57.175093578Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 14 05:09:57.175173 containerd[1598]: time="2025-05-14T05:09:57.175150384Z" level=info msg="Start snapshots syncer" May 14 05:09:57.175200 containerd[1598]: time="2025-05-14T05:09:57.175167707Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 14 05:09:57.175553 containerd[1598]: time="2025-05-14T05:09:57.175511171Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 14 05:09:57.175658 containerd[1598]: time="2025-05-14T05:09:57.175570873Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 14 05:09:57.175680 containerd[1598]: time="2025-05-14T05:09:57.175653769Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 14 05:09:57.175828 containerd[1598]: time="2025-05-14T05:09:57.175773663Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 14 05:09:57.175857 containerd[1598]: time="2025-05-14T05:09:57.175840359Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 14 05:09:57.175877 containerd[1598]: time="2025-05-14T05:09:57.175856308Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 14 05:09:57.175897 containerd[1598]: time="2025-05-14T05:09:57.175875314Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 14 05:09:57.175897 containerd[1598]: time="2025-05-14T05:09:57.175890362Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 14 05:09:57.175966 containerd[1598]: time="2025-05-14T05:09:57.175901002Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 14 05:09:57.175966 containerd[1598]: time="2025-05-14T05:09:57.175914157Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 14 05:09:57.175966 containerd[1598]: time="2025-05-14T05:09:57.175938603Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 14 05:09:57.175966 containerd[1598]: time="2025-05-14T05:09:57.175952108Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 14 05:09:57.176037 containerd[1598]: time="2025-05-14T05:09:57.175976644Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 14 05:09:57.177338 containerd[1598]: time="2025-05-14T05:09:57.177302491Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 14 05:09:57.177338 containerd[1598]: time="2025-05-14T05:09:57.177331796Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 14 05:09:57.177390 containerd[1598]: time="2025-05-14T05:09:57.177341805Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 14 05:09:57.177390 containerd[1598]: time="2025-05-14T05:09:57.177351293Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 14 05:09:57.177390 containerd[1598]: time="2025-05-14T05:09:57.177358977Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 14 05:09:57.177390 containerd[1598]: time="2025-05-14T05:09:57.177367593Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 14 05:09:57.177390 containerd[1598]: time="2025-05-14T05:09:57.177378023Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 14 05:09:57.177486 containerd[1598]: time="2025-05-14T05:09:57.177395606Z" level=info msg="runtime interface created" May 14 05:09:57.177486 containerd[1598]: time="2025-05-14T05:09:57.177401367Z" level=info msg="created NRI interface" May 14 05:09:57.177486 containerd[1598]: time="2025-05-14T05:09:57.177408871Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 14 05:09:57.177486 containerd[1598]: time="2025-05-14T05:09:57.177419721Z" level=info msg="Connect containerd service" May 14 05:09:57.177486 containerd[1598]: time="2025-05-14T05:09:57.177444277Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 14 05:09:57.178299 
containerd[1598]: time="2025-05-14T05:09:57.178276528Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 05:09:57.199696 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 14 05:09:57.202821 systemd[1]: Starting issuegen.service - Generate /run/issue... May 14 05:09:57.226372 systemd[1]: issuegen.service: Deactivated successfully. May 14 05:09:57.226635 systemd[1]: Finished issuegen.service - Generate /run/issue. May 14 05:09:57.230649 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 14 05:09:57.258249 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 14 05:09:57.261690 systemd[1]: Started getty@tty1.service - Getty on tty1. May 14 05:09:57.264435 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 14 05:09:57.265910 systemd[1]: Reached target getty.target - Login Prompts. May 14 05:09:57.272016 containerd[1598]: time="2025-05-14T05:09:57.271955864Z" level=info msg="Start subscribing containerd event" May 14 05:09:57.272064 containerd[1598]: time="2025-05-14T05:09:57.272028460Z" level=info msg="Start recovering state" May 14 05:09:57.272252 containerd[1598]: time="2025-05-14T05:09:57.272206354Z" level=info msg="Start event monitor" May 14 05:09:57.272252 containerd[1598]: time="2025-05-14T05:09:57.272230539Z" level=info msg="Start cni network conf syncer for default" May 14 05:09:57.272252 containerd[1598]: time="2025-05-14T05:09:57.272240267Z" level=info msg="Start streaming server" May 14 05:09:57.272252 containerd[1598]: time="2025-05-14T05:09:57.272250386Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 14 05:09:57.272356 containerd[1598]: time="2025-05-14T05:09:57.272257840Z" level=info msg="runtime interface starting up..." 
May 14 05:09:57.272356 containerd[1598]: time="2025-05-14T05:09:57.272276134Z" level=info msg="starting plugins..." May 14 05:09:57.272356 containerd[1598]: time="2025-05-14T05:09:57.272278960Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 14 05:09:57.272356 containerd[1598]: time="2025-05-14T05:09:57.272351185Z" level=info msg=serving... address=/run/containerd/containerd.sock May 14 05:09:57.272430 containerd[1598]: time="2025-05-14T05:09:57.272292485Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 14 05:09:57.272636 containerd[1598]: time="2025-05-14T05:09:57.272605442Z" level=info msg="containerd successfully booted in 0.120974s" May 14 05:09:57.272667 systemd[1]: Started containerd.service - containerd container runtime. May 14 05:09:57.372533 tar[1582]: linux-amd64/LICENSE May 14 05:09:57.372662 tar[1582]: linux-amd64/README.md May 14 05:09:57.393874 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 14 05:09:57.760849 systemd-networkd[1497]: eth0: Gained IPv6LL May 14 05:09:57.764260 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 14 05:09:57.766084 systemd[1]: Reached target network-online.target - Network is Online. May 14 05:09:57.768671 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 14 05:09:57.770998 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 05:09:57.787146 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 14 05:09:57.809515 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 14 05:09:57.811144 systemd[1]: coreos-metadata.service: Deactivated successfully. May 14 05:09:57.811400 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 14 05:09:57.814258 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
May 14 05:09:58.408308 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 05:09:58.409898 systemd[1]: Reached target multi-user.target - Multi-User System. May 14 05:09:58.411140 systemd[1]: Startup finished in 2.849s (kernel) + 6.229s (initrd) + 3.728s (userspace) = 12.807s. May 14 05:09:58.414189 (kubelet)[1705]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 05:09:58.840288 kubelet[1705]: E0514 05:09:58.840170 1705 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 05:09:58.843908 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 05:09:58.844091 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 05:09:58.844456 systemd[1]: kubelet.service: Consumed 903ms CPU time, 243.8M memory peak. May 14 05:10:02.076471 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 14 05:10:02.077677 systemd[1]: Started sshd@0-10.0.0.84:22-10.0.0.1:53770.service - OpenSSH per-connection server daemon (10.0.0.1:53770). May 14 05:10:02.148541 sshd[1719]: Accepted publickey for core from 10.0.0.1 port 53770 ssh2: RSA SHA256:9779yoEmBEYtokxLadky4y4rhX8tiwkjz8vWSk0iWXU May 14 05:10:02.150296 sshd-session[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 05:10:02.156385 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 14 05:10:02.157404 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 14 05:10:02.164077 systemd-logind[1573]: New session 1 of user core. 
May 14 05:10:02.181514 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 14 05:10:02.184467 systemd[1]: Starting user@500.service - User Manager for UID 500... May 14 05:10:02.299783 (systemd)[1723]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 14 05:10:02.302130 systemd-logind[1573]: New session c1 of user core. May 14 05:10:02.453372 systemd[1723]: Queued start job for default target default.target. May 14 05:10:02.468962 systemd[1723]: Created slice app.slice - User Application Slice. May 14 05:10:02.468987 systemd[1723]: Reached target paths.target - Paths. May 14 05:10:02.469026 systemd[1723]: Reached target timers.target - Timers. May 14 05:10:02.470503 systemd[1723]: Starting dbus.socket - D-Bus User Message Bus Socket... May 14 05:10:02.480905 systemd[1723]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 14 05:10:02.480977 systemd[1723]: Reached target sockets.target - Sockets. May 14 05:10:02.481018 systemd[1723]: Reached target basic.target - Basic System. May 14 05:10:02.481076 systemd[1723]: Reached target default.target - Main User Target. May 14 05:10:02.481116 systemd[1723]: Startup finished in 172ms. May 14 05:10:02.481475 systemd[1]: Started user@500.service - User Manager for UID 500. May 14 05:10:02.483083 systemd[1]: Started session-1.scope - Session 1 of User core. May 14 05:10:02.547686 systemd[1]: Started sshd@1-10.0.0.84:22-10.0.0.1:53782.service - OpenSSH per-connection server daemon (10.0.0.1:53782). May 14 05:10:02.597399 sshd[1734]: Accepted publickey for core from 10.0.0.1 port 53782 ssh2: RSA SHA256:9779yoEmBEYtokxLadky4y4rhX8tiwkjz8vWSk0iWXU May 14 05:10:02.598720 sshd-session[1734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 05:10:02.602678 systemd-logind[1573]: New session 2 of user core. May 14 05:10:02.612840 systemd[1]: Started session-2.scope - Session 2 of User core. 
May 14 05:10:02.664791 sshd[1736]: Connection closed by 10.0.0.1 port 53782 May 14 05:10:02.665094 sshd-session[1734]: pam_unix(sshd:session): session closed for user core May 14 05:10:02.682180 systemd[1]: sshd@1-10.0.0.84:22-10.0.0.1:53782.service: Deactivated successfully. May 14 05:10:02.683876 systemd[1]: session-2.scope: Deactivated successfully. May 14 05:10:02.684541 systemd-logind[1573]: Session 2 logged out. Waiting for processes to exit. May 14 05:10:02.687015 systemd[1]: Started sshd@2-10.0.0.84:22-10.0.0.1:53788.service - OpenSSH per-connection server daemon (10.0.0.1:53788). May 14 05:10:02.687513 systemd-logind[1573]: Removed session 2. May 14 05:10:02.742101 sshd[1742]: Accepted publickey for core from 10.0.0.1 port 53788 ssh2: RSA SHA256:9779yoEmBEYtokxLadky4y4rhX8tiwkjz8vWSk0iWXU May 14 05:10:02.743602 sshd-session[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 05:10:02.747414 systemd-logind[1573]: New session 3 of user core. May 14 05:10:02.756826 systemd[1]: Started session-3.scope - Session 3 of User core. May 14 05:10:02.805294 sshd[1744]: Connection closed by 10.0.0.1 port 53788 May 14 05:10:02.805632 sshd-session[1742]: pam_unix(sshd:session): session closed for user core May 14 05:10:02.818044 systemd[1]: sshd@2-10.0.0.84:22-10.0.0.1:53788.service: Deactivated successfully. May 14 05:10:02.819634 systemd[1]: session-3.scope: Deactivated successfully. May 14 05:10:02.820375 systemd-logind[1573]: Session 3 logged out. Waiting for processes to exit. May 14 05:10:02.823246 systemd[1]: Started sshd@3-10.0.0.84:22-10.0.0.1:53794.service - OpenSSH per-connection server daemon (10.0.0.1:53794). May 14 05:10:02.823743 systemd-logind[1573]: Removed session 3. 
May 14 05:10:02.876989 sshd[1750]: Accepted publickey for core from 10.0.0.1 port 53794 ssh2: RSA SHA256:9779yoEmBEYtokxLadky4y4rhX8tiwkjz8vWSk0iWXU May 14 05:10:02.878262 sshd-session[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 05:10:02.882028 systemd-logind[1573]: New session 4 of user core. May 14 05:10:02.891811 systemd[1]: Started session-4.scope - Session 4 of User core. May 14 05:10:02.943856 sshd[1752]: Connection closed by 10.0.0.1 port 53794 May 14 05:10:02.944151 sshd-session[1750]: pam_unix(sshd:session): session closed for user core May 14 05:10:02.962412 systemd[1]: sshd@3-10.0.0.84:22-10.0.0.1:53794.service: Deactivated successfully. May 14 05:10:02.964168 systemd[1]: session-4.scope: Deactivated successfully. May 14 05:10:02.964826 systemd-logind[1573]: Session 4 logged out. Waiting for processes to exit. May 14 05:10:02.967740 systemd[1]: Started sshd@4-10.0.0.84:22-10.0.0.1:53800.service - OpenSSH per-connection server daemon (10.0.0.1:53800). May 14 05:10:02.968242 systemd-logind[1573]: Removed session 4. May 14 05:10:03.014191 sshd[1758]: Accepted publickey for core from 10.0.0.1 port 53800 ssh2: RSA SHA256:9779yoEmBEYtokxLadky4y4rhX8tiwkjz8vWSk0iWXU May 14 05:10:03.015410 sshd-session[1758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 05:10:03.019446 systemd-logind[1573]: New session 5 of user core. May 14 05:10:03.029830 systemd[1]: Started session-5.scope - Session 5 of User core. 
May 14 05:10:03.086310 sudo[1761]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 14 05:10:03.086619 sudo[1761]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 05:10:03.108085 sudo[1761]: pam_unix(sudo:session): session closed for user root May 14 05:10:03.109576 sshd[1760]: Connection closed by 10.0.0.1 port 53800 May 14 05:10:03.109917 sshd-session[1758]: pam_unix(sshd:session): session closed for user core May 14 05:10:03.125263 systemd[1]: sshd@4-10.0.0.84:22-10.0.0.1:53800.service: Deactivated successfully. May 14 05:10:03.126975 systemd[1]: session-5.scope: Deactivated successfully. May 14 05:10:03.127648 systemd-logind[1573]: Session 5 logged out. Waiting for processes to exit. May 14 05:10:03.130206 systemd[1]: Started sshd@5-10.0.0.84:22-10.0.0.1:53802.service - OpenSSH per-connection server daemon (10.0.0.1:53802). May 14 05:10:03.130697 systemd-logind[1573]: Removed session 5. May 14 05:10:03.193527 sshd[1767]: Accepted publickey for core from 10.0.0.1 port 53802 ssh2: RSA SHA256:9779yoEmBEYtokxLadky4y4rhX8tiwkjz8vWSk0iWXU May 14 05:10:03.194869 sshd-session[1767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 05:10:03.198834 systemd-logind[1573]: New session 6 of user core. May 14 05:10:03.209841 systemd[1]: Started session-6.scope - Session 6 of User core. 
May 14 05:10:03.262000 sudo[1771]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 14 05:10:03.262305 sudo[1771]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 05:10:03.485842 sudo[1771]: pam_unix(sudo:session): session closed for user root May 14 05:10:03.491949 sudo[1770]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 14 05:10:03.492257 sudo[1770]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 05:10:03.501930 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 14 05:10:03.556298 augenrules[1793]: No rules May 14 05:10:03.558053 systemd[1]: audit-rules.service: Deactivated successfully. May 14 05:10:03.558306 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 14 05:10:03.559419 sudo[1770]: pam_unix(sudo:session): session closed for user root May 14 05:10:03.560912 sshd[1769]: Connection closed by 10.0.0.1 port 53802 May 14 05:10:03.561176 sshd-session[1767]: pam_unix(sshd:session): session closed for user core May 14 05:10:03.573318 systemd[1]: sshd@5-10.0.0.84:22-10.0.0.1:53802.service: Deactivated successfully. May 14 05:10:03.575160 systemd[1]: session-6.scope: Deactivated successfully. May 14 05:10:03.575906 systemd-logind[1573]: Session 6 logged out. Waiting for processes to exit. May 14 05:10:03.578765 systemd[1]: Started sshd@6-10.0.0.84:22-10.0.0.1:53818.service - OpenSSH per-connection server daemon (10.0.0.1:53818). May 14 05:10:03.579281 systemd-logind[1573]: Removed session 6. May 14 05:10:03.625338 sshd[1802]: Accepted publickey for core from 10.0.0.1 port 53818 ssh2: RSA SHA256:9779yoEmBEYtokxLadky4y4rhX8tiwkjz8vWSk0iWXU May 14 05:10:03.626603 sshd-session[1802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 05:10:03.630697 systemd-logind[1573]: New session 7 of user core. 
May 14 05:10:03.636819 systemd[1]: Started session-7.scope - Session 7 of User core. May 14 05:10:03.688836 sudo[1805]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 14 05:10:03.689155 sudo[1805]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 05:10:03.984289 systemd[1]: Starting docker.service - Docker Application Container Engine... May 14 05:10:04.005016 (dockerd)[1825]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 14 05:10:04.217645 dockerd[1825]: time="2025-05-14T05:10:04.217577525Z" level=info msg="Starting up" May 14 05:10:04.219314 dockerd[1825]: time="2025-05-14T05:10:04.219270360Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 14 05:10:04.284961 dockerd[1825]: time="2025-05-14T05:10:04.284849167Z" level=info msg="Loading containers: start." May 14 05:10:04.294732 kernel: Initializing XFRM netlink socket May 14 05:10:04.525686 systemd-networkd[1497]: docker0: Link UP May 14 05:10:04.531307 dockerd[1825]: time="2025-05-14T05:10:04.531260184Z" level=info msg="Loading containers: done." May 14 05:10:04.544350 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3434258742-merged.mount: Deactivated successfully. 
May 14 05:10:04.546506 dockerd[1825]: time="2025-05-14T05:10:04.546460686Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 14 05:10:04.546572 dockerd[1825]: time="2025-05-14T05:10:04.546548550Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 14 05:10:04.546682 dockerd[1825]: time="2025-05-14T05:10:04.546658907Z" level=info msg="Initializing buildkit" May 14 05:10:04.575426 dockerd[1825]: time="2025-05-14T05:10:04.575382875Z" level=info msg="Completed buildkit initialization" May 14 05:10:04.581793 dockerd[1825]: time="2025-05-14T05:10:04.581740190Z" level=info msg="Daemon has completed initialization" May 14 05:10:04.581919 dockerd[1825]: time="2025-05-14T05:10:04.581834276Z" level=info msg="API listen on /run/docker.sock" May 14 05:10:04.581925 systemd[1]: Started docker.service - Docker Application Container Engine. May 14 05:10:05.702401 containerd[1598]: time="2025-05-14T05:10:05.702362337Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 14 05:10:06.275390 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4025929905.mount: Deactivated successfully. 
May 14 05:10:07.364269 containerd[1598]: time="2025-05-14T05:10:07.364211521Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:10:07.365077 containerd[1598]: time="2025-05-14T05:10:07.365009918Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=32674873" May 14 05:10:07.366246 containerd[1598]: time="2025-05-14T05:10:07.366192316Z" level=info msg="ImageCreate event name:\"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:10:07.368601 containerd[1598]: time="2025-05-14T05:10:07.368566680Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:10:07.369418 containerd[1598]: time="2025-05-14T05:10:07.369383822Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"32671673\" in 1.666984947s" May 14 05:10:07.369481 containerd[1598]: time="2025-05-14T05:10:07.369420311Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\"" May 14 05:10:07.386729 containerd[1598]: time="2025-05-14T05:10:07.386674445Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" May 14 05:10:08.858570 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
May 14 05:10:08.860228 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 05:10:09.363549 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 05:10:09.367100 (kubelet)[2118]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 05:10:09.452675 containerd[1598]: time="2025-05-14T05:10:09.452631219Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:10:09.453771 containerd[1598]: time="2025-05-14T05:10:09.453685767Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=29617534" May 14 05:10:09.454881 containerd[1598]: time="2025-05-14T05:10:09.454851163Z" level=info msg="ImageCreate event name:\"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:10:09.457546 containerd[1598]: time="2025-05-14T05:10:09.457509099Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:10:09.458292 containerd[1598]: time="2025-05-14T05:10:09.458263954Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"31105907\" in 2.07154146s" May 14 05:10:09.458292 containerd[1598]: time="2025-05-14T05:10:09.458287539Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image 
reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\"" May 14 05:10:09.479923 kubelet[2118]: E0514 05:10:09.479868 2118 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 05:10:09.480873 containerd[1598]: time="2025-05-14T05:10:09.480038447Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" May 14 05:10:09.486456 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 05:10:09.486644 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 05:10:09.487006 systemd[1]: kubelet.service: Consumed 202ms CPU time, 96.9M memory peak. May 14 05:10:10.434963 containerd[1598]: time="2025-05-14T05:10:10.434901367Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:10:10.435790 containerd[1598]: time="2025-05-14T05:10:10.435750991Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=17903682" May 14 05:10:10.436920 containerd[1598]: time="2025-05-14T05:10:10.436892702Z" level=info msg="ImageCreate event name:\"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:10:10.439372 containerd[1598]: time="2025-05-14T05:10:10.439323181Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:10:10.440241 containerd[1598]: time="2025-05-14T05:10:10.440208151Z" level=info msg="Pulled image 
\"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"19392073\" in 960.137634ms" May 14 05:10:10.440241 containerd[1598]: time="2025-05-14T05:10:10.440236564Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\"" May 14 05:10:10.460211 containerd[1598]: time="2025-05-14T05:10:10.460176506Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 14 05:10:11.437357 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2010577494.mount: Deactivated successfully. May 14 05:10:11.682525 containerd[1598]: time="2025-05-14T05:10:11.682472264Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:10:11.683465 containerd[1598]: time="2025-05-14T05:10:11.683435791Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=29185817" May 14 05:10:11.684658 containerd[1598]: time="2025-05-14T05:10:11.684624951Z" level=info msg="ImageCreate event name:\"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:10:11.686891 containerd[1598]: time="2025-05-14T05:10:11.686855625Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:10:11.689503 containerd[1598]: time="2025-05-14T05:10:11.689260266Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id 
\"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"29184836\" in 1.229046981s" May 14 05:10:11.689503 containerd[1598]: time="2025-05-14T05:10:11.689294159Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\"" May 14 05:10:11.707719 containerd[1598]: time="2025-05-14T05:10:11.707662664Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 14 05:10:12.305569 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3782240104.mount: Deactivated successfully. May 14 05:10:13.211948 containerd[1598]: time="2025-05-14T05:10:13.211882249Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" May 14 05:10:13.212908 containerd[1598]: time="2025-05-14T05:10:13.212023885Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:10:13.213542 containerd[1598]: time="2025-05-14T05:10:13.213485306Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:10:13.216256 containerd[1598]: time="2025-05-14T05:10:13.216224053Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:10:13.217095 containerd[1598]: time="2025-05-14T05:10:13.217038401Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id 
\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.509341543s" May 14 05:10:13.217095 containerd[1598]: time="2025-05-14T05:10:13.217086090Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 14 05:10:13.234955 containerd[1598]: time="2025-05-14T05:10:13.234922887Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 14 05:10:13.675952 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2002466338.mount: Deactivated successfully. May 14 05:10:13.681132 containerd[1598]: time="2025-05-14T05:10:13.681079720Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:10:13.681818 containerd[1598]: time="2025-05-14T05:10:13.681776336Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" May 14 05:10:13.682939 containerd[1598]: time="2025-05-14T05:10:13.682902649Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:10:13.684649 containerd[1598]: time="2025-05-14T05:10:13.684617736Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:10:13.685231 containerd[1598]: time="2025-05-14T05:10:13.685193256Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest 
\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 450.238509ms" May 14 05:10:13.685231 containerd[1598]: time="2025-05-14T05:10:13.685224775Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" May 14 05:10:13.704024 containerd[1598]: time="2025-05-14T05:10:13.703982850Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 14 05:10:14.210428 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1221910365.mount: Deactivated successfully. May 14 05:10:15.869936 containerd[1598]: time="2025-05-14T05:10:15.869871017Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:10:15.870770 containerd[1598]: time="2025-05-14T05:10:15.870721592Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" May 14 05:10:15.872135 containerd[1598]: time="2025-05-14T05:10:15.872074450Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:10:15.874540 containerd[1598]: time="2025-05-14T05:10:15.874499038Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:10:15.875578 containerd[1598]: time="2025-05-14T05:10:15.875538457Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.171506616s" May 14 
05:10:15.875578 containerd[1598]: time="2025-05-14T05:10:15.875573783Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" May 14 05:10:18.832874 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 05:10:18.833033 systemd[1]: kubelet.service: Consumed 202ms CPU time, 96.9M memory peak. May 14 05:10:18.835177 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 05:10:18.852943 systemd[1]: Reload requested from client PID 2374 ('systemctl') (unit session-7.scope)... May 14 05:10:18.852961 systemd[1]: Reloading... May 14 05:10:18.941730 zram_generator::config[2423]: No configuration found. May 14 05:10:19.051067 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 05:10:19.165255 systemd[1]: Reloading finished in 311 ms. May 14 05:10:19.236394 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 14 05:10:19.236491 systemd[1]: kubelet.service: Failed with result 'signal'. May 14 05:10:19.236803 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 05:10:19.236846 systemd[1]: kubelet.service: Consumed 127ms CPU time, 83.6M memory peak. May 14 05:10:19.238381 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 05:10:19.395922 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 05:10:19.400441 (kubelet)[2467]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 05:10:19.443963 kubelet[2467]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 05:10:19.443963 kubelet[2467]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 14 05:10:19.443963 kubelet[2467]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 05:10:19.445082 kubelet[2467]: I0514 05:10:19.445020 2467 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 05:10:19.717374 kubelet[2467]: I0514 05:10:19.717262 2467 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 14 05:10:19.717374 kubelet[2467]: I0514 05:10:19.717299 2467 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 05:10:19.717525 kubelet[2467]: I0514 05:10:19.717508 2467 server.go:927] "Client rotation is on, will bootstrap in background" May 14 05:10:19.735917 kubelet[2467]: I0514 05:10:19.735876 2467 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 05:10:19.737276 kubelet[2467]: E0514 05:10:19.737228 2467 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.84:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.84:6443: connect: connection refused May 14 05:10:19.748778 kubelet[2467]: I0514 05:10:19.748737 2467 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 14 05:10:19.750402 kubelet[2467]: I0514 05:10:19.750356 2467 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 05:10:19.750587 kubelet[2467]: I0514 05:10:19.750394 2467 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 14 05:10:19.750977 kubelet[2467]: I0514 05:10:19.750953 2467 topology_manager.go:138] "Creating topology manager with none policy" May 14 
05:10:19.750977 kubelet[2467]: I0514 05:10:19.750969 2467 container_manager_linux.go:301] "Creating device plugin manager" May 14 05:10:19.751140 kubelet[2467]: I0514 05:10:19.751117 2467 state_mem.go:36] "Initialized new in-memory state store" May 14 05:10:19.751748 kubelet[2467]: I0514 05:10:19.751724 2467 kubelet.go:400] "Attempting to sync node with API server" May 14 05:10:19.751748 kubelet[2467]: I0514 05:10:19.751742 2467 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 05:10:19.751864 kubelet[2467]: I0514 05:10:19.751763 2467 kubelet.go:312] "Adding apiserver pod source" May 14 05:10:19.751864 kubelet[2467]: I0514 05:10:19.751783 2467 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 05:10:19.752333 kubelet[2467]: W0514 05:10:19.752283 2467 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.84:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused May 14 05:10:19.752369 kubelet[2467]: E0514 05:10:19.752334 2467 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.84:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused May 14 05:10:19.752641 kubelet[2467]: W0514 05:10:19.752595 2467 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.84:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused May 14 05:10:19.752641 kubelet[2467]: E0514 05:10:19.752636 2467 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.84:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection 
refused May 14 05:10:19.754964 kubelet[2467]: I0514 05:10:19.754907 2467 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 14 05:10:19.756072 kubelet[2467]: I0514 05:10:19.756050 2467 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 05:10:19.756125 kubelet[2467]: W0514 05:10:19.756115 2467 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 14 05:10:19.756803 kubelet[2467]: I0514 05:10:19.756781 2467 server.go:1264] "Started kubelet" May 14 05:10:19.756938 kubelet[2467]: I0514 05:10:19.756878 2467 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 05:10:19.757371 kubelet[2467]: I0514 05:10:19.757266 2467 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 05:10:19.757371 kubelet[2467]: I0514 05:10:19.757308 2467 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 14 05:10:19.758078 kubelet[2467]: I0514 05:10:19.758049 2467 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 05:10:19.758226 kubelet[2467]: I0514 05:10:19.758202 2467 server.go:455] "Adding debug handlers to kubelet server" May 14 05:10:19.759146 kubelet[2467]: E0514 05:10:19.759074 2467 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 05:10:19.759146 kubelet[2467]: I0514 05:10:19.759109 2467 volume_manager.go:291] "Starting Kubelet Volume Manager" May 14 05:10:19.759220 kubelet[2467]: I0514 05:10:19.759179 2467 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 14 05:10:19.759251 kubelet[2467]: I0514 05:10:19.759222 2467 reconciler.go:26] "Reconciler: start to sync state" May 14 05:10:19.759893 kubelet[2467]: W0514 05:10:19.759734 2467 
reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.84:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused May 14 05:10:19.759893 kubelet[2467]: E0514 05:10:19.759780 2467 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.84:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused May 14 05:10:19.760107 kubelet[2467]: E0514 05:10:19.759951 2467 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.84:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.84:6443: connect: connection refused" interval="200ms" May 14 05:10:19.762087 kubelet[2467]: I0514 05:10:19.762067 2467 factory.go:221] Registration of the systemd container factory successfully May 14 05:10:19.762165 kubelet[2467]: I0514 05:10:19.762147 2467 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 05:10:19.762295 kubelet[2467]: E0514 05:10:19.761831 2467 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 05:10:19.763127 kubelet[2467]: E0514 05:10:19.762961 2467 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.84:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.84:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f4c9d7e5e98cb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-14 05:10:19.756746955 +0000 UTC m=+0.352623151,LastTimestamp:2025-05-14 05:10:19.756746955 +0000 UTC m=+0.352623151,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 14 05:10:19.763950 kubelet[2467]: I0514 05:10:19.763930 2467 factory.go:221] Registration of the containerd container factory successfully May 14 05:10:19.777954 kubelet[2467]: I0514 05:10:19.777738 2467 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 05:10:19.779652 kubelet[2467]: I0514 05:10:19.779241 2467 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 14 05:10:19.779652 kubelet[2467]: I0514 05:10:19.779287 2467 status_manager.go:217] "Starting to sync pod status with apiserver" May 14 05:10:19.779652 kubelet[2467]: I0514 05:10:19.779312 2467 kubelet.go:2337] "Starting kubelet main sync loop" May 14 05:10:19.779652 kubelet[2467]: E0514 05:10:19.779357 2467 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 05:10:19.779652 kubelet[2467]: I0514 05:10:19.779437 2467 cpu_manager.go:214] "Starting CPU manager" policy="none" May 14 05:10:19.779652 kubelet[2467]: I0514 05:10:19.779450 2467 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 14 05:10:19.779652 kubelet[2467]: I0514 05:10:19.779478 2467 state_mem.go:36] "Initialized new in-memory state store" May 14 05:10:19.781013 kubelet[2467]: W0514 05:10:19.780964 2467 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.84:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused May 14 05:10:19.781242 kubelet[2467]: E0514 05:10:19.781017 2467 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.84:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused May 14 05:10:19.860501 kubelet[2467]: I0514 05:10:19.860452 2467 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 14 05:10:19.860940 kubelet[2467]: E0514 05:10:19.860901 2467 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.84:6443/api/v1/nodes\": dial tcp 10.0.0.84:6443: connect: connection refused" node="localhost" May 14 05:10:19.880179 kubelet[2467]: E0514 05:10:19.880134 2467 kubelet.go:2361] "Skipping pod 
synchronization" err="container runtime status check may not have completed yet" May 14 05:10:19.961561 kubelet[2467]: E0514 05:10:19.961476 2467 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.84:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.84:6443: connect: connection refused" interval="400ms" May 14 05:10:20.063079 kubelet[2467]: I0514 05:10:20.062945 2467 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 14 05:10:20.063359 kubelet[2467]: E0514 05:10:20.063311 2467 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.84:6443/api/v1/nodes\": dial tcp 10.0.0.84:6443: connect: connection refused" node="localhost" May 14 05:10:20.080628 kubelet[2467]: E0514 05:10:20.080544 2467 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 14 05:10:20.107956 kubelet[2467]: I0514 05:10:20.107897 2467 policy_none.go:49] "None policy: Start" May 14 05:10:20.108938 kubelet[2467]: I0514 05:10:20.108899 2467 memory_manager.go:170] "Starting memorymanager" policy="None" May 14 05:10:20.109001 kubelet[2467]: I0514 05:10:20.108940 2467 state_mem.go:35] "Initializing new in-memory state store" May 14 05:10:20.116004 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 14 05:10:20.129966 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 14 05:10:20.132973 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
May 14 05:10:20.153834 kubelet[2467]: I0514 05:10:20.153765 2467 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 14 05:10:20.154062 kubelet[2467]: I0514 05:10:20.154022 2467 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 14 05:10:20.154184 kubelet[2467]: I0514 05:10:20.154160 2467 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 14 05:10:20.155487 kubelet[2467]: E0514 05:10:20.155424 2467 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
May 14 05:10:20.362090 kubelet[2467]: E0514 05:10:20.362031 2467 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.84:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.84:6443: connect: connection refused" interval="800ms"
May 14 05:10:20.464464 kubelet[2467]: I0514 05:10:20.464420 2467 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
May 14 05:10:20.464875 kubelet[2467]: E0514 05:10:20.464690 2467 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.84:6443/api/v1/nodes\": dial tcp 10.0.0.84:6443: connect: connection refused" node="localhost"
May 14 05:10:20.480901 kubelet[2467]: I0514 05:10:20.480854 2467 topology_manager.go:215] "Topology Admit Handler" podUID="4db57ba749f849dd4118c7f8007242ba" podNamespace="kube-system" podName="kube-apiserver-localhost"
May 14 05:10:20.481655 kubelet[2467]: I0514 05:10:20.481627 2467 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost"
May 14 05:10:20.482776 kubelet[2467]: I0514 05:10:20.482305 2467 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost"
May 14 05:10:20.488387 systemd[1]: Created slice kubepods-burstable-pod4db57ba749f849dd4118c7f8007242ba.slice - libcontainer container kubepods-burstable-pod4db57ba749f849dd4118c7f8007242ba.slice.
May 14 05:10:20.521652 systemd[1]: Created slice kubepods-burstable-podb20b39a8540dba87b5883a6f0f602dba.slice - libcontainer container kubepods-burstable-podb20b39a8540dba87b5883a6f0f602dba.slice.
May 14 05:10:20.525621 systemd[1]: Created slice kubepods-burstable-pod6ece95f10dbffa04b25ec3439a115512.slice - libcontainer container kubepods-burstable-pod6ece95f10dbffa04b25ec3439a115512.slice.
May 14 05:10:20.563864 kubelet[2467]: I0514 05:10:20.563817 2467 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 14 05:10:20.563864 kubelet[2467]: I0514 05:10:20.563852 2467 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4db57ba749f849dd4118c7f8007242ba-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4db57ba749f849dd4118c7f8007242ba\") " pod="kube-system/kube-apiserver-localhost"
May 14 05:10:20.563864 kubelet[2467]: I0514 05:10:20.563870 2467 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4db57ba749f849dd4118c7f8007242ba-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4db57ba749f849dd4118c7f8007242ba\") " pod="kube-system/kube-apiserver-localhost"
May 14 05:10:20.564076 kubelet[2467]: I0514 05:10:20.563885 2467 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 14 05:10:20.564076 kubelet[2467]: I0514 05:10:20.563929 2467 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 14 05:10:20.564076 kubelet[2467]: I0514 05:10:20.563985 2467 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4db57ba749f849dd4118c7f8007242ba-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4db57ba749f849dd4118c7f8007242ba\") " pod="kube-system/kube-apiserver-localhost"
May 14 05:10:20.564076 kubelet[2467]: I0514 05:10:20.564014 2467 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 14 05:10:20.564076 kubelet[2467]: I0514 05:10:20.564031 2467 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 14 05:10:20.564239 kubelet[2467]: I0514 05:10:20.564048 2467 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost"
May 14 05:10:20.779589 kubelet[2467]: E0514 05:10:20.779334 2467 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.84:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.84:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f4c9d7e5e98cb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-14 05:10:19.756746955 +0000 UTC m=+0.352623151,LastTimestamp:2025-05-14 05:10:19.756746955 +0000 UTC m=+0.352623151,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 14 05:10:20.802659 kubelet[2467]: W0514 05:10:20.802586 2467 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.84:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused
May 14 05:10:20.802659 kubelet[2467]: E0514 05:10:20.802642 2467 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.84:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused
May 14 05:10:20.818447 containerd[1598]: time="2025-05-14T05:10:20.818397237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4db57ba749f849dd4118c7f8007242ba,Namespace:kube-system,Attempt:0,}"
May 14 05:10:20.825084 containerd[1598]: time="2025-05-14T05:10:20.825035449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}"
May 14 05:10:20.828776 containerd[1598]: time="2025-05-14T05:10:20.828695785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}"
May 14 05:10:20.937784 kubelet[2467]: W0514 05:10:20.937689 2467 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.84:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused
May 14 05:10:20.937979 kubelet[2467]: E0514 05:10:20.937801 2467 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.84:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused
May 14 05:10:21.053643 kubelet[2467]: W0514 05:10:21.053521 2467 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.84:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused
May 14 05:10:21.053643 kubelet[2467]: E0514 05:10:21.053578 2467 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.84:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused
May 14 05:10:21.113168 kubelet[2467]: W0514 05:10:21.113136 2467 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.84:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused
May 14 05:10:21.113168 kubelet[2467]: E0514 05:10:21.113176 2467 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.84:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused
May 14 05:10:21.163015 kubelet[2467]: E0514 05:10:21.162979 2467 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.84:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.84:6443: connect: connection refused" interval="1.6s"
May 14 05:10:21.266755 kubelet[2467]: I0514 05:10:21.266737 2467 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
May 14 05:10:21.267040 kubelet[2467]: E0514 05:10:21.267002 2467 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.84:6443/api/v1/nodes\": dial tcp 10.0.0.84:6443: connect: connection refused" node="localhost"
May 14 05:10:21.766309 kubelet[2467]: E0514 05:10:21.766272 2467 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.84:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.84:6443: connect: connection refused
May 14 05:10:21.773113 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3290184844.mount: Deactivated successfully.
May 14 05:10:21.779021 containerd[1598]: time="2025-05-14T05:10:21.778984525Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 14 05:10:21.781578 containerd[1598]: time="2025-05-14T05:10:21.781549626Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
May 14 05:10:21.783747 containerd[1598]: time="2025-05-14T05:10:21.783698146Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 14 05:10:21.785724 containerd[1598]: time="2025-05-14T05:10:21.785679543Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 14 05:10:21.786357 containerd[1598]: time="2025-05-14T05:10:21.786335232Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
May 14 05:10:21.787351 containerd[1598]: time="2025-05-14T05:10:21.787298810Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 14 05:10:21.788109 containerd[1598]: time="2025-05-14T05:10:21.788072611Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
May 14 05:10:21.789036 containerd[1598]: time="2025-05-14T05:10:21.788998017Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 14 05:10:21.789815 containerd[1598]: time="2025-05-14T05:10:21.789785815Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 962.191135ms"
May 14 05:10:21.790733 containerd[1598]: time="2025-05-14T05:10:21.790675754Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 969.181006ms"
May 14 05:10:21.792748 containerd[1598]: time="2025-05-14T05:10:21.792719918Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 958.282013ms"
May 14 05:10:21.818169 containerd[1598]: time="2025-05-14T05:10:21.818124781Z" level=info msg="connecting to shim 925d4b8ae7d785d7e02c6bffd5e5cb95b852fb0db0c0f6b336e9e57451868552" address="unix:///run/containerd/s/1e45738540447f210944f044d08c5e187d56c58e18b5c59bb3a9ede9090bfd17" namespace=k8s.io protocol=ttrpc version=3
May 14 05:10:21.819235 containerd[1598]: time="2025-05-14T05:10:21.819177144Z" level=info msg="connecting to shim 9d921c31dd565bb03c107e2ef55d6c860a3171a014576e19f7d6c4a590fec08b" address="unix:///run/containerd/s/db86a4b95d9f1c4a11556b6afb8c8cffa9bbf2bb1528e2181dce16905bdbfccd" namespace=k8s.io protocol=ttrpc version=3
May 14 05:10:21.820260 containerd[1598]: time="2025-05-14T05:10:21.820211455Z" level=info msg="connecting to shim b17e969153c1b7c290c56250b2ed4e06b82302c9e6c6352fcc10c197bb66f105" address="unix:///run/containerd/s/827b6a6f47e693085d7304c88d5951bed63d9e1d7d0d78c791ec9d1097fcf1c0" namespace=k8s.io protocol=ttrpc version=3
May 14 05:10:21.854842 systemd[1]: Started cri-containerd-925d4b8ae7d785d7e02c6bffd5e5cb95b852fb0db0c0f6b336e9e57451868552.scope - libcontainer container 925d4b8ae7d785d7e02c6bffd5e5cb95b852fb0db0c0f6b336e9e57451868552.
May 14 05:10:21.856280 systemd[1]: Started cri-containerd-9d921c31dd565bb03c107e2ef55d6c860a3171a014576e19f7d6c4a590fec08b.scope - libcontainer container 9d921c31dd565bb03c107e2ef55d6c860a3171a014576e19f7d6c4a590fec08b.
May 14 05:10:21.858079 systemd[1]: Started cri-containerd-b17e969153c1b7c290c56250b2ed4e06b82302c9e6c6352fcc10c197bb66f105.scope - libcontainer container b17e969153c1b7c290c56250b2ed4e06b82302c9e6c6352fcc10c197bb66f105.
May 14 05:10:21.905595 containerd[1598]: time="2025-05-14T05:10:21.903922643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"9d921c31dd565bb03c107e2ef55d6c860a3171a014576e19f7d6c4a590fec08b\""
May 14 05:10:21.909464 containerd[1598]: time="2025-05-14T05:10:21.909436796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4db57ba749f849dd4118c7f8007242ba,Namespace:kube-system,Attempt:0,} returns sandbox id \"925d4b8ae7d785d7e02c6bffd5e5cb95b852fb0db0c0f6b336e9e57451868552\""
May 14 05:10:21.910824 containerd[1598]: time="2025-05-14T05:10:21.910804090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"b17e969153c1b7c290c56250b2ed4e06b82302c9e6c6352fcc10c197bb66f105\""
May 14 05:10:21.910948 containerd[1598]: time="2025-05-14T05:10:21.910909468Z" level=info msg="CreateContainer within sandbox \"9d921c31dd565bb03c107e2ef55d6c860a3171a014576e19f7d6c4a590fec08b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 14 05:10:21.911903 containerd[1598]: time="2025-05-14T05:10:21.911864509Z" level=info msg="CreateContainer within sandbox \"925d4b8ae7d785d7e02c6bffd5e5cb95b852fb0db0c0f6b336e9e57451868552\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 14 05:10:21.914050 containerd[1598]: time="2025-05-14T05:10:21.914027056Z" level=info msg="CreateContainer within sandbox \"b17e969153c1b7c290c56250b2ed4e06b82302c9e6c6352fcc10c197bb66f105\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 14 05:10:21.926922 containerd[1598]: time="2025-05-14T05:10:21.926884162Z" level=info msg="Container 3952486d060f159498777a31d2f2d84ec56c4a573720d36b69b491af145f5730: CDI devices from CRI Config.CDIDevices: []"
May 14 05:10:21.929251 containerd[1598]: time="2025-05-14T05:10:21.929212239Z" level=info msg="Container 36beead8e736db69bfb45ff391485e0bc719053fc61257b168a1bb1ed4ebc572: CDI devices from CRI Config.CDIDevices: []"
May 14 05:10:21.931896 containerd[1598]: time="2025-05-14T05:10:21.931863342Z" level=info msg="Container b13775bd940783cdf609bc26dfec6c95a68e99264da8284d3beced2d9115264e: CDI devices from CRI Config.CDIDevices: []"
May 14 05:10:21.938973 containerd[1598]: time="2025-05-14T05:10:21.938873200Z" level=info msg="CreateContainer within sandbox \"925d4b8ae7d785d7e02c6bffd5e5cb95b852fb0db0c0f6b336e9e57451868552\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"36beead8e736db69bfb45ff391485e0bc719053fc61257b168a1bb1ed4ebc572\""
May 14 05:10:21.939473 containerd[1598]: time="2025-05-14T05:10:21.939448559Z" level=info msg="StartContainer for \"36beead8e736db69bfb45ff391485e0bc719053fc61257b168a1bb1ed4ebc572\""
May 14 05:10:21.940835 containerd[1598]: time="2025-05-14T05:10:21.940805985Z" level=info msg="connecting to shim 36beead8e736db69bfb45ff391485e0bc719053fc61257b168a1bb1ed4ebc572" address="unix:///run/containerd/s/1e45738540447f210944f044d08c5e187d56c58e18b5c59bb3a9ede9090bfd17" protocol=ttrpc version=3
May 14 05:10:21.943690 containerd[1598]: time="2025-05-14T05:10:21.943652715Z" level=info msg="CreateContainer within sandbox \"9d921c31dd565bb03c107e2ef55d6c860a3171a014576e19f7d6c4a590fec08b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3952486d060f159498777a31d2f2d84ec56c4a573720d36b69b491af145f5730\""
May 14 05:10:21.944107 containerd[1598]: time="2025-05-14T05:10:21.944067643Z" level=info msg="StartContainer for \"3952486d060f159498777a31d2f2d84ec56c4a573720d36b69b491af145f5730\""
May 14 05:10:21.946718 containerd[1598]: time="2025-05-14T05:10:21.945228270Z" level=info msg="connecting to shim 3952486d060f159498777a31d2f2d84ec56c4a573720d36b69b491af145f5730" address="unix:///run/containerd/s/db86a4b95d9f1c4a11556b6afb8c8cffa9bbf2bb1528e2181dce16905bdbfccd" protocol=ttrpc version=3
May 14 05:10:21.946842 containerd[1598]: time="2025-05-14T05:10:21.946819445Z" level=info msg="CreateContainer within sandbox \"b17e969153c1b7c290c56250b2ed4e06b82302c9e6c6352fcc10c197bb66f105\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b13775bd940783cdf609bc26dfec6c95a68e99264da8284d3beced2d9115264e\""
May 14 05:10:21.947431 containerd[1598]: time="2025-05-14T05:10:21.947414832Z" level=info msg="StartContainer for \"b13775bd940783cdf609bc26dfec6c95a68e99264da8284d3beced2d9115264e\""
May 14 05:10:21.948468 containerd[1598]: time="2025-05-14T05:10:21.948442970Z" level=info msg="connecting to shim b13775bd940783cdf609bc26dfec6c95a68e99264da8284d3beced2d9115264e" address="unix:///run/containerd/s/827b6a6f47e693085d7304c88d5951bed63d9e1d7d0d78c791ec9d1097fcf1c0" protocol=ttrpc version=3
May 14 05:10:21.960918 systemd[1]: Started cri-containerd-36beead8e736db69bfb45ff391485e0bc719053fc61257b168a1bb1ed4ebc572.scope - libcontainer container 36beead8e736db69bfb45ff391485e0bc719053fc61257b168a1bb1ed4ebc572.
May 14 05:10:21.965456 systemd[1]: Started cri-containerd-b13775bd940783cdf609bc26dfec6c95a68e99264da8284d3beced2d9115264e.scope - libcontainer container b13775bd940783cdf609bc26dfec6c95a68e99264da8284d3beced2d9115264e.
May 14 05:10:21.968580 systemd[1]: Started cri-containerd-3952486d060f159498777a31d2f2d84ec56c4a573720d36b69b491af145f5730.scope - libcontainer container 3952486d060f159498777a31d2f2d84ec56c4a573720d36b69b491af145f5730.
May 14 05:10:22.012394 containerd[1598]: time="2025-05-14T05:10:22.012342728Z" level=info msg="StartContainer for \"36beead8e736db69bfb45ff391485e0bc719053fc61257b168a1bb1ed4ebc572\" returns successfully"
May 14 05:10:22.014095 containerd[1598]: time="2025-05-14T05:10:22.013987222Z" level=info msg="StartContainer for \"b13775bd940783cdf609bc26dfec6c95a68e99264da8284d3beced2d9115264e\" returns successfully"
May 14 05:10:22.026655 containerd[1598]: time="2025-05-14T05:10:22.026517315Z" level=info msg="StartContainer for \"3952486d060f159498777a31d2f2d84ec56c4a573720d36b69b491af145f5730\" returns successfully"
May 14 05:10:22.868893 kubelet[2467]: I0514 05:10:22.868849 2467 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
May 14 05:10:23.019998 kubelet[2467]: E0514 05:10:23.019947 2467 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
May 14 05:10:23.115140 kubelet[2467]: I0514 05:10:23.115091 2467 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
May 14 05:10:23.126347 kubelet[2467]: E0514 05:10:23.126245 2467 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
May 14 05:10:23.227045 kubelet[2467]: E0514 05:10:23.226994 2467 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
May 14 05:10:23.327897 kubelet[2467]: E0514 05:10:23.327862 2467 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
May 14 05:10:23.754148 kubelet[2467]: I0514 05:10:23.754108 2467 apiserver.go:52] "Watching apiserver"
May 14 05:10:23.759612 kubelet[2467]: I0514 05:10:23.759569 2467 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
May 14 05:10:23.804579 kubelet[2467]: E0514 05:10:23.804538 2467 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
May 14 05:10:24.829179 systemd[1]: Reload requested from client PID 2743 ('systemctl') (unit session-7.scope)...
May 14 05:10:24.829196 systemd[1]: Reloading...
May 14 05:10:24.897799 zram_generator::config[2786]: No configuration found.
May 14 05:10:24.992409 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 05:10:25.123361 systemd[1]: Reloading finished in 293 ms.
May 14 05:10:25.155052 kubelet[2467]: I0514 05:10:25.154991 2467 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 14 05:10:25.155073 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 05:10:25.167231 systemd[1]: kubelet.service: Deactivated successfully.
May 14 05:10:25.167527 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 05:10:25.167574 systemd[1]: kubelet.service: Consumed 748ms CPU time, 114.3M memory peak.
May 14 05:10:25.170366 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 05:10:25.368819 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 05:10:25.378119 (kubelet)[2831]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 14 05:10:25.422032 kubelet[2831]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 14 05:10:25.422032 kubelet[2831]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 14 05:10:25.422032 kubelet[2831]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 14 05:10:25.422423 kubelet[2831]: I0514 05:10:25.422104 2831 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 14 05:10:25.426587 kubelet[2831]: I0514 05:10:25.426546 2831 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
May 14 05:10:25.426587 kubelet[2831]: I0514 05:10:25.426571 2831 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 14 05:10:25.426810 kubelet[2831]: I0514 05:10:25.426785 2831 server.go:927] "Client rotation is on, will bootstrap in background"
May 14 05:10:25.428040 kubelet[2831]: I0514 05:10:25.428014 2831 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 14 05:10:25.429105 kubelet[2831]: I0514 05:10:25.429082 2831 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 14 05:10:25.435534 kubelet[2831]: I0514 05:10:25.435505 2831 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 14 05:10:25.435751 kubelet[2831]: I0514 05:10:25.435717 2831 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 14 05:10:25.435892 kubelet[2831]: I0514 05:10:25.435746 2831 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
May 14 05:10:25.435969 kubelet[2831]: I0514 05:10:25.435902 2831 topology_manager.go:138] "Creating topology manager with none policy"
May 14 05:10:25.435969 kubelet[2831]: I0514 05:10:25.435911 2831 container_manager_linux.go:301] "Creating device plugin manager"
May 14 05:10:25.435969 kubelet[2831]: I0514 05:10:25.435950 2831 state_mem.go:36] "Initialized new in-memory state store"
May 14 05:10:25.436035 kubelet[2831]: I0514 05:10:25.436029 2831 kubelet.go:400] "Attempting to sync node with API server"
May 14 05:10:25.436060 kubelet[2831]: I0514 05:10:25.436039 2831 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
May 14 05:10:25.436060 kubelet[2831]: I0514 05:10:25.436057 2831 kubelet.go:312] "Adding apiserver pod source"
May 14 05:10:25.436101 kubelet[2831]: I0514 05:10:25.436070 2831 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 14 05:10:25.437768 kubelet[2831]: I0514 05:10:25.437267 2831 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
May 14 05:10:25.437768 kubelet[2831]: I0514 05:10:25.437473 2831 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 14 05:10:25.438140 kubelet[2831]: I0514 05:10:25.438129 2831 server.go:1264] "Started kubelet"
May 14 05:10:25.438678 kubelet[2831]: I0514 05:10:25.438646 2831 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 14 05:10:25.438851 kubelet[2831]: I0514 05:10:25.438812 2831 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 14 05:10:25.439129 kubelet[2831]: I0514 05:10:25.439116 2831 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 14 05:10:25.439618 kubelet[2831]: I0514 05:10:25.439594 2831 server.go:455] "Adding debug handlers to kubelet server"
May 14 05:10:25.448049 kubelet[2831]: E0514 05:10:25.448008 2831 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 14 05:10:25.448470 kubelet[2831]: I0514 05:10:25.445864 2831 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 14 05:10:25.448756 kubelet[2831]: I0514 05:10:25.448741 2831 volume_manager.go:291] "Starting Kubelet Volume Manager"
May 14 05:10:25.449596 kubelet[2831]: I0514 05:10:25.449585 2831 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 14 05:10:25.450009 kubelet[2831]: I0514 05:10:25.449995 2831 factory.go:221] Registration of the systemd container factory successfully
May 14 05:10:25.450144 kubelet[2831]: I0514 05:10:25.450129 2831 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 14 05:10:25.450595 kubelet[2831]: I0514 05:10:25.450555 2831 reconciler.go:26] "Reconciler: start to sync state"
May 14 05:10:25.451765 kubelet[2831]: I0514 05:10:25.451753 2831 factory.go:221] Registration of the containerd container factory successfully
May 14 05:10:25.459161 kubelet[2831]: I0514 05:10:25.459079 2831 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 14 05:10:25.460267 kubelet[2831]: I0514 05:10:25.460240 2831 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 14 05:10:25.460331 kubelet[2831]: I0514 05:10:25.460277 2831 status_manager.go:217] "Starting to sync pod status with apiserver"
May 14 05:10:25.460331 kubelet[2831]: I0514 05:10:25.460293 2831 kubelet.go:2337] "Starting kubelet main sync loop"
May 14 05:10:25.460387 kubelet[2831]: E0514 05:10:25.460329 2831 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 14 05:10:25.487722 kubelet[2831]: I0514 05:10:25.487677 2831 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 14 05:10:25.487722 kubelet[2831]: I0514 05:10:25.487695 2831 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 14 05:10:25.487879 kubelet[2831]: I0514 05:10:25.487739 2831 state_mem.go:36] "Initialized new in-memory state store"
May 14 05:10:25.487901 kubelet[2831]: I0514 05:10:25.487887 2831 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 14 05:10:25.487927 kubelet[2831]: I0514 05:10:25.487896 2831 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 14 05:10:25.487927 kubelet[2831]: I0514 05:10:25.487914 2831 policy_none.go:49] "None policy: Start"
May 14 05:10:25.488606 kubelet[2831]: I0514 05:10:25.488588 2831 memory_manager.go:170] "Starting memorymanager" policy="None"
May 14 05:10:25.488654 kubelet[2831]: I0514 05:10:25.488614 2831 state_mem.go:35] "Initializing new in-memory state store"
May 14 05:10:25.488816 kubelet[2831]: I0514 05:10:25.488799 2831 state_mem.go:75] "Updated machine memory state"
May 14 05:10:25.493240 kubelet[2831]: I0514 05:10:25.493159 2831 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 14 05:10:25.493393 kubelet[2831]: I0514 05:10:25.493341 2831 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 14 05:10:25.493488 kubelet[2831]: I0514 05:10:25.493469 2831 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 14 05:10:25.550273 kubelet[2831]: I0514 05:10:25.550244 2831 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
May 14 05:10:25.555627 kubelet[2831]: I0514 05:10:25.555589 2831 kubelet_node_status.go:112] "Node was previously registered" node="localhost"
May 14 05:10:25.555778 kubelet[2831]: I0514 05:10:25.555677 2831 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
May 14 05:10:25.561099 kubelet[2831]: I0514 05:10:25.561056 2831 topology_manager.go:215] "Topology Admit Handler" podUID="4db57ba749f849dd4118c7f8007242ba" podNamespace="kube-system" podName="kube-apiserver-localhost"
May 14 05:10:25.561151 kubelet[2831]: I0514 05:10:25.561141 2831 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost"
May 14 05:10:25.561200 kubelet[2831]: I0514 05:10:25.561188 2831 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost"
May 14 05:10:25.652158 kubelet[2831]: I0514 05:10:25.652034 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost"
May 14 05:10:25.652158 kubelet[2831]: I0514 05:10:25.652066 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4db57ba749f849dd4118c7f8007242ba-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4db57ba749f849dd4118c7f8007242ba\") " pod="kube-system/kube-apiserver-localhost"
May 14 05:10:25.652158 kubelet[2831]: I0514 05:10:25.652083 2831 reconciler_common.go:247]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 05:10:25.652158 kubelet[2831]: I0514 05:10:25.652098 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 05:10:25.652158 kubelet[2831]: I0514 05:10:25.652115 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 05:10:25.652445 kubelet[2831]: I0514 05:10:25.652130 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 05:10:25.652445 kubelet[2831]: I0514 05:10:25.652144 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4db57ba749f849dd4118c7f8007242ba-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4db57ba749f849dd4118c7f8007242ba\") " pod="kube-system/kube-apiserver-localhost" May 14 05:10:25.652445 kubelet[2831]: I0514 05:10:25.652159 2831 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4db57ba749f849dd4118c7f8007242ba-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4db57ba749f849dd4118c7f8007242ba\") " pod="kube-system/kube-apiserver-localhost" May 14 05:10:25.652445 kubelet[2831]: I0514 05:10:25.652175 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 05:10:26.437083 kubelet[2831]: I0514 05:10:26.437024 2831 apiserver.go:52] "Watching apiserver" May 14 05:10:26.450429 kubelet[2831]: I0514 05:10:26.450383 2831 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 14 05:10:26.482597 kubelet[2831]: E0514 05:10:26.481864 2831 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 14 05:10:26.483595 kubelet[2831]: E0514 05:10:26.483423 2831 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 14 05:10:26.498729 kubelet[2831]: I0514 05:10:26.495868 2831 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.495850677 podStartE2EDuration="1.495850677s" podCreationTimestamp="2025-05-14 05:10:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 05:10:26.495245021 +0000 UTC m=+1.113167214" watchObservedRunningTime="2025-05-14 05:10:26.495850677 +0000 UTC 
m=+1.113772870" May 14 05:10:26.519804 kubelet[2831]: I0514 05:10:26.519731 2831 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.519699801 podStartE2EDuration="1.519699801s" podCreationTimestamp="2025-05-14 05:10:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 05:10:26.505275355 +0000 UTC m=+1.123197548" watchObservedRunningTime="2025-05-14 05:10:26.519699801 +0000 UTC m=+1.137622004" May 14 05:10:26.536879 kubelet[2831]: I0514 05:10:26.536804 2831 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.536763147 podStartE2EDuration="1.536763147s" podCreationTimestamp="2025-05-14 05:10:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 05:10:26.520325385 +0000 UTC m=+1.138247578" watchObservedRunningTime="2025-05-14 05:10:26.536763147 +0000 UTC m=+1.154685340" May 14 05:10:29.904745 sudo[1805]: pam_unix(sudo:session): session closed for user root May 14 05:10:29.905949 sshd[1804]: Connection closed by 10.0.0.1 port 53818 May 14 05:10:29.906381 sshd-session[1802]: pam_unix(sshd:session): session closed for user core May 14 05:10:29.911419 systemd[1]: sshd@6-10.0.0.84:22-10.0.0.1:53818.service: Deactivated successfully. May 14 05:10:29.913638 systemd[1]: session-7.scope: Deactivated successfully. May 14 05:10:29.913859 systemd[1]: session-7.scope: Consumed 4.549s CPU time, 235.4M memory peak. May 14 05:10:29.915001 systemd-logind[1573]: Session 7 logged out. Waiting for processes to exit. May 14 05:10:29.916220 systemd-logind[1573]: Removed session 7. 
May 14 05:10:40.563242 kubelet[2831]: I0514 05:10:40.563191 2831 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 14 05:10:40.563744 kubelet[2831]: I0514 05:10:40.563734 2831 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 14 05:10:40.563781 containerd[1598]: time="2025-05-14T05:10:40.563575062Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 14 05:10:40.951151 kubelet[2831]: I0514 05:10:40.951100 2831 topology_manager.go:215] "Topology Admit Handler" podUID="f37b9066-afb7-45e6-856b-14491dcdac7d" podNamespace="kube-system" podName="kube-proxy-dss57" May 14 05:10:40.956891 systemd[1]: Created slice kubepods-besteffort-podf37b9066_afb7_45e6_856b_14491dcdac7d.slice - libcontainer container kubepods-besteffort-podf37b9066_afb7_45e6_856b_14491dcdac7d.slice. May 14 05:10:41.048824 kubelet[2831]: I0514 05:10:41.048776 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6b86p\" (UniqueName: \"kubernetes.io/projected/f37b9066-afb7-45e6-856b-14491dcdac7d-kube-api-access-6b86p\") pod \"kube-proxy-dss57\" (UID: \"f37b9066-afb7-45e6-856b-14491dcdac7d\") " pod="kube-system/kube-proxy-dss57" May 14 05:10:41.048824 kubelet[2831]: I0514 05:10:41.048821 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f37b9066-afb7-45e6-856b-14491dcdac7d-xtables-lock\") pod \"kube-proxy-dss57\" (UID: \"f37b9066-afb7-45e6-856b-14491dcdac7d\") " pod="kube-system/kube-proxy-dss57" May 14 05:10:41.048976 kubelet[2831]: I0514 05:10:41.048839 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f37b9066-afb7-45e6-856b-14491dcdac7d-kube-proxy\") pod \"kube-proxy-dss57\" 
(UID: \"f37b9066-afb7-45e6-856b-14491dcdac7d\") " pod="kube-system/kube-proxy-dss57" May 14 05:10:41.048976 kubelet[2831]: I0514 05:10:41.048885 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f37b9066-afb7-45e6-856b-14491dcdac7d-lib-modules\") pod \"kube-proxy-dss57\" (UID: \"f37b9066-afb7-45e6-856b-14491dcdac7d\") " pod="kube-system/kube-proxy-dss57" May 14 05:10:41.268999 containerd[1598]: time="2025-05-14T05:10:41.268875937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dss57,Uid:f37b9066-afb7-45e6-856b-14491dcdac7d,Namespace:kube-system,Attempt:0,}" May 14 05:10:41.306749 containerd[1598]: time="2025-05-14T05:10:41.306691529Z" level=info msg="connecting to shim 0c5a59d2e41fcf1c1401f78bbb62de5d91775cc80175c77831be96fb1aceeb81" address="unix:///run/containerd/s/d36124d77277ea678d8baab73690949aad401fb9bdc411bbc232f5cb9ee4f952" namespace=k8s.io protocol=ttrpc version=3 May 14 05:10:41.361845 systemd[1]: Started cri-containerd-0c5a59d2e41fcf1c1401f78bbb62de5d91775cc80175c77831be96fb1aceeb81.scope - libcontainer container 0c5a59d2e41fcf1c1401f78bbb62de5d91775cc80175c77831be96fb1aceeb81. 
May 14 05:10:41.385827 containerd[1598]: time="2025-05-14T05:10:41.385788757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dss57,Uid:f37b9066-afb7-45e6-856b-14491dcdac7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"0c5a59d2e41fcf1c1401f78bbb62de5d91775cc80175c77831be96fb1aceeb81\"" May 14 05:10:41.388078 containerd[1598]: time="2025-05-14T05:10:41.388055573Z" level=info msg="CreateContainer within sandbox \"0c5a59d2e41fcf1c1401f78bbb62de5d91775cc80175c77831be96fb1aceeb81\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 14 05:10:41.399825 containerd[1598]: time="2025-05-14T05:10:41.399755090Z" level=info msg="Container ef37186888cd8fbdf4251d35d6fa4770296939a683993f0911b9ebfa88d08082: CDI devices from CRI Config.CDIDevices: []" May 14 05:10:41.403096 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1376300161.mount: Deactivated successfully. May 14 05:10:41.407658 containerd[1598]: time="2025-05-14T05:10:41.407617457Z" level=info msg="CreateContainer within sandbox \"0c5a59d2e41fcf1c1401f78bbb62de5d91775cc80175c77831be96fb1aceeb81\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ef37186888cd8fbdf4251d35d6fa4770296939a683993f0911b9ebfa88d08082\"" May 14 05:10:41.408180 containerd[1598]: time="2025-05-14T05:10:41.408152012Z" level=info msg="StartContainer for \"ef37186888cd8fbdf4251d35d6fa4770296939a683993f0911b9ebfa88d08082\"" May 14 05:10:41.409799 containerd[1598]: time="2025-05-14T05:10:41.409732595Z" level=info msg="connecting to shim ef37186888cd8fbdf4251d35d6fa4770296939a683993f0911b9ebfa88d08082" address="unix:///run/containerd/s/d36124d77277ea678d8baab73690949aad401fb9bdc411bbc232f5cb9ee4f952" protocol=ttrpc version=3 May 14 05:10:41.432848 systemd[1]: Started cri-containerd-ef37186888cd8fbdf4251d35d6fa4770296939a683993f0911b9ebfa88d08082.scope - libcontainer container ef37186888cd8fbdf4251d35d6fa4770296939a683993f0911b9ebfa88d08082. 
May 14 05:10:41.476200 containerd[1598]: time="2025-05-14T05:10:41.476162068Z" level=info msg="StartContainer for \"ef37186888cd8fbdf4251d35d6fa4770296939a683993f0911b9ebfa88d08082\" returns successfully" May 14 05:10:41.518545 kubelet[2831]: I0514 05:10:41.518397 2831 topology_manager.go:215] "Topology Admit Handler" podUID="4c815800-80aa-4fbc-b27f-49acfa48fc7e" podNamespace="tigera-operator" podName="tigera-operator-797db67f8-trtd2" May 14 05:10:41.529615 systemd[1]: Created slice kubepods-besteffort-pod4c815800_80aa_4fbc_b27f_49acfa48fc7e.slice - libcontainer container kubepods-besteffort-pod4c815800_80aa_4fbc_b27f_49acfa48fc7e.slice. May 14 05:10:41.533726 kubelet[2831]: I0514 05:10:41.533666 2831 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dss57" podStartSLOduration=1.533287684 podStartE2EDuration="1.533287684s" podCreationTimestamp="2025-05-14 05:10:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 05:10:41.525594879 +0000 UTC m=+16.143517062" watchObservedRunningTime="2025-05-14 05:10:41.533287684 +0000 UTC m=+16.151209867" May 14 05:10:41.553038 kubelet[2831]: I0514 05:10:41.552999 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b27fn\" (UniqueName: \"kubernetes.io/projected/4c815800-80aa-4fbc-b27f-49acfa48fc7e-kube-api-access-b27fn\") pod \"tigera-operator-797db67f8-trtd2\" (UID: \"4c815800-80aa-4fbc-b27f-49acfa48fc7e\") " pod="tigera-operator/tigera-operator-797db67f8-trtd2" May 14 05:10:41.553271 kubelet[2831]: I0514 05:10:41.553248 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4c815800-80aa-4fbc-b27f-49acfa48fc7e-var-lib-calico\") pod \"tigera-operator-797db67f8-trtd2\" (UID: \"4c815800-80aa-4fbc-b27f-49acfa48fc7e\") " 
pod="tigera-operator/tigera-operator-797db67f8-trtd2" May 14 05:10:41.833116 containerd[1598]: time="2025-05-14T05:10:41.832998131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-trtd2,Uid:4c815800-80aa-4fbc-b27f-49acfa48fc7e,Namespace:tigera-operator,Attempt:0,}" May 14 05:10:41.852263 containerd[1598]: time="2025-05-14T05:10:41.852221151Z" level=info msg="connecting to shim c81c7ca06ec5e3e38777f71b71e574a6d900258960cbc48faaafe21f20c38af5" address="unix:///run/containerd/s/2c9f7c50e18412aeae7f9e84204b8c5402a4a2aaf3dbeffa7d4bf58586780786" namespace=k8s.io protocol=ttrpc version=3 May 14 05:10:41.886843 systemd[1]: Started cri-containerd-c81c7ca06ec5e3e38777f71b71e574a6d900258960cbc48faaafe21f20c38af5.scope - libcontainer container c81c7ca06ec5e3e38777f71b71e574a6d900258960cbc48faaafe21f20c38af5. May 14 05:10:41.925885 containerd[1598]: time="2025-05-14T05:10:41.925840432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-trtd2,Uid:4c815800-80aa-4fbc-b27f-49acfa48fc7e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"c81c7ca06ec5e3e38777f71b71e574a6d900258960cbc48faaafe21f20c38af5\"" May 14 05:10:41.927412 containerd[1598]: time="2025-05-14T05:10:41.927206598Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 14 05:10:42.604550 update_engine[1577]: I20250514 05:10:42.604439 1577 update_attempter.cc:509] Updating boot flags... May 14 05:10:44.071520 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount848577508.mount: Deactivated successfully. 
May 14 05:10:44.406322 containerd[1598]: time="2025-05-14T05:10:44.406271614Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:10:44.407300 containerd[1598]: time="2025-05-14T05:10:44.407277962Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662" May 14 05:10:44.408828 containerd[1598]: time="2025-05-14T05:10:44.408762076Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:10:44.410962 containerd[1598]: time="2025-05-14T05:10:44.410903024Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:10:44.411471 containerd[1598]: time="2025-05-14T05:10:44.411424032Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 2.484186415s" May 14 05:10:44.411471 containerd[1598]: time="2025-05-14T05:10:44.411466291Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\"" May 14 05:10:44.413352 containerd[1598]: time="2025-05-14T05:10:44.413317410Z" level=info msg="CreateContainer within sandbox \"c81c7ca06ec5e3e38777f71b71e574a6d900258960cbc48faaafe21f20c38af5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 14 05:10:44.421552 containerd[1598]: time="2025-05-14T05:10:44.421515610Z" level=info msg="Container 
ce02903f9131cddd842c4e58aa2fbd3c474f0ca15568652f2ad833b824340760: CDI devices from CRI Config.CDIDevices: []" May 14 05:10:44.427441 containerd[1598]: time="2025-05-14T05:10:44.427390096Z" level=info msg="CreateContainer within sandbox \"c81c7ca06ec5e3e38777f71b71e574a6d900258960cbc48faaafe21f20c38af5\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ce02903f9131cddd842c4e58aa2fbd3c474f0ca15568652f2ad833b824340760\"" May 14 05:10:44.427946 containerd[1598]: time="2025-05-14T05:10:44.427879814Z" level=info msg="StartContainer for \"ce02903f9131cddd842c4e58aa2fbd3c474f0ca15568652f2ad833b824340760\"" May 14 05:10:44.428902 containerd[1598]: time="2025-05-14T05:10:44.428865372Z" level=info msg="connecting to shim ce02903f9131cddd842c4e58aa2fbd3c474f0ca15568652f2ad833b824340760" address="unix:///run/containerd/s/2c9f7c50e18412aeae7f9e84204b8c5402a4a2aaf3dbeffa7d4bf58586780786" protocol=ttrpc version=3 May 14 05:10:44.452968 systemd[1]: Started cri-containerd-ce02903f9131cddd842c4e58aa2fbd3c474f0ca15568652f2ad833b824340760.scope - libcontainer container ce02903f9131cddd842c4e58aa2fbd3c474f0ca15568652f2ad833b824340760. 
May 14 05:10:44.486942 containerd[1598]: time="2025-05-14T05:10:44.486907695Z" level=info msg="StartContainer for \"ce02903f9131cddd842c4e58aa2fbd3c474f0ca15568652f2ad833b824340760\" returns successfully" May 14 05:10:44.523391 kubelet[2831]: I0514 05:10:44.522772 2831 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-797db67f8-trtd2" podStartSLOduration=1.03739338 podStartE2EDuration="3.522755265s" podCreationTimestamp="2025-05-14 05:10:41 +0000 UTC" firstStartedPulling="2025-05-14 05:10:41.926835232 +0000 UTC m=+16.544757425" lastFinishedPulling="2025-05-14 05:10:44.412197117 +0000 UTC m=+19.030119310" observedRunningTime="2025-05-14 05:10:44.522369745 +0000 UTC m=+19.140291938" watchObservedRunningTime="2025-05-14 05:10:44.522755265 +0000 UTC m=+19.140677448" May 14 05:10:47.512223 kubelet[2831]: I0514 05:10:47.512153 2831 topology_manager.go:215] "Topology Admit Handler" podUID="3ec6c052-4c37-45d2-94e6-d3178f9b2f81" podNamespace="calico-system" podName="calico-typha-7bc5459cbd-krcvp" May 14 05:10:47.527475 systemd[1]: Created slice kubepods-besteffort-pod3ec6c052_4c37_45d2_94e6_d3178f9b2f81.slice - libcontainer container kubepods-besteffort-pod3ec6c052_4c37_45d2_94e6_d3178f9b2f81.slice. May 14 05:10:47.584721 kubelet[2831]: I0514 05:10:47.584657 2831 topology_manager.go:215] "Topology Admit Handler" podUID="ed1f9fc2-ec51-4f79-910d-6bea91cb775f" podNamespace="calico-system" podName="calico-node-jkvhl" May 14 05:10:47.594097 systemd[1]: Created slice kubepods-besteffort-poded1f9fc2_ec51_4f79_910d_6bea91cb775f.slice - libcontainer container kubepods-besteffort-poded1f9fc2_ec51_4f79_910d_6bea91cb775f.slice. 
May 14 05:10:47.596855 kubelet[2831]: I0514 05:10:47.596829 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3ec6c052-4c37-45d2-94e6-d3178f9b2f81-typha-certs\") pod \"calico-typha-7bc5459cbd-krcvp\" (UID: \"3ec6c052-4c37-45d2-94e6-d3178f9b2f81\") " pod="calico-system/calico-typha-7bc5459cbd-krcvp" May 14 05:10:47.597027 kubelet[2831]: I0514 05:10:47.597013 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed1f9fc2-ec51-4f79-910d-6bea91cb775f-tigera-ca-bundle\") pod \"calico-node-jkvhl\" (UID: \"ed1f9fc2-ec51-4f79-910d-6bea91cb775f\") " pod="calico-system/calico-node-jkvhl" May 14 05:10:47.597139 kubelet[2831]: I0514 05:10:47.597127 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ed1f9fc2-ec51-4f79-910d-6bea91cb775f-cni-net-dir\") pod \"calico-node-jkvhl\" (UID: \"ed1f9fc2-ec51-4f79-910d-6bea91cb775f\") " pod="calico-system/calico-node-jkvhl" May 14 05:10:47.597571 kubelet[2831]: I0514 05:10:47.597556 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ed1f9fc2-ec51-4f79-910d-6bea91cb775f-lib-modules\") pod \"calico-node-jkvhl\" (UID: \"ed1f9fc2-ec51-4f79-910d-6bea91cb775f\") " pod="calico-system/calico-node-jkvhl" May 14 05:10:47.597648 kubelet[2831]: I0514 05:10:47.597634 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n2jz\" (UniqueName: \"kubernetes.io/projected/ed1f9fc2-ec51-4f79-910d-6bea91cb775f-kube-api-access-8n2jz\") pod \"calico-node-jkvhl\" (UID: \"ed1f9fc2-ec51-4f79-910d-6bea91cb775f\") " pod="calico-system/calico-node-jkvhl" May 14 05:10:47.597728 kubelet[2831]: I0514 
05:10:47.597717 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ed1f9fc2-ec51-4f79-910d-6bea91cb775f-policysync\") pod \"calico-node-jkvhl\" (UID: \"ed1f9fc2-ec51-4f79-910d-6bea91cb775f\") " pod="calico-system/calico-node-jkvhl" May 14 05:10:47.597792 kubelet[2831]: I0514 05:10:47.597781 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ed1f9fc2-ec51-4f79-910d-6bea91cb775f-node-certs\") pod \"calico-node-jkvhl\" (UID: \"ed1f9fc2-ec51-4f79-910d-6bea91cb775f\") " pod="calico-system/calico-node-jkvhl" May 14 05:10:47.597848 kubelet[2831]: I0514 05:10:47.597838 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ed1f9fc2-ec51-4f79-910d-6bea91cb775f-var-lib-calico\") pod \"calico-node-jkvhl\" (UID: \"ed1f9fc2-ec51-4f79-910d-6bea91cb775f\") " pod="calico-system/calico-node-jkvhl" May 14 05:10:47.597903 kubelet[2831]: I0514 05:10:47.597893 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ed1f9fc2-ec51-4f79-910d-6bea91cb775f-cni-bin-dir\") pod \"calico-node-jkvhl\" (UID: \"ed1f9fc2-ec51-4f79-910d-6bea91cb775f\") " pod="calico-system/calico-node-jkvhl" May 14 05:10:47.598108 kubelet[2831]: I0514 05:10:47.597948 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ed1f9fc2-ec51-4f79-910d-6bea91cb775f-flexvol-driver-host\") pod \"calico-node-jkvhl\" (UID: \"ed1f9fc2-ec51-4f79-910d-6bea91cb775f\") " pod="calico-system/calico-node-jkvhl" May 14 05:10:47.598108 kubelet[2831]: I0514 05:10:47.597965 2831 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ed1f9fc2-ec51-4f79-910d-6bea91cb775f-cni-log-dir\") pod \"calico-node-jkvhl\" (UID: \"ed1f9fc2-ec51-4f79-910d-6bea91cb775f\") " pod="calico-system/calico-node-jkvhl" May 14 05:10:47.598108 kubelet[2831]: I0514 05:10:47.597978 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ed1f9fc2-ec51-4f79-910d-6bea91cb775f-var-run-calico\") pod \"calico-node-jkvhl\" (UID: \"ed1f9fc2-ec51-4f79-910d-6bea91cb775f\") " pod="calico-system/calico-node-jkvhl" May 14 05:10:47.598108 kubelet[2831]: I0514 05:10:47.597992 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsdnp\" (UniqueName: \"kubernetes.io/projected/3ec6c052-4c37-45d2-94e6-d3178f9b2f81-kube-api-access-nsdnp\") pod \"calico-typha-7bc5459cbd-krcvp\" (UID: \"3ec6c052-4c37-45d2-94e6-d3178f9b2f81\") " pod="calico-system/calico-typha-7bc5459cbd-krcvp" May 14 05:10:47.598108 kubelet[2831]: I0514 05:10:47.598009 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ed1f9fc2-ec51-4f79-910d-6bea91cb775f-xtables-lock\") pod \"calico-node-jkvhl\" (UID: \"ed1f9fc2-ec51-4f79-910d-6bea91cb775f\") " pod="calico-system/calico-node-jkvhl" May 14 05:10:47.598224 kubelet[2831]: I0514 05:10:47.598023 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ec6c052-4c37-45d2-94e6-d3178f9b2f81-tigera-ca-bundle\") pod \"calico-typha-7bc5459cbd-krcvp\" (UID: \"3ec6c052-4c37-45d2-94e6-d3178f9b2f81\") " pod="calico-system/calico-typha-7bc5459cbd-krcvp" May 14 05:10:47.699544 kubelet[2831]: E0514 05:10:47.699519 2831 driver-call.go:262] Failed to 
unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:47.699780 kubelet[2831]: W0514 05:10:47.699673 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:47.699780 kubelet[2831]: E0514 05:10:47.699693 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 05:10:47.699923 kubelet[2831]: E0514 05:10:47.699912 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:47.699978 kubelet[2831]: W0514 05:10:47.699968 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:47.700035 kubelet[2831]: E0514 05:10:47.700025 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 05:10:47.700246 kubelet[2831]: E0514 05:10:47.700224 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:47.700246 kubelet[2831]: W0514 05:10:47.700234 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:47.700337 kubelet[2831]: E0514 05:10:47.700320 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 05:10:47.700639 kubelet[2831]: E0514 05:10:47.700616 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:47.700639 kubelet[2831]: W0514 05:10:47.700627 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:47.700733 kubelet[2831]: E0514 05:10:47.700722 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 05:10:47.700943 kubelet[2831]: E0514 05:10:47.700921 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:47.700943 kubelet[2831]: W0514 05:10:47.700931 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:47.701052 kubelet[2831]: E0514 05:10:47.701015 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 05:10:47.702731 kubelet[2831]: E0514 05:10:47.701248 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:47.702731 kubelet[2831]: W0514 05:10:47.701256 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:47.702731 kubelet[2831]: E0514 05:10:47.701275 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 05:10:47.702731 kubelet[2831]: E0514 05:10:47.701433 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:47.702731 kubelet[2831]: W0514 05:10:47.701439 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:47.702731 kubelet[2831]: E0514 05:10:47.701448 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 05:10:47.702731 kubelet[2831]: E0514 05:10:47.701593 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:47.702731 kubelet[2831]: W0514 05:10:47.701600 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:47.702731 kubelet[2831]: E0514 05:10:47.701606 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 05:10:47.702731 kubelet[2831]: E0514 05:10:47.701813 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:47.702954 kubelet[2831]: W0514 05:10:47.701820 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:47.702954 kubelet[2831]: E0514 05:10:47.701827 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 05:10:47.702954 kubelet[2831]: E0514 05:10:47.702614 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:47.702954 kubelet[2831]: W0514 05:10:47.702622 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:47.702954 kubelet[2831]: E0514 05:10:47.702630 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 05:10:47.703919 kubelet[2831]: E0514 05:10:47.703878 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:47.703954 kubelet[2831]: W0514 05:10:47.703919 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:47.703954 kubelet[2831]: E0514 05:10:47.703946 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 05:10:47.766245 kubelet[2831]: E0514 05:10:47.766118 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:47.766245 kubelet[2831]: W0514 05:10:47.766137 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:47.766245 kubelet[2831]: E0514 05:10:47.766155 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 05:10:47.767009 kubelet[2831]: E0514 05:10:47.766972 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:47.767009 kubelet[2831]: W0514 05:10:47.766983 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:47.767009 kubelet[2831]: E0514 05:10:47.766992 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 05:10:47.831162 containerd[1598]: time="2025-05-14T05:10:47.831113152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7bc5459cbd-krcvp,Uid:3ec6c052-4c37-45d2-94e6-d3178f9b2f81,Namespace:calico-system,Attempt:0,}" May 14 05:10:47.899180 containerd[1598]: time="2025-05-14T05:10:47.899130685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jkvhl,Uid:ed1f9fc2-ec51-4f79-910d-6bea91cb775f,Namespace:calico-system,Attempt:0,}" May 14 05:10:48.129505 kubelet[2831]: I0514 05:10:48.129451 2831 topology_manager.go:215] "Topology Admit Handler" podUID="fcc7a085-0eea-465f-a625-a16939de0db1" podNamespace="calico-system" podName="csi-node-driver-7w57m" May 14 05:10:48.130402 kubelet[2831]: E0514 05:10:48.130155 2831 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7w57m" podUID="fcc7a085-0eea-465f-a625-a16939de0db1" May 14 05:10:48.192349 kubelet[2831]: E0514 05:10:48.192302 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:48.192349 kubelet[2831]: W0514 05:10:48.192329 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:48.192349 kubelet[2831]: E0514 05:10:48.192348 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 05:10:48.192518 kubelet[2831]: E0514 05:10:48.192510 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:48.192545 kubelet[2831]: W0514 05:10:48.192523 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:48.192545 kubelet[2831]: E0514 05:10:48.192535 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 05:10:48.192790 kubelet[2831]: E0514 05:10:48.192774 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:48.192790 kubelet[2831]: W0514 05:10:48.192786 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:48.192844 kubelet[2831]: E0514 05:10:48.192795 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 05:10:48.192966 kubelet[2831]: E0514 05:10:48.192948 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:48.192966 kubelet[2831]: W0514 05:10:48.192961 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:48.193047 kubelet[2831]: E0514 05:10:48.192971 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 05:10:48.193157 kubelet[2831]: E0514 05:10:48.193141 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:48.193157 kubelet[2831]: W0514 05:10:48.193153 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:48.193201 kubelet[2831]: E0514 05:10:48.193162 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 05:10:48.193427 kubelet[2831]: E0514 05:10:48.193403 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:48.193427 kubelet[2831]: W0514 05:10:48.193417 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:48.193470 kubelet[2831]: E0514 05:10:48.193427 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 05:10:48.193614 kubelet[2831]: E0514 05:10:48.193598 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:48.193614 kubelet[2831]: W0514 05:10:48.193610 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:48.193657 kubelet[2831]: E0514 05:10:48.193619 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 05:10:48.193808 kubelet[2831]: E0514 05:10:48.193792 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:48.193808 kubelet[2831]: W0514 05:10:48.193805 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:48.193860 kubelet[2831]: E0514 05:10:48.193815 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 05:10:48.194010 kubelet[2831]: E0514 05:10:48.193994 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:48.194010 kubelet[2831]: W0514 05:10:48.194005 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:48.194074 kubelet[2831]: E0514 05:10:48.194014 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 05:10:48.194177 kubelet[2831]: E0514 05:10:48.194161 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:48.194177 kubelet[2831]: W0514 05:10:48.194173 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:48.194237 kubelet[2831]: E0514 05:10:48.194182 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 05:10:48.194343 kubelet[2831]: E0514 05:10:48.194328 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:48.194343 kubelet[2831]: W0514 05:10:48.194340 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:48.194388 kubelet[2831]: E0514 05:10:48.194349 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 05:10:48.194506 kubelet[2831]: E0514 05:10:48.194492 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:48.194506 kubelet[2831]: W0514 05:10:48.194503 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:48.194570 kubelet[2831]: E0514 05:10:48.194511 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 05:10:48.194670 kubelet[2831]: E0514 05:10:48.194654 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:48.194670 kubelet[2831]: W0514 05:10:48.194666 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:48.194733 kubelet[2831]: E0514 05:10:48.194674 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 05:10:48.194842 kubelet[2831]: E0514 05:10:48.194827 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:48.194873 kubelet[2831]: W0514 05:10:48.194844 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:48.194873 kubelet[2831]: E0514 05:10:48.194853 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 05:10:48.195003 kubelet[2831]: E0514 05:10:48.194987 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:48.195003 kubelet[2831]: W0514 05:10:48.194998 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:48.195043 kubelet[2831]: E0514 05:10:48.195005 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 05:10:48.195185 kubelet[2831]: E0514 05:10:48.195170 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:48.195185 kubelet[2831]: W0514 05:10:48.195182 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:48.195239 kubelet[2831]: E0514 05:10:48.195191 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 05:10:48.195400 kubelet[2831]: E0514 05:10:48.195375 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:48.195400 kubelet[2831]: W0514 05:10:48.195386 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:48.195400 kubelet[2831]: E0514 05:10:48.195393 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 05:10:48.195552 kubelet[2831]: E0514 05:10:48.195538 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:48.195552 kubelet[2831]: W0514 05:10:48.195547 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:48.195607 kubelet[2831]: E0514 05:10:48.195554 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 05:10:48.195725 kubelet[2831]: E0514 05:10:48.195697 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:48.195725 kubelet[2831]: W0514 05:10:48.195722 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:48.195772 kubelet[2831]: E0514 05:10:48.195730 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 05:10:48.195885 kubelet[2831]: E0514 05:10:48.195871 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:48.195885 kubelet[2831]: W0514 05:10:48.195880 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:48.195928 kubelet[2831]: E0514 05:10:48.195887 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 05:10:48.201154 kubelet[2831]: E0514 05:10:48.201119 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:48.201154 kubelet[2831]: W0514 05:10:48.201131 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:48.201154 kubelet[2831]: E0514 05:10:48.201141 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 05:10:48.201244 kubelet[2831]: I0514 05:10:48.201165 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/fcc7a085-0eea-465f-a625-a16939de0db1-varrun\") pod \"csi-node-driver-7w57m\" (UID: \"fcc7a085-0eea-465f-a625-a16939de0db1\") " pod="calico-system/csi-node-driver-7w57m" May 14 05:10:48.201345 kubelet[2831]: E0514 05:10:48.201315 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:48.201345 kubelet[2831]: W0514 05:10:48.201334 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:48.201404 kubelet[2831]: E0514 05:10:48.201346 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 05:10:48.201404 kubelet[2831]: I0514 05:10:48.201359 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fcc7a085-0eea-465f-a625-a16939de0db1-registration-dir\") pod \"csi-node-driver-7w57m\" (UID: \"fcc7a085-0eea-465f-a625-a16939de0db1\") " pod="calico-system/csi-node-driver-7w57m" May 14 05:10:48.201555 kubelet[2831]: E0514 05:10:48.201531 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:48.201555 kubelet[2831]: W0514 05:10:48.201545 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:48.201555 kubelet[2831]: E0514 05:10:48.201559 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 05:10:48.201779 kubelet[2831]: E0514 05:10:48.201768 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:48.201779 kubelet[2831]: W0514 05:10:48.201776 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:48.201822 kubelet[2831]: E0514 05:10:48.201788 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 05:10:48.201965 kubelet[2831]: E0514 05:10:48.201948 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:48.201965 kubelet[2831]: W0514 05:10:48.201957 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:48.202018 kubelet[2831]: E0514 05:10:48.201968 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 05:10:48.202018 kubelet[2831]: I0514 05:10:48.201981 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fcc7a085-0eea-465f-a625-a16939de0db1-socket-dir\") pod \"csi-node-driver-7w57m\" (UID: \"fcc7a085-0eea-465f-a625-a16939de0db1\") " pod="calico-system/csi-node-driver-7w57m" May 14 05:10:48.202152 kubelet[2831]: E0514 05:10:48.202141 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:48.202152 kubelet[2831]: W0514 05:10:48.202150 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:48.202202 kubelet[2831]: E0514 05:10:48.202163 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 05:10:48.202202 kubelet[2831]: I0514 05:10:48.202176 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlvzq\" (UniqueName: \"kubernetes.io/projected/fcc7a085-0eea-465f-a625-a16939de0db1-kube-api-access-mlvzq\") pod \"csi-node-driver-7w57m\" (UID: \"fcc7a085-0eea-465f-a625-a16939de0db1\") " pod="calico-system/csi-node-driver-7w57m" May 14 05:10:48.202363 kubelet[2831]: E0514 05:10:48.202346 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:48.202363 kubelet[2831]: W0514 05:10:48.202358 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:48.202430 kubelet[2831]: E0514 05:10:48.202373 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 05:10:48.202430 kubelet[2831]: I0514 05:10:48.202386 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fcc7a085-0eea-465f-a625-a16939de0db1-kubelet-dir\") pod \"csi-node-driver-7w57m\" (UID: \"fcc7a085-0eea-465f-a625-a16939de0db1\") " pod="calico-system/csi-node-driver-7w57m" May 14 05:10:48.202562 kubelet[2831]: E0514 05:10:48.202548 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:48.202562 kubelet[2831]: W0514 05:10:48.202559 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:48.202684 kubelet[2831]: E0514 05:10:48.202610 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 05:10:48.202732 kubelet[2831]: E0514 05:10:48.202720 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:48.202732 kubelet[2831]: W0514 05:10:48.202728 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:48.202772 kubelet[2831]: E0514 05:10:48.202754 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 05:10:48.202892 kubelet[2831]: E0514 05:10:48.202880 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:48.202892 kubelet[2831]: W0514 05:10:48.202889 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:48.202939 kubelet[2831]: E0514 05:10:48.202901 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 05:10:48.203060 kubelet[2831]: E0514 05:10:48.203049 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:48.203060 kubelet[2831]: W0514 05:10:48.203058 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:48.203136 kubelet[2831]: E0514 05:10:48.203069 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 05:10:48.203222 kubelet[2831]: E0514 05:10:48.203212 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:48.203222 kubelet[2831]: W0514 05:10:48.203220 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:48.203275 kubelet[2831]: E0514 05:10:48.203228 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 05:10:48.203399 kubelet[2831]: E0514 05:10:48.203388 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:48.203399 kubelet[2831]: W0514 05:10:48.203396 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:48.203447 kubelet[2831]: E0514 05:10:48.203404 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 05:10:48.203557 kubelet[2831]: E0514 05:10:48.203546 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:48.203557 kubelet[2831]: W0514 05:10:48.203554 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:48.203599 kubelet[2831]: E0514 05:10:48.203561 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 05:10:48.203745 kubelet[2831]: E0514 05:10:48.203734 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:48.203745 kubelet[2831]: W0514 05:10:48.203742 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:48.203801 kubelet[2831]: E0514 05:10:48.203749 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 05:10:48.303105 kubelet[2831]: E0514 05:10:48.303058 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:48.303105 kubelet[2831]: W0514 05:10:48.303092 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:48.303105 kubelet[2831]: E0514 05:10:48.303117 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 05:10:48.304535 kubelet[2831]: E0514 05:10:48.303400 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:48.304535 kubelet[2831]: W0514 05:10:48.303414 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:48.304535 kubelet[2831]: E0514 05:10:48.303423 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 05:10:48.304535 kubelet[2831]: E0514 05:10:48.303613 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:48.304535 kubelet[2831]: W0514 05:10:48.303621 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:48.304535 kubelet[2831]: E0514 05:10:48.303642 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 05:10:48.304535 kubelet[2831]: E0514 05:10:48.303818 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:48.304535 kubelet[2831]: W0514 05:10:48.303825 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:48.304535 kubelet[2831]: E0514 05:10:48.303835 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 05:10:48.304535 kubelet[2831]: E0514 05:10:48.303965 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:48.304821 kubelet[2831]: W0514 05:10:48.303971 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:48.304821 kubelet[2831]: E0514 05:10:48.303978 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 05:10:48.304821 kubelet[2831]: E0514 05:10:48.304137 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:48.304821 kubelet[2831]: W0514 05:10:48.304142 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:48.304821 kubelet[2831]: E0514 05:10:48.304150 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 05:10:48.304821 kubelet[2831]: E0514 05:10:48.304262 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:48.304821 kubelet[2831]: W0514 05:10:48.304268 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:48.304821 kubelet[2831]: E0514 05:10:48.304275 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 05:10:48.304821 kubelet[2831]: E0514 05:10:48.304437 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:48.304821 kubelet[2831]: W0514 05:10:48.304444 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:48.305069 kubelet[2831]: E0514 05:10:48.304451 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 05:10:48.305256 kubelet[2831]: E0514 05:10:48.305117 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:48.305256 kubelet[2831]: W0514 05:10:48.305139 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:48.305256 kubelet[2831]: E0514 05:10:48.305170 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 05:10:48.306840 kubelet[2831]: E0514 05:10:48.305491 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:48.306840 kubelet[2831]: W0514 05:10:48.305501 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:48.306840 kubelet[2831]: E0514 05:10:48.305640 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 05:10:48.306840 kubelet[2831]: E0514 05:10:48.305815 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:48.306840 kubelet[2831]: W0514 05:10:48.305822 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:48.306840 kubelet[2831]: E0514 05:10:48.306012 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 05:10:48.306840 kubelet[2831]: E0514 05:10:48.306080 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:48.306840 kubelet[2831]: W0514 05:10:48.306086 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:48.306840 kubelet[2831]: E0514 05:10:48.306145 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 05:10:48.306840 kubelet[2831]: E0514 05:10:48.306353 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:48.307170 kubelet[2831]: W0514 05:10:48.306360 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:48.307170 kubelet[2831]: E0514 05:10:48.306432 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 05:10:48.307170 kubelet[2831]: E0514 05:10:48.306622 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:48.307170 kubelet[2831]: W0514 05:10:48.306630 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:48.307170 kubelet[2831]: E0514 05:10:48.306691 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 05:10:48.307170 kubelet[2831]: E0514 05:10:48.306886 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:48.307170 kubelet[2831]: W0514 05:10:48.306903 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:48.307170 kubelet[2831]: E0514 05:10:48.306933 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 05:10:48.307327 kubelet[2831]: E0514 05:10:48.307175 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:48.307327 kubelet[2831]: W0514 05:10:48.307182 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:48.307327 kubelet[2831]: E0514 05:10:48.307212 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 05:10:48.307472 kubelet[2831]: E0514 05:10:48.307409 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:48.307472 kubelet[2831]: W0514 05:10:48.307419 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:48.307472 kubelet[2831]: E0514 05:10:48.307461 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 05:10:48.308348 kubelet[2831]: E0514 05:10:48.307639 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:48.308348 kubelet[2831]: W0514 05:10:48.307648 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:48.308348 kubelet[2831]: E0514 05:10:48.307656 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 05:10:48.308348 kubelet[2831]: E0514 05:10:48.307934 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:48.308348 kubelet[2831]: W0514 05:10:48.307941 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:48.308348 kubelet[2831]: E0514 05:10:48.307954 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 05:10:48.308348 kubelet[2831]: E0514 05:10:48.308171 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:48.308348 kubelet[2831]: W0514 05:10:48.308178 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:48.308348 kubelet[2831]: E0514 05:10:48.308244 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 05:10:48.309590 kubelet[2831]: E0514 05:10:48.308430 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:48.309590 kubelet[2831]: W0514 05:10:48.308437 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:48.309590 kubelet[2831]: E0514 05:10:48.308519 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 05:10:48.309590 kubelet[2831]: E0514 05:10:48.308676 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:48.309590 kubelet[2831]: W0514 05:10:48.308683 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:48.309590 kubelet[2831]: E0514 05:10:48.308699 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 05:10:48.309590 kubelet[2831]: E0514 05:10:48.308945 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:48.309590 kubelet[2831]: W0514 05:10:48.308963 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:48.309590 kubelet[2831]: E0514 05:10:48.308979 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 05:10:48.309590 kubelet[2831]: E0514 05:10:48.309128 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:48.310244 kubelet[2831]: W0514 05:10:48.309136 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:48.310244 kubelet[2831]: E0514 05:10:48.309144 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 05:10:48.310244 kubelet[2831]: E0514 05:10:48.309290 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:48.310244 kubelet[2831]: W0514 05:10:48.309295 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:48.310244 kubelet[2831]: E0514 05:10:48.309302 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 05:10:48.327131 kubelet[2831]: E0514 05:10:48.327098 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:48.327131 kubelet[2831]: W0514 05:10:48.327120 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:48.327131 kubelet[2831]: E0514 05:10:48.327140 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 05:10:48.663299 containerd[1598]: time="2025-05-14T05:10:48.663254999Z" level=info msg="connecting to shim 8526bc6f1347e1c1a777ca2a91209070d1a4706d5db42bab2845136d73379ab2" address="unix:///run/containerd/s/acf88d78f61dce4070d0d2983e99d7282df32de635566d8c2b9ab93ecb4f4b29" namespace=k8s.io protocol=ttrpc version=3 May 14 05:10:48.685827 systemd[1]: Started cri-containerd-8526bc6f1347e1c1a777ca2a91209070d1a4706d5db42bab2845136d73379ab2.scope - libcontainer container 8526bc6f1347e1c1a777ca2a91209070d1a4706d5db42bab2845136d73379ab2. 
May 14 05:10:48.794318 containerd[1598]: time="2025-05-14T05:10:48.794271005Z" level=info msg="connecting to shim 9150f6ea102a29eaf8f2d6b6dda0255f93cbaa9525299117d854a6ee4ce3eaec" address="unix:///run/containerd/s/ea78cd50e093efa2314659b57e043fe016a4b84426296e3e7faceb1f481446a5" namespace=k8s.io protocol=ttrpc version=3 May 14 05:10:48.830236 containerd[1598]: time="2025-05-14T05:10:48.830190611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7bc5459cbd-krcvp,Uid:3ec6c052-4c37-45d2-94e6-d3178f9b2f81,Namespace:calico-system,Attempt:0,} returns sandbox id \"8526bc6f1347e1c1a777ca2a91209070d1a4706d5db42bab2845136d73379ab2\"" May 14 05:10:48.831967 containerd[1598]: time="2025-05-14T05:10:48.831931433Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 14 05:10:48.837841 systemd[1]: Started cri-containerd-9150f6ea102a29eaf8f2d6b6dda0255f93cbaa9525299117d854a6ee4ce3eaec.scope - libcontainer container 9150f6ea102a29eaf8f2d6b6dda0255f93cbaa9525299117d854a6ee4ce3eaec. 
May 14 05:10:48.952021 containerd[1598]: time="2025-05-14T05:10:48.951874279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jkvhl,Uid:ed1f9fc2-ec51-4f79-910d-6bea91cb775f,Namespace:calico-system,Attempt:0,} returns sandbox id \"9150f6ea102a29eaf8f2d6b6dda0255f93cbaa9525299117d854a6ee4ce3eaec\"" May 14 05:10:49.461246 kubelet[2831]: E0514 05:10:49.461200 2831 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7w57m" podUID="fcc7a085-0eea-465f-a625-a16939de0db1" May 14 05:10:51.303029 containerd[1598]: time="2025-05-14T05:10:51.302950438Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:10:51.303913 containerd[1598]: time="2025-05-14T05:10:51.303890202Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870" May 14 05:10:51.304994 containerd[1598]: time="2025-05-14T05:10:51.304939914Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:10:51.307156 containerd[1598]: time="2025-05-14T05:10:51.307116565Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:10:51.307675 containerd[1598]: time="2025-05-14T05:10:51.307636847Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest 
\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 2.475645602s" May 14 05:10:51.307675 containerd[1598]: time="2025-05-14T05:10:51.307671062Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\"" May 14 05:10:51.308583 containerd[1598]: time="2025-05-14T05:10:51.308446275Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 14 05:10:51.315377 containerd[1598]: time="2025-05-14T05:10:51.315303033Z" level=info msg="CreateContainer within sandbox \"8526bc6f1347e1c1a777ca2a91209070d1a4706d5db42bab2845136d73379ab2\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 14 05:10:51.323286 containerd[1598]: time="2025-05-14T05:10:51.323251032Z" level=info msg="Container 6377a252762c862cd2a9f7469b3c55b75d88f0c421a581386e1a4bc284e360ab: CDI devices from CRI Config.CDIDevices: []" May 14 05:10:51.332124 containerd[1598]: time="2025-05-14T05:10:51.332083300Z" level=info msg="CreateContainer within sandbox \"8526bc6f1347e1c1a777ca2a91209070d1a4706d5db42bab2845136d73379ab2\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"6377a252762c862cd2a9f7469b3c55b75d88f0c421a581386e1a4bc284e360ab\"" May 14 05:10:51.332644 containerd[1598]: time="2025-05-14T05:10:51.332591781Z" level=info msg="StartContainer for \"6377a252762c862cd2a9f7469b3c55b75d88f0c421a581386e1a4bc284e360ab\"" May 14 05:10:51.333757 containerd[1598]: time="2025-05-14T05:10:51.333724028Z" level=info msg="connecting to shim 6377a252762c862cd2a9f7469b3c55b75d88f0c421a581386e1a4bc284e360ab" address="unix:///run/containerd/s/acf88d78f61dce4070d0d2983e99d7282df32de635566d8c2b9ab93ecb4f4b29" protocol=ttrpc version=3 May 14 05:10:51.362820 systemd[1]: Started cri-containerd-6377a252762c862cd2a9f7469b3c55b75d88f0c421a581386e1a4bc284e360ab.scope - libcontainer 
container 6377a252762c862cd2a9f7469b3c55b75d88f0c421a581386e1a4bc284e360ab. May 14 05:10:51.437899 containerd[1598]: time="2025-05-14T05:10:51.437855886Z" level=info msg="StartContainer for \"6377a252762c862cd2a9f7469b3c55b75d88f0c421a581386e1a4bc284e360ab\" returns successfully" May 14 05:10:51.461569 kubelet[2831]: E0514 05:10:51.461517 2831 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7w57m" podUID="fcc7a085-0eea-465f-a625-a16939de0db1" May 14 05:10:51.620910 kubelet[2831]: E0514 05:10:51.620884 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:51.620910 kubelet[2831]: W0514 05:10:51.620905 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:51.621073 kubelet[2831]: E0514 05:10:51.620926 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 05:10:51.621118 kubelet[2831]: E0514 05:10:51.621101 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:51.621118 kubelet[2831]: W0514 05:10:51.621114 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:51.621166 kubelet[2831]: E0514 05:10:51.621124 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 05:10:51.621324 kubelet[2831]: E0514 05:10:51.621297 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:51.621324 kubelet[2831]: W0514 05:10:51.621309 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:51.621324 kubelet[2831]: E0514 05:10:51.621318 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 05:10:51.621484 kubelet[2831]: E0514 05:10:51.621468 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:51.621484 kubelet[2831]: W0514 05:10:51.621479 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:51.621532 kubelet[2831]: E0514 05:10:51.621486 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 05:10:51.621658 kubelet[2831]: E0514 05:10:51.621630 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:51.621658 kubelet[2831]: W0514 05:10:51.621639 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:51.621658 kubelet[2831]: E0514 05:10:51.621647 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 05:10:51.621873 kubelet[2831]: E0514 05:10:51.621857 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:51.621873 kubelet[2831]: W0514 05:10:51.621866 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:51.621873 kubelet[2831]: E0514 05:10:51.621874 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 05:10:51.622045 kubelet[2831]: E0514 05:10:51.622029 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:51.622045 kubelet[2831]: W0514 05:10:51.622039 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:51.622098 kubelet[2831]: E0514 05:10:51.622046 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 05:10:51.622213 kubelet[2831]: E0514 05:10:51.622197 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:51.622213 kubelet[2831]: W0514 05:10:51.622206 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:51.622213 kubelet[2831]: E0514 05:10:51.622213 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 05:10:51.622399 kubelet[2831]: E0514 05:10:51.622382 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:51.622399 kubelet[2831]: W0514 05:10:51.622392 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:51.622399 kubelet[2831]: E0514 05:10:51.622400 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 05:10:51.622559 kubelet[2831]: E0514 05:10:51.622542 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:51.622559 kubelet[2831]: W0514 05:10:51.622552 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:51.622559 kubelet[2831]: E0514 05:10:51.622559 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 05:10:51.622758 kubelet[2831]: E0514 05:10:51.622740 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:51.622758 kubelet[2831]: W0514 05:10:51.622751 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:51.622758 kubelet[2831]: E0514 05:10:51.622759 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 05:10:51.622916 kubelet[2831]: E0514 05:10:51.622898 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:51.622916 kubelet[2831]: W0514 05:10:51.622908 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:51.622916 kubelet[2831]: E0514 05:10:51.622915 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 05:10:51.635420 kubelet[2831]: E0514 05:10:51.635405 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:51.635420 kubelet[2831]: W0514 05:10:51.635415 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:51.635534 kubelet[2831]: E0514 05:10:51.635422 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 05:10:51.912822 kubelet[2831]: I0514 05:10:51.912533 2831 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7bc5459cbd-krcvp" podStartSLOduration=2.435585706 podStartE2EDuration="4.912513251s" podCreationTimestamp="2025-05-14 05:10:47 +0000 UTC" firstStartedPulling="2025-05-14 05:10:48.831381542 +0000 UTC m=+23.449303735" lastFinishedPulling="2025-05-14 05:10:51.308309087 +0000 UTC m=+25.926231280" observedRunningTime="2025-05-14 05:10:51.911889994 +0000 UTC m=+26.529812187" watchObservedRunningTime="2025-05-14 05:10:51.912513251 +0000 UTC m=+26.530435444" May 14 05:10:52.532096 kubelet[2831]: I0514 05:10:52.532057 2831 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 05:10:52.630332 kubelet[2831]: E0514 05:10:52.630302 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:52.630332 kubelet[2831]: W0514 05:10:52.630318 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:52.630332 kubelet[2831]: E0514 05:10:52.630335 2831 
plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 05:10:52.630515 kubelet[2831]: E0514 05:10:52.630501 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:52.630515 kubelet[2831]: W0514 05:10:52.630510 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:52.630563 kubelet[2831]: E0514 05:10:52.630517 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 05:10:52.630749 kubelet[2831]: E0514 05:10:52.630726 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:52.630749 kubelet[2831]: W0514 05:10:52.630736 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:52.630749 kubelet[2831]: E0514 05:10:52.630744 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 05:10:52.643565 kubelet[2831]: E0514 05:10:52.643531 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:52.643565 kubelet[2831]: W0514 05:10:52.643539 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:52.643565 kubelet[2831]: E0514 05:10:52.643553 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 05:10:52.643766 kubelet[2831]: E0514 05:10:52.643737 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 05:10:52.643766 kubelet[2831]: W0514 05:10:52.643749 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 05:10:52.643766 kubelet[2831]: E0514 05:10:52.643756 2831 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 05:10:53.461427 kubelet[2831]: E0514 05:10:53.461382 2831 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7w57m" podUID="fcc7a085-0eea-465f-a625-a16939de0db1" May 14 05:10:55.405505 containerd[1598]: time="2025-05-14T05:10:55.405445536Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:10:55.436885 containerd[1598]: time="2025-05-14T05:10:55.436846651Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937" May 14 05:10:55.438942 containerd[1598]: time="2025-05-14T05:10:55.438905123Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:10:55.442253 containerd[1598]: time="2025-05-14T05:10:55.442185438Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:10:55.443950 containerd[1598]: time="2025-05-14T05:10:55.443360032Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 4.134877628s" May 14 05:10:55.443950 containerd[1598]: time="2025-05-14T05:10:55.443413764Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" May 14 05:10:55.446980 containerd[1598]: time="2025-05-14T05:10:55.446944120Z" level=info msg="CreateContainer within sandbox \"9150f6ea102a29eaf8f2d6b6dda0255f93cbaa9525299117d854a6ee4ce3eaec\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 14 05:10:55.461619 kubelet[2831]: E0514 05:10:55.461576 2831 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7w57m" podUID="fcc7a085-0eea-465f-a625-a16939de0db1" May 14 05:10:55.464466 containerd[1598]: time="2025-05-14T05:10:55.464417736Z" level=info msg="Container 552c6216f21ed6d845aeea884f8341fcda6914c0bc103cb286b7a0c49aafda71: CDI devices from CRI Config.CDIDevices: []" May 14 05:10:55.475394 containerd[1598]: time="2025-05-14T05:10:55.475353876Z" level=info msg="CreateContainer within sandbox \"9150f6ea102a29eaf8f2d6b6dda0255f93cbaa9525299117d854a6ee4ce3eaec\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"552c6216f21ed6d845aeea884f8341fcda6914c0bc103cb286b7a0c49aafda71\"" May 14 05:10:55.475971 containerd[1598]: time="2025-05-14T05:10:55.475839923Z" level=info msg="StartContainer for \"552c6216f21ed6d845aeea884f8341fcda6914c0bc103cb286b7a0c49aafda71\"" May 14 05:10:55.477537 containerd[1598]: time="2025-05-14T05:10:55.477505002Z" level=info msg="connecting to shim 552c6216f21ed6d845aeea884f8341fcda6914c0bc103cb286b7a0c49aafda71" address="unix:///run/containerd/s/ea78cd50e093efa2314659b57e043fe016a4b84426296e3e7faceb1f481446a5" protocol=ttrpc version=3 May 14 05:10:55.497856 systemd[1]: Started cri-containerd-552c6216f21ed6d845aeea884f8341fcda6914c0bc103cb286b7a0c49aafda71.scope - libcontainer container 
552c6216f21ed6d845aeea884f8341fcda6914c0bc103cb286b7a0c49aafda71. May 14 05:10:55.545894 systemd[1]: Started sshd@7-10.0.0.84:22-10.0.0.1:59878.service - OpenSSH per-connection server daemon (10.0.0.1:59878). May 14 05:10:55.576396 systemd[1]: cri-containerd-552c6216f21ed6d845aeea884f8341fcda6914c0bc103cb286b7a0c49aafda71.scope: Deactivated successfully. May 14 05:10:55.579581 containerd[1598]: time="2025-05-14T05:10:55.579540186Z" level=info msg="TaskExit event in podsandbox handler container_id:\"552c6216f21ed6d845aeea884f8341fcda6914c0bc103cb286b7a0c49aafda71\" id:\"552c6216f21ed6d845aeea884f8341fcda6914c0bc103cb286b7a0c49aafda71\" pid:3560 exited_at:{seconds:1747199455 nanos:578838483}" May 14 05:10:55.840216 sshd[3576]: Accepted publickey for core from 10.0.0.1 port 59878 ssh2: RSA SHA256:9779yoEmBEYtokxLadky4y4rhX8tiwkjz8vWSk0iWXU May 14 05:10:55.841673 sshd-session[3576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 05:10:55.842487 containerd[1598]: time="2025-05-14T05:10:55.842447299Z" level=info msg="received exit event container_id:\"552c6216f21ed6d845aeea884f8341fcda6914c0bc103cb286b7a0c49aafda71\" id:\"552c6216f21ed6d845aeea884f8341fcda6914c0bc103cb286b7a0c49aafda71\" pid:3560 exited_at:{seconds:1747199455 nanos:578838483}" May 14 05:10:55.846355 containerd[1598]: time="2025-05-14T05:10:55.846301115Z" level=info msg="StartContainer for \"552c6216f21ed6d845aeea884f8341fcda6914c0bc103cb286b7a0c49aafda71\" returns successfully" May 14 05:10:55.849542 systemd-logind[1573]: New session 8 of user core. May 14 05:10:55.856467 systemd[1]: Started session-8.scope - Session 8 of User core. May 14 05:10:55.873315 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-552c6216f21ed6d845aeea884f8341fcda6914c0bc103cb286b7a0c49aafda71-rootfs.mount: Deactivated successfully. 
May 14 05:10:56.068630 sshd[3589]: Connection closed by 10.0.0.1 port 59878 May 14 05:10:56.068799 sshd-session[3576]: pam_unix(sshd:session): session closed for user core May 14 05:10:56.073934 systemd[1]: sshd@7-10.0.0.84:22-10.0.0.1:59878.service: Deactivated successfully. May 14 05:10:56.076060 systemd[1]: session-8.scope: Deactivated successfully. May 14 05:10:56.077065 systemd-logind[1573]: Session 8 logged out. Waiting for processes to exit. May 14 05:10:56.078334 systemd-logind[1573]: Removed session 8. May 14 05:10:56.854597 containerd[1598]: time="2025-05-14T05:10:56.854547422Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 14 05:10:57.460645 kubelet[2831]: E0514 05:10:57.460584 2831 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7w57m" podUID="fcc7a085-0eea-465f-a625-a16939de0db1" May 14 05:10:59.460847 kubelet[2831]: E0514 05:10:59.460791 2831 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7w57m" podUID="fcc7a085-0eea-465f-a625-a16939de0db1" May 14 05:11:01.088779 systemd[1]: Started sshd@8-10.0.0.84:22-10.0.0.1:59672.service - OpenSSH per-connection server daemon (10.0.0.1:59672). May 14 05:11:01.140293 sshd[3625]: Accepted publickey for core from 10.0.0.1 port 59672 ssh2: RSA SHA256:9779yoEmBEYtokxLadky4y4rhX8tiwkjz8vWSk0iWXU May 14 05:11:01.142282 sshd-session[3625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 05:11:01.147303 systemd-logind[1573]: New session 9 of user core. May 14 05:11:01.152840 systemd[1]: Started session-9.scope - Session 9 of User core. 
May 14 05:11:01.283751 sshd[3627]: Connection closed by 10.0.0.1 port 59672 May 14 05:11:01.283750 sshd-session[3625]: pam_unix(sshd:session): session closed for user core May 14 05:11:01.288338 systemd[1]: sshd@8-10.0.0.84:22-10.0.0.1:59672.service: Deactivated successfully. May 14 05:11:01.290896 systemd[1]: session-9.scope: Deactivated successfully. May 14 05:11:01.292475 systemd-logind[1573]: Session 9 logged out. Waiting for processes to exit. May 14 05:11:01.293827 systemd-logind[1573]: Removed session 9. May 14 05:11:01.461265 kubelet[2831]: E0514 05:11:01.461206 2831 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7w57m" podUID="fcc7a085-0eea-465f-a625-a16939de0db1" May 14 05:11:01.964851 kubelet[2831]: I0514 05:11:01.964802 2831 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 05:11:02.091899 containerd[1598]: time="2025-05-14T05:11:02.091845362Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:11:02.093699 containerd[1598]: time="2025-05-14T05:11:02.093640079Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" May 14 05:11:02.095393 containerd[1598]: time="2025-05-14T05:11:02.095354044Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:11:02.097696 containerd[1598]: time="2025-05-14T05:11:02.097663321Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 
05:11:02.098668 containerd[1598]: time="2025-05-14T05:11:02.098611255Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 5.244015623s" May 14 05:11:02.098729 containerd[1598]: time="2025-05-14T05:11:02.098669233Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" May 14 05:11:02.101044 containerd[1598]: time="2025-05-14T05:11:02.101003186Z" level=info msg="CreateContainer within sandbox \"9150f6ea102a29eaf8f2d6b6dda0255f93cbaa9525299117d854a6ee4ce3eaec\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 14 05:11:02.110781 containerd[1598]: time="2025-05-14T05:11:02.110730792Z" level=info msg="Container 8a8a821d83df7a094fc55e0efdb222fb53fb69e835fcdd614f8c12c7ce69a54f: CDI devices from CRI Config.CDIDevices: []" May 14 05:11:02.122654 containerd[1598]: time="2025-05-14T05:11:02.122600140Z" level=info msg="CreateContainer within sandbox \"9150f6ea102a29eaf8f2d6b6dda0255f93cbaa9525299117d854a6ee4ce3eaec\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8a8a821d83df7a094fc55e0efdb222fb53fb69e835fcdd614f8c12c7ce69a54f\"" May 14 05:11:02.123109 containerd[1598]: time="2025-05-14T05:11:02.123074693Z" level=info msg="StartContainer for \"8a8a821d83df7a094fc55e0efdb222fb53fb69e835fcdd614f8c12c7ce69a54f\"" May 14 05:11:02.124379 containerd[1598]: time="2025-05-14T05:11:02.124350283Z" level=info msg="connecting to shim 8a8a821d83df7a094fc55e0efdb222fb53fb69e835fcdd614f8c12c7ce69a54f" address="unix:///run/containerd/s/ea78cd50e093efa2314659b57e043fe016a4b84426296e3e7faceb1f481446a5" protocol=ttrpc version=3 May 14 05:11:02.145834 
systemd[1]: Started cri-containerd-8a8a821d83df7a094fc55e0efdb222fb53fb69e835fcdd614f8c12c7ce69a54f.scope - libcontainer container 8a8a821d83df7a094fc55e0efdb222fb53fb69e835fcdd614f8c12c7ce69a54f. May 14 05:11:02.455986 containerd[1598]: time="2025-05-14T05:11:02.455944458Z" level=info msg="StartContainer for \"8a8a821d83df7a094fc55e0efdb222fb53fb69e835fcdd614f8c12c7ce69a54f\" returns successfully" May 14 05:11:03.375259 containerd[1598]: time="2025-05-14T05:11:03.375216061Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 05:11:03.377782 systemd[1]: cri-containerd-8a8a821d83df7a094fc55e0efdb222fb53fb69e835fcdd614f8c12c7ce69a54f.scope: Deactivated successfully. May 14 05:11:03.378137 systemd[1]: cri-containerd-8a8a821d83df7a094fc55e0efdb222fb53fb69e835fcdd614f8c12c7ce69a54f.scope: Consumed 492ms CPU time, 158.1M memory peak, 4K read from disk, 154M written to disk. May 14 05:11:03.379288 containerd[1598]: time="2025-05-14T05:11:03.379235433Z" level=info msg="received exit event container_id:\"8a8a821d83df7a094fc55e0efdb222fb53fb69e835fcdd614f8c12c7ce69a54f\" id:\"8a8a821d83df7a094fc55e0efdb222fb53fb69e835fcdd614f8c12c7ce69a54f\" pid:3659 exited_at:{seconds:1747199463 nanos:378954796}" May 14 05:11:03.379288 containerd[1598]: time="2025-05-14T05:11:03.379267955Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8a8a821d83df7a094fc55e0efdb222fb53fb69e835fcdd614f8c12c7ce69a54f\" id:\"8a8a821d83df7a094fc55e0efdb222fb53fb69e835fcdd614f8c12c7ce69a54f\" pid:3659 exited_at:{seconds:1747199463 nanos:378954796}" May 14 05:11:03.399327 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8a8a821d83df7a094fc55e0efdb222fb53fb69e835fcdd614f8c12c7ce69a54f-rootfs.mount: Deactivated successfully. 
May 14 05:11:03.468117 systemd[1]: Created slice kubepods-besteffort-podfcc7a085_0eea_465f_a625_a16939de0db1.slice - libcontainer container kubepods-besteffort-podfcc7a085_0eea_465f_a625_a16939de0db1.slice. May 14 05:11:03.476271 containerd[1598]: time="2025-05-14T05:11:03.476004047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7w57m,Uid:fcc7a085-0eea-465f-a625-a16939de0db1,Namespace:calico-system,Attempt:0,}" May 14 05:11:03.476448 kubelet[2831]: I0514 05:11:03.476263 2831 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 14 05:11:03.549356 kubelet[2831]: I0514 05:11:03.548837 2831 topology_manager.go:215] "Topology Admit Handler" podUID="1d5291ce-116c-4328-abd5-d097a4637b78" podNamespace="kube-system" podName="coredns-7db6d8ff4d-fmcx7" May 14 05:11:03.552173 kubelet[2831]: I0514 05:11:03.552081 2831 topology_manager.go:215] "Topology Admit Handler" podUID="518875d2-b6d3-47f4-be27-8df3a2cdbf54" podNamespace="calico-apiserver" podName="calico-apiserver-845d69865c-7txs2" May 14 05:11:03.553853 kubelet[2831]: I0514 05:11:03.553817 2831 topology_manager.go:215] "Topology Admit Handler" podUID="c417c481-47ef-43ad-b365-2e74a4c60b8f" podNamespace="kube-system" podName="coredns-7db6d8ff4d-xz28m" May 14 05:11:03.554583 kubelet[2831]: I0514 05:11:03.554564 2831 topology_manager.go:215] "Topology Admit Handler" podUID="36ac7d47-9f28-487c-9914-f18cfeea4ed5" podNamespace="calico-system" podName="calico-kube-controllers-79ccf86b7c-hxw7c" May 14 05:11:03.555442 kubelet[2831]: I0514 05:11:03.555425 2831 topology_manager.go:215] "Topology Admit Handler" podUID="31e9255f-87a3-4bf3-a184-3d845602f03d" podNamespace="calico-apiserver" podName="calico-apiserver-845d69865c-92j47" May 14 05:11:03.556520 kubelet[2831]: I0514 05:11:03.556501 2831 topology_manager.go:215] "Topology Admit Handler" podUID="8e850082-69fe-43b6-bdc2-a1e6e4c944d0" podNamespace="calico-apiserver" podName="calico-apiserver-6f69d9dff-p9fm4" May 14 
05:11:03.563616 systemd[1]: Created slice kubepods-burstable-pod1d5291ce_116c_4328_abd5_d097a4637b78.slice - libcontainer container kubepods-burstable-pod1d5291ce_116c_4328_abd5_d097a4637b78.slice. May 14 05:11:03.573328 systemd[1]: Created slice kubepods-besteffort-pod518875d2_b6d3_47f4_be27_8df3a2cdbf54.slice - libcontainer container kubepods-besteffort-pod518875d2_b6d3_47f4_be27_8df3a2cdbf54.slice. May 14 05:11:03.578645 systemd[1]: Created slice kubepods-burstable-podc417c481_47ef_43ad_b365_2e74a4c60b8f.slice - libcontainer container kubepods-burstable-podc417c481_47ef_43ad_b365_2e74a4c60b8f.slice. May 14 05:11:03.585893 systemd[1]: Created slice kubepods-besteffort-pod36ac7d47_9f28_487c_9914_f18cfeea4ed5.slice - libcontainer container kubepods-besteffort-pod36ac7d47_9f28_487c_9914_f18cfeea4ed5.slice. May 14 05:11:03.591985 systemd[1]: Created slice kubepods-besteffort-pod8e850082_69fe_43b6_bdc2_a1e6e4c944d0.slice - libcontainer container kubepods-besteffort-pod8e850082_69fe_43b6_bdc2_a1e6e4c944d0.slice. May 14 05:11:03.598540 systemd[1]: Created slice kubepods-besteffort-pod31e9255f_87a3_4bf3_a184_3d845602f03d.slice - libcontainer container kubepods-besteffort-pod31e9255f_87a3_4bf3_a184_3d845602f03d.slice. 
May 14 05:11:03.614143 kubelet[2831]: I0514 05:11:03.614113 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1d5291ce-116c-4328-abd5-d097a4637b78-config-volume\") pod \"coredns-7db6d8ff4d-fmcx7\" (UID: \"1d5291ce-116c-4328-abd5-d097a4637b78\") " pod="kube-system/coredns-7db6d8ff4d-fmcx7" May 14 05:11:03.614426 kubelet[2831]: I0514 05:11:03.614294 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4bzx\" (UniqueName: \"kubernetes.io/projected/1d5291ce-116c-4328-abd5-d097a4637b78-kube-api-access-c4bzx\") pod \"coredns-7db6d8ff4d-fmcx7\" (UID: \"1d5291ce-116c-4328-abd5-d097a4637b78\") " pod="kube-system/coredns-7db6d8ff4d-fmcx7" May 14 05:11:03.614426 kubelet[2831]: I0514 05:11:03.614351 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ks2d8\" (UniqueName: \"kubernetes.io/projected/518875d2-b6d3-47f4-be27-8df3a2cdbf54-kube-api-access-ks2d8\") pod \"calico-apiserver-845d69865c-7txs2\" (UID: \"518875d2-b6d3-47f4-be27-8df3a2cdbf54\") " pod="calico-apiserver/calico-apiserver-845d69865c-7txs2" May 14 05:11:03.614426 kubelet[2831]: I0514 05:11:03.614377 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dngdg\" (UniqueName: \"kubernetes.io/projected/36ac7d47-9f28-487c-9914-f18cfeea4ed5-kube-api-access-dngdg\") pod \"calico-kube-controllers-79ccf86b7c-hxw7c\" (UID: \"36ac7d47-9f28-487c-9914-f18cfeea4ed5\") " pod="calico-system/calico-kube-controllers-79ccf86b7c-hxw7c" May 14 05:11:03.614426 kubelet[2831]: I0514 05:11:03.614397 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/31e9255f-87a3-4bf3-a184-3d845602f03d-calico-apiserver-certs\") pod 
\"calico-apiserver-845d69865c-92j47\" (UID: \"31e9255f-87a3-4bf3-a184-3d845602f03d\") " pod="calico-apiserver/calico-apiserver-845d69865c-92j47" May 14 05:11:03.614619 kubelet[2831]: I0514 05:11:03.614411 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8e850082-69fe-43b6-bdc2-a1e6e4c944d0-calico-apiserver-certs\") pod \"calico-apiserver-6f69d9dff-p9fm4\" (UID: \"8e850082-69fe-43b6-bdc2-a1e6e4c944d0\") " pod="calico-apiserver/calico-apiserver-6f69d9dff-p9fm4" May 14 05:11:03.614619 kubelet[2831]: I0514 05:11:03.614581 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8lbx\" (UniqueName: \"kubernetes.io/projected/c417c481-47ef-43ad-b365-2e74a4c60b8f-kube-api-access-d8lbx\") pod \"coredns-7db6d8ff4d-xz28m\" (UID: \"c417c481-47ef-43ad-b365-2e74a4c60b8f\") " pod="kube-system/coredns-7db6d8ff4d-xz28m" May 14 05:11:03.614619 kubelet[2831]: I0514 05:11:03.614596 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c417c481-47ef-43ad-b365-2e74a4c60b8f-config-volume\") pod \"coredns-7db6d8ff4d-xz28m\" (UID: \"c417c481-47ef-43ad-b365-2e74a4c60b8f\") " pod="kube-system/coredns-7db6d8ff4d-xz28m" May 14 05:11:03.614802 kubelet[2831]: I0514 05:11:03.614738 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/36ac7d47-9f28-487c-9914-f18cfeea4ed5-tigera-ca-bundle\") pod \"calico-kube-controllers-79ccf86b7c-hxw7c\" (UID: \"36ac7d47-9f28-487c-9914-f18cfeea4ed5\") " pod="calico-system/calico-kube-controllers-79ccf86b7c-hxw7c" May 14 05:11:03.614802 kubelet[2831]: I0514 05:11:03.614764 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-52wsf\" (UniqueName: \"kubernetes.io/projected/8e850082-69fe-43b6-bdc2-a1e6e4c944d0-kube-api-access-52wsf\") pod \"calico-apiserver-6f69d9dff-p9fm4\" (UID: \"8e850082-69fe-43b6-bdc2-a1e6e4c944d0\") " pod="calico-apiserver/calico-apiserver-6f69d9dff-p9fm4" May 14 05:11:03.614802 kubelet[2831]: I0514 05:11:03.614778 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/518875d2-b6d3-47f4-be27-8df3a2cdbf54-calico-apiserver-certs\") pod \"calico-apiserver-845d69865c-7txs2\" (UID: \"518875d2-b6d3-47f4-be27-8df3a2cdbf54\") " pod="calico-apiserver/calico-apiserver-845d69865c-7txs2" May 14 05:11:03.614933 kubelet[2831]: I0514 05:11:03.614922 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxnv8\" (UniqueName: \"kubernetes.io/projected/31e9255f-87a3-4bf3-a184-3d845602f03d-kube-api-access-vxnv8\") pod \"calico-apiserver-845d69865c-92j47\" (UID: \"31e9255f-87a3-4bf3-a184-3d845602f03d\") " pod="calico-apiserver/calico-apiserver-845d69865c-92j47" May 14 05:11:03.628432 containerd[1598]: time="2025-05-14T05:11:03.628336065Z" level=error msg="Failed to destroy network for sandbox \"2017aca32797f20ce30108f7bdf1cbe287185bbf6d854ece8f7d0d8cde50bda6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 05:11:03.630813 systemd[1]: run-netns-cni\x2dd810730f\x2d6edf\x2dfcd8\x2da8d3\x2d22fd4ccc7323.mount: Deactivated successfully. 
May 14 05:11:03.631474 containerd[1598]: time="2025-05-14T05:11:03.631330799Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7w57m,Uid:fcc7a085-0eea-465f-a625-a16939de0db1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2017aca32797f20ce30108f7bdf1cbe287185bbf6d854ece8f7d0d8cde50bda6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 05:11:03.631608 kubelet[2831]: E0514 05:11:03.631552 2831 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2017aca32797f20ce30108f7bdf1cbe287185bbf6d854ece8f7d0d8cde50bda6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 05:11:03.631656 kubelet[2831]: E0514 05:11:03.631617 2831 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2017aca32797f20ce30108f7bdf1cbe287185bbf6d854ece8f7d0d8cde50bda6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7w57m" May 14 05:11:03.631656 kubelet[2831]: E0514 05:11:03.631636 2831 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2017aca32797f20ce30108f7bdf1cbe287185bbf6d854ece8f7d0d8cde50bda6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-7w57m" May 14 05:11:03.631764 kubelet[2831]: E0514 05:11:03.631693 2831 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7w57m_calico-system(fcc7a085-0eea-465f-a625-a16939de0db1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7w57m_calico-system(fcc7a085-0eea-465f-a625-a16939de0db1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2017aca32797f20ce30108f7bdf1cbe287185bbf6d854ece8f7d0d8cde50bda6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7w57m" podUID="fcc7a085-0eea-465f-a625-a16939de0db1" May 14 05:11:03.870202 containerd[1598]: time="2025-05-14T05:11:03.870146379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fmcx7,Uid:1d5291ce-116c-4328-abd5-d097a4637b78,Namespace:kube-system,Attempt:0,}" May 14 05:11:03.872469 containerd[1598]: time="2025-05-14T05:11:03.872421850Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 14 05:11:03.881610 containerd[1598]: time="2025-05-14T05:11:03.881454696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-845d69865c-7txs2,Uid:518875d2-b6d3-47f4-be27-8df3a2cdbf54,Namespace:calico-apiserver,Attempt:0,}" May 14 05:11:03.881808 containerd[1598]: time="2025-05-14T05:11:03.881788665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xz28m,Uid:c417c481-47ef-43ad-b365-2e74a4c60b8f,Namespace:kube-system,Attempt:0,}" May 14 05:11:03.890759 containerd[1598]: time="2025-05-14T05:11:03.890713467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79ccf86b7c-hxw7c,Uid:36ac7d47-9f28-487c-9914-f18cfeea4ed5,Namespace:calico-system,Attempt:0,}" May 14 05:11:03.896968 
containerd[1598]: time="2025-05-14T05:11:03.896557392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f69d9dff-p9fm4,Uid:8e850082-69fe-43b6-bdc2-a1e6e4c944d0,Namespace:calico-apiserver,Attempt:0,}" May 14 05:11:03.903051 containerd[1598]: time="2025-05-14T05:11:03.903005155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-845d69865c-92j47,Uid:31e9255f-87a3-4bf3-a184-3d845602f03d,Namespace:calico-apiserver,Attempt:0,}" May 14 05:11:03.937627 containerd[1598]: time="2025-05-14T05:11:03.937565791Z" level=error msg="Failed to destroy network for sandbox \"f33d39bd779ad800e8ade614d65adc2d58bc97ccdcf9f28fd67add6fad43e5f3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 05:11:03.941840 containerd[1598]: time="2025-05-14T05:11:03.941774901Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fmcx7,Uid:1d5291ce-116c-4328-abd5-d097a4637b78,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f33d39bd779ad800e8ade614d65adc2d58bc97ccdcf9f28fd67add6fad43e5f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 05:11:03.942064 kubelet[2831]: E0514 05:11:03.942034 2831 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f33d39bd779ad800e8ade614d65adc2d58bc97ccdcf9f28fd67add6fad43e5f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 05:11:03.942128 kubelet[2831]: E0514 05:11:03.942099 2831 kuberuntime_sandbox.go:72] 
"Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f33d39bd779ad800e8ade614d65adc2d58bc97ccdcf9f28fd67add6fad43e5f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-fmcx7" May 14 05:11:03.942160 kubelet[2831]: E0514 05:11:03.942133 2831 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f33d39bd779ad800e8ade614d65adc2d58bc97ccdcf9f28fd67add6fad43e5f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-fmcx7" May 14 05:11:03.942212 kubelet[2831]: E0514 05:11:03.942178 2831 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-fmcx7_kube-system(1d5291ce-116c-4328-abd5-d097a4637b78)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-fmcx7_kube-system(1d5291ce-116c-4328-abd5-d097a4637b78)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f33d39bd779ad800e8ade614d65adc2d58bc97ccdcf9f28fd67add6fad43e5f3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-fmcx7" podUID="1d5291ce-116c-4328-abd5-d097a4637b78" May 14 05:11:03.992234 containerd[1598]: time="2025-05-14T05:11:03.992192614Z" level=error msg="Failed to destroy network for sandbox \"2ac033d27dfb89e869f70a6c4313ca571a0562052156dcc0c61a19c3bf9f9b4a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" May 14 05:11:03.994543 containerd[1598]: time="2025-05-14T05:11:03.994496658Z" level=error msg="Failed to destroy network for sandbox \"5e57eb2306f1cd4c38c4b49ed0f13536484213aa9e5e037c010987fe87431e01\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 05:11:03.995446 containerd[1598]: time="2025-05-14T05:11:03.995414616Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xz28m,Uid:c417c481-47ef-43ad-b365-2e74a4c60b8f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ac033d27dfb89e869f70a6c4313ca571a0562052156dcc0c61a19c3bf9f9b4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 05:11:03.996001 kubelet[2831]: E0514 05:11:03.995953 2831 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ac033d27dfb89e869f70a6c4313ca571a0562052156dcc0c61a19c3bf9f9b4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 05:11:03.996096 kubelet[2831]: E0514 05:11:03.996017 2831 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ac033d27dfb89e869f70a6c4313ca571a0562052156dcc0c61a19c3bf9f9b4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-xz28m" May 
14 05:11:03.996096 kubelet[2831]: E0514 05:11:03.996045 2831 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ac033d27dfb89e869f70a6c4313ca571a0562052156dcc0c61a19c3bf9f9b4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-xz28m" May 14 05:11:03.996096 kubelet[2831]: E0514 05:11:03.996081 2831 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-xz28m_kube-system(c417c481-47ef-43ad-b365-2e74a4c60b8f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-xz28m_kube-system(c417c481-47ef-43ad-b365-2e74a4c60b8f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2ac033d27dfb89e869f70a6c4313ca571a0562052156dcc0c61a19c3bf9f9b4a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-xz28m" podUID="c417c481-47ef-43ad-b365-2e74a4c60b8f" May 14 05:11:03.996734 containerd[1598]: time="2025-05-14T05:11:03.996694774Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79ccf86b7c-hxw7c,Uid:36ac7d47-9f28-487c-9914-f18cfeea4ed5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e57eb2306f1cd4c38c4b49ed0f13536484213aa9e5e037c010987fe87431e01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 05:11:03.997031 kubelet[2831]: E0514 05:11:03.996986 2831 remote_runtime.go:193] "RunPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e57eb2306f1cd4c38c4b49ed0f13536484213aa9e5e037c010987fe87431e01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 05:11:03.997250 kubelet[2831]: E0514 05:11:03.997118 2831 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e57eb2306f1cd4c38c4b49ed0f13536484213aa9e5e037c010987fe87431e01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79ccf86b7c-hxw7c" May 14 05:11:03.997250 kubelet[2831]: E0514 05:11:03.997143 2831 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e57eb2306f1cd4c38c4b49ed0f13536484213aa9e5e037c010987fe87431e01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79ccf86b7c-hxw7c" May 14 05:11:03.997250 kubelet[2831]: E0514 05:11:03.997202 2831 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-79ccf86b7c-hxw7c_calico-system(36ac7d47-9f28-487c-9914-f18cfeea4ed5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-79ccf86b7c-hxw7c_calico-system(36ac7d47-9f28-487c-9914-f18cfeea4ed5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5e57eb2306f1cd4c38c4b49ed0f13536484213aa9e5e037c010987fe87431e01\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-79ccf86b7c-hxw7c" podUID="36ac7d47-9f28-487c-9914-f18cfeea4ed5" May 14 05:11:03.998698 containerd[1598]: time="2025-05-14T05:11:03.998666084Z" level=error msg="Failed to destroy network for sandbox \"54789c1c2b9507560e0b5c0a0e2b30473d59f6980fe480345c9ac5d8a046a496\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 05:11:04.000187 containerd[1598]: time="2025-05-14T05:11:04.000141079Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-845d69865c-7txs2,Uid:518875d2-b6d3-47f4-be27-8df3a2cdbf54,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"54789c1c2b9507560e0b5c0a0e2b30473d59f6980fe480345c9ac5d8a046a496\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 05:11:04.000360 kubelet[2831]: E0514 05:11:04.000332 2831 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54789c1c2b9507560e0b5c0a0e2b30473d59f6980fe480345c9ac5d8a046a496\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 05:11:04.000435 kubelet[2831]: E0514 05:11:04.000374 2831 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54789c1c2b9507560e0b5c0a0e2b30473d59f6980fe480345c9ac5d8a046a496\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-845d69865c-7txs2" May 14 05:11:04.000435 kubelet[2831]: E0514 05:11:04.000392 2831 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54789c1c2b9507560e0b5c0a0e2b30473d59f6980fe480345c9ac5d8a046a496\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-845d69865c-7txs2" May 14 05:11:04.000504 kubelet[2831]: E0514 05:11:04.000435 2831 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-845d69865c-7txs2_calico-apiserver(518875d2-b6d3-47f4-be27-8df3a2cdbf54)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-845d69865c-7txs2_calico-apiserver(518875d2-b6d3-47f4-be27-8df3a2cdbf54)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"54789c1c2b9507560e0b5c0a0e2b30473d59f6980fe480345c9ac5d8a046a496\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-845d69865c-7txs2" podUID="518875d2-b6d3-47f4-be27-8df3a2cdbf54" May 14 05:11:04.007867 containerd[1598]: time="2025-05-14T05:11:04.007815596Z" level=error msg="Failed to destroy network for sandbox \"43c5c2fde25012cfca2606658aaa48d6369689b4a8bf508f5c5c7a450914a00d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 05:11:04.009278 containerd[1598]: time="2025-05-14T05:11:04.009228594Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-845d69865c-92j47,Uid:31e9255f-87a3-4bf3-a184-3d845602f03d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"43c5c2fde25012cfca2606658aaa48d6369689b4a8bf508f5c5c7a450914a00d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 05:11:04.009475 kubelet[2831]: E0514 05:11:04.009436 2831 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43c5c2fde25012cfca2606658aaa48d6369689b4a8bf508f5c5c7a450914a00d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 05:11:04.009528 kubelet[2831]: E0514 05:11:04.009496 2831 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43c5c2fde25012cfca2606658aaa48d6369689b4a8bf508f5c5c7a450914a00d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-845d69865c-92j47" May 14 05:11:04.009528 kubelet[2831]: E0514 05:11:04.009522 2831 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43c5c2fde25012cfca2606658aaa48d6369689b4a8bf508f5c5c7a450914a00d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-845d69865c-92j47" May 14 05:11:04.009615 kubelet[2831]: E0514 
05:11:04.009571 2831 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-845d69865c-92j47_calico-apiserver(31e9255f-87a3-4bf3-a184-3d845602f03d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-845d69865c-92j47_calico-apiserver(31e9255f-87a3-4bf3-a184-3d845602f03d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"43c5c2fde25012cfca2606658aaa48d6369689b4a8bf508f5c5c7a450914a00d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-845d69865c-92j47" podUID="31e9255f-87a3-4bf3-a184-3d845602f03d" May 14 05:11:04.010000 containerd[1598]: time="2025-05-14T05:11:04.009964579Z" level=error msg="Failed to destroy network for sandbox \"3b069c2a9cedbddd37777db192c11c94042477334a49274d170ea3975f68f839\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 05:11:04.011371 containerd[1598]: time="2025-05-14T05:11:04.011327592Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f69d9dff-p9fm4,Uid:8e850082-69fe-43b6-bdc2-a1e6e4c944d0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b069c2a9cedbddd37777db192c11c94042477334a49274d170ea3975f68f839\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 05:11:04.011547 kubelet[2831]: E0514 05:11:04.011514 2831 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"3b069c2a9cedbddd37777db192c11c94042477334a49274d170ea3975f68f839\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 05:11:04.011597 kubelet[2831]: E0514 05:11:04.011560 2831 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b069c2a9cedbddd37777db192c11c94042477334a49274d170ea3975f68f839\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f69d9dff-p9fm4" May 14 05:11:04.011597 kubelet[2831]: E0514 05:11:04.011583 2831 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b069c2a9cedbddd37777db192c11c94042477334a49274d170ea3975f68f839\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f69d9dff-p9fm4" May 14 05:11:04.011664 kubelet[2831]: E0514 05:11:04.011620 2831 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6f69d9dff-p9fm4_calico-apiserver(8e850082-69fe-43b6-bdc2-a1e6e4c944d0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f69d9dff-p9fm4_calico-apiserver(8e850082-69fe-43b6-bdc2-a1e6e4c944d0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3b069c2a9cedbddd37777db192c11c94042477334a49274d170ea3975f68f839\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-6f69d9dff-p9fm4" podUID="8e850082-69fe-43b6-bdc2-a1e6e4c944d0" May 14 05:11:06.298003 systemd[1]: Started sshd@9-10.0.0.84:22-10.0.0.1:59678.service - OpenSSH per-connection server daemon (10.0.0.1:59678). May 14 05:11:06.364084 sshd[3968]: Accepted publickey for core from 10.0.0.1 port 59678 ssh2: RSA SHA256:9779yoEmBEYtokxLadky4y4rhX8tiwkjz8vWSk0iWXU May 14 05:11:06.366002 sshd-session[3968]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 05:11:06.370311 systemd-logind[1573]: New session 10 of user core. May 14 05:11:06.374827 systemd[1]: Started session-10.scope - Session 10 of User core. May 14 05:11:06.493546 sshd[3970]: Connection closed by 10.0.0.1 port 59678 May 14 05:11:06.493866 sshd-session[3968]: pam_unix(sshd:session): session closed for user core May 14 05:11:06.498618 systemd[1]: sshd@9-10.0.0.84:22-10.0.0.1:59678.service: Deactivated successfully. May 14 05:11:06.500938 systemd[1]: session-10.scope: Deactivated successfully. May 14 05:11:06.502070 systemd-logind[1573]: Session 10 logged out. Waiting for processes to exit. May 14 05:11:06.503391 systemd-logind[1573]: Removed session 10. May 14 05:11:10.809036 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2635846645.mount: Deactivated successfully. May 14 05:11:11.506567 systemd[1]: Started sshd@10-10.0.0.84:22-10.0.0.1:34760.service - OpenSSH per-connection server daemon (10.0.0.1:34760). 
May 14 05:11:11.828067 containerd[1598]: time="2025-05-14T05:11:11.827864658Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:11:11.829393 containerd[1598]: time="2025-05-14T05:11:11.829268275Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" May 14 05:11:11.831270 containerd[1598]: time="2025-05-14T05:11:11.831238358Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:11:11.835930 containerd[1598]: time="2025-05-14T05:11:11.833908665Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:11:11.835930 containerd[1598]: time="2025-05-14T05:11:11.834409767Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 7.961944776s" May 14 05:11:11.835930 containerd[1598]: time="2025-05-14T05:11:11.834430897Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" May 14 05:11:11.847447 containerd[1598]: time="2025-05-14T05:11:11.845965973Z" level=info msg="CreateContainer within sandbox \"9150f6ea102a29eaf8f2d6b6dda0255f93cbaa9525299117d854a6ee4ce3eaec\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 14 05:11:11.864779 sshd[3991]: Accepted publickey for core from 10.0.0.1 port 34760 ssh2: RSA 
SHA256:9779yoEmBEYtokxLadky4y4rhX8tiwkjz8vWSk0iWXU May 14 05:11:11.866303 sshd-session[3991]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 05:11:11.870247 containerd[1598]: time="2025-05-14T05:11:11.870213869Z" level=info msg="Container 038e8e912ce25038576cc0267e20c1af3d3b605f82a4a3debd273f138d7f7e47: CDI devices from CRI Config.CDIDevices: []" May 14 05:11:11.870873 systemd-logind[1573]: New session 11 of user core. May 14 05:11:11.882821 systemd[1]: Started session-11.scope - Session 11 of User core. May 14 05:11:11.888093 containerd[1598]: time="2025-05-14T05:11:11.888056310Z" level=info msg="CreateContainer within sandbox \"9150f6ea102a29eaf8f2d6b6dda0255f93cbaa9525299117d854a6ee4ce3eaec\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"038e8e912ce25038576cc0267e20c1af3d3b605f82a4a3debd273f138d7f7e47\"" May 14 05:11:11.892524 containerd[1598]: time="2025-05-14T05:11:11.891822026Z" level=info msg="StartContainer for \"038e8e912ce25038576cc0267e20c1af3d3b605f82a4a3debd273f138d7f7e47\"" May 14 05:11:11.893455 containerd[1598]: time="2025-05-14T05:11:11.893426360Z" level=info msg="connecting to shim 038e8e912ce25038576cc0267e20c1af3d3b605f82a4a3debd273f138d7f7e47" address="unix:///run/containerd/s/ea78cd50e093efa2314659b57e043fe016a4b84426296e3e7faceb1f481446a5" protocol=ttrpc version=3 May 14 05:11:11.914840 systemd[1]: Started cri-containerd-038e8e912ce25038576cc0267e20c1af3d3b605f82a4a3debd273f138d7f7e47.scope - libcontainer container 038e8e912ce25038576cc0267e20c1af3d3b605f82a4a3debd273f138d7f7e47. 
May 14 05:11:11.967567 containerd[1598]: time="2025-05-14T05:11:11.967515598Z" level=info msg="StartContainer for \"038e8e912ce25038576cc0267e20c1af3d3b605f82a4a3debd273f138d7f7e47\" returns successfully" May 14 05:11:12.014608 sshd[3997]: Connection closed by 10.0.0.1 port 34760 May 14 05:11:12.014979 sshd-session[3991]: pam_unix(sshd:session): session closed for user core May 14 05:11:12.028166 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 14 05:11:12.028648 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. May 14 05:11:12.029669 systemd[1]: sshd@10-10.0.0.84:22-10.0.0.1:34760.service: Deactivated successfully. May 14 05:11:12.032355 systemd[1]: session-11.scope: Deactivated successfully. May 14 05:11:12.033334 systemd-logind[1573]: Session 11 logged out. Waiting for processes to exit. May 14 05:11:12.037974 systemd[1]: Started sshd@11-10.0.0.84:22-10.0.0.1:34764.service - OpenSSH per-connection server daemon (10.0.0.1:34764). May 14 05:11:12.039424 systemd-logind[1573]: Removed session 11. May 14 05:11:12.085264 sshd[4057]: Accepted publickey for core from 10.0.0.1 port 34764 ssh2: RSA SHA256:9779yoEmBEYtokxLadky4y4rhX8tiwkjz8vWSk0iWXU May 14 05:11:12.086875 sshd-session[4057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 05:11:12.091388 systemd-logind[1573]: New session 12 of user core. May 14 05:11:12.094835 systemd[1]: Started session-12.scope - Session 12 of User core. May 14 05:11:12.254848 sshd[4066]: Connection closed by 10.0.0.1 port 34764 May 14 05:11:12.255841 sshd-session[4057]: pam_unix(sshd:session): session closed for user core May 14 05:11:12.265473 systemd[1]: sshd@11-10.0.0.84:22-10.0.0.1:34764.service: Deactivated successfully. May 14 05:11:12.270008 systemd[1]: session-12.scope: Deactivated successfully. May 14 05:11:12.272120 systemd-logind[1573]: Session 12 logged out. Waiting for processes to exit. 
May 14 05:11:12.277798 systemd[1]: Started sshd@12-10.0.0.84:22-10.0.0.1:34766.service - OpenSSH per-connection server daemon (10.0.0.1:34766). May 14 05:11:12.278489 systemd-logind[1573]: Removed session 12. May 14 05:11:12.335443 sshd[4090]: Accepted publickey for core from 10.0.0.1 port 34766 ssh2: RSA SHA256:9779yoEmBEYtokxLadky4y4rhX8tiwkjz8vWSk0iWXU May 14 05:11:12.337746 sshd-session[4090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 05:11:12.343011 systemd-logind[1573]: New session 13 of user core. May 14 05:11:12.356889 systemd[1]: Started session-13.scope - Session 13 of User core. May 14 05:11:12.467627 sshd[4092]: Connection closed by 10.0.0.1 port 34766 May 14 05:11:12.467953 sshd-session[4090]: pam_unix(sshd:session): session closed for user core May 14 05:11:12.472899 systemd[1]: sshd@12-10.0.0.84:22-10.0.0.1:34766.service: Deactivated successfully. May 14 05:11:12.475149 systemd[1]: session-13.scope: Deactivated successfully. May 14 05:11:12.476087 systemd-logind[1573]: Session 13 logged out. Waiting for processes to exit. May 14 05:11:12.477553 systemd-logind[1573]: Removed session 13. 
May 14 05:11:13.668224 kubelet[2831]: I0514 05:11:13.667830 2831 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-jkvhl" podStartSLOduration=3.784116447 podStartE2EDuration="26.667815605s" podCreationTimestamp="2025-05-14 05:10:47 +0000 UTC" firstStartedPulling="2025-05-14 05:10:48.953299524 +0000 UTC m=+23.571221717" lastFinishedPulling="2025-05-14 05:11:11.836998682 +0000 UTC m=+46.454920875" observedRunningTime="2025-05-14 05:11:13.667121671 +0000 UTC m=+48.285043864" watchObservedRunningTime="2025-05-14 05:11:13.667815605 +0000 UTC m=+48.285737798" May 14 05:11:13.761668 containerd[1598]: time="2025-05-14T05:11:13.761616034Z" level=info msg="TaskExit event in podsandbox handler container_id:\"038e8e912ce25038576cc0267e20c1af3d3b605f82a4a3debd273f138d7f7e47\" id:\"ba3618e5225d57b32c82b09d013add59a41871720b96707c2fe467e127d85243\" pid:4213 exit_status:1 exited_at:{seconds:1747199473 nanos:761051042}" May 14 05:11:13.976236 containerd[1598]: time="2025-05-14T05:11:13.976117813Z" level=info msg="TaskExit event in podsandbox handler container_id:\"038e8e912ce25038576cc0267e20c1af3d3b605f82a4a3debd273f138d7f7e47\" id:\"ff1aa7737a24b00d0eefa7f2569de4252a6fae1716744cf351ca370332fbfcfd\" pid:4269 exit_status:1 exited_at:{seconds:1747199473 nanos:975820404}" May 14 05:11:14.006486 systemd-networkd[1497]: vxlan.calico: Link UP May 14 05:11:14.006497 systemd-networkd[1497]: vxlan.calico: Gained carrier May 14 05:11:15.461550 containerd[1598]: time="2025-05-14T05:11:15.461480092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f69d9dff-p9fm4,Uid:8e850082-69fe-43b6-bdc2-a1e6e4c944d0,Namespace:calico-apiserver,Attempt:0,}" May 14 05:11:15.462596 containerd[1598]: time="2025-05-14T05:11:15.462560952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7w57m,Uid:fcc7a085-0eea-465f-a625-a16939de0db1,Namespace:calico-system,Attempt:0,}" May 14 05:11:15.608215 
systemd-networkd[1497]: calid284334982c: Link UP May 14 05:11:15.609423 systemd-networkd[1497]: calid284334982c: Gained carrier May 14 05:11:15.628746 containerd[1598]: 2025-05-14 05:11:15.508 [INFO][4353] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6f69d9dff--p9fm4-eth0 calico-apiserver-6f69d9dff- calico-apiserver 8e850082-69fe-43b6-bdc2-a1e6e4c944d0 766 0 2025-05-14 05:10:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6f69d9dff projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6f69d9dff-p9fm4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid284334982c [] []}} ContainerID="99968e30430c44dd11a2fa0b8ebbf9db8fa09f0de0502dde0bc3f84427b76ee6" Namespace="calico-apiserver" Pod="calico-apiserver-6f69d9dff-p9fm4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f69d9dff--p9fm4-" May 14 05:11:15.628746 containerd[1598]: 2025-05-14 05:11:15.509 [INFO][4353] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="99968e30430c44dd11a2fa0b8ebbf9db8fa09f0de0502dde0bc3f84427b76ee6" Namespace="calico-apiserver" Pod="calico-apiserver-6f69d9dff-p9fm4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f69d9dff--p9fm4-eth0" May 14 05:11:15.628746 containerd[1598]: 2025-05-14 05:11:15.568 [INFO][4381] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="99968e30430c44dd11a2fa0b8ebbf9db8fa09f0de0502dde0bc3f84427b76ee6" HandleID="k8s-pod-network.99968e30430c44dd11a2fa0b8ebbf9db8fa09f0de0502dde0bc3f84427b76ee6" Workload="localhost-k8s-calico--apiserver--6f69d9dff--p9fm4-eth0" May 14 05:11:15.628951 containerd[1598]: 2025-05-14 05:11:15.578 [INFO][4381] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="99968e30430c44dd11a2fa0b8ebbf9db8fa09f0de0502dde0bc3f84427b76ee6" HandleID="k8s-pod-network.99968e30430c44dd11a2fa0b8ebbf9db8fa09f0de0502dde0bc3f84427b76ee6" Workload="localhost-k8s-calico--apiserver--6f69d9dff--p9fm4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000299490), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6f69d9dff-p9fm4", "timestamp":"2025-05-14 05:11:15.568419287 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 05:11:15.628951 containerd[1598]: 2025-05-14 05:11:15.578 [INFO][4381] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 05:11:15.628951 containerd[1598]: 2025-05-14 05:11:15.578 [INFO][4381] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 14 05:11:15.628951 containerd[1598]: 2025-05-14 05:11:15.578 [INFO][4381] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 05:11:15.628951 containerd[1598]: 2025-05-14 05:11:15.579 [INFO][4381] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.99968e30430c44dd11a2fa0b8ebbf9db8fa09f0de0502dde0bc3f84427b76ee6" host="localhost" May 14 05:11:15.628951 containerd[1598]: 2025-05-14 05:11:15.584 [INFO][4381] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 05:11:15.628951 containerd[1598]: 2025-05-14 05:11:15.587 [INFO][4381] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 05:11:15.628951 containerd[1598]: 2025-05-14 05:11:15.588 [INFO][4381] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 05:11:15.628951 containerd[1598]: 2025-05-14 05:11:15.590 [INFO][4381] ipam/ipam.go 232: Affinity is confirmed and block has been loaded 
cidr=192.168.88.128/26 host="localhost" May 14 05:11:15.628951 containerd[1598]: 2025-05-14 05:11:15.590 [INFO][4381] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.99968e30430c44dd11a2fa0b8ebbf9db8fa09f0de0502dde0bc3f84427b76ee6" host="localhost" May 14 05:11:15.629160 containerd[1598]: 2025-05-14 05:11:15.591 [INFO][4381] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.99968e30430c44dd11a2fa0b8ebbf9db8fa09f0de0502dde0bc3f84427b76ee6 May 14 05:11:15.629160 containerd[1598]: 2025-05-14 05:11:15.596 [INFO][4381] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.99968e30430c44dd11a2fa0b8ebbf9db8fa09f0de0502dde0bc3f84427b76ee6" host="localhost" May 14 05:11:15.629160 containerd[1598]: 2025-05-14 05:11:15.599 [INFO][4381] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.99968e30430c44dd11a2fa0b8ebbf9db8fa09f0de0502dde0bc3f84427b76ee6" host="localhost" May 14 05:11:15.629160 containerd[1598]: 2025-05-14 05:11:15.599 [INFO][4381] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.99968e30430c44dd11a2fa0b8ebbf9db8fa09f0de0502dde0bc3f84427b76ee6" host="localhost" May 14 05:11:15.629160 containerd[1598]: 2025-05-14 05:11:15.599 [INFO][4381] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 14 05:11:15.629160 containerd[1598]: 2025-05-14 05:11:15.599 [INFO][4381] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="99968e30430c44dd11a2fa0b8ebbf9db8fa09f0de0502dde0bc3f84427b76ee6" HandleID="k8s-pod-network.99968e30430c44dd11a2fa0b8ebbf9db8fa09f0de0502dde0bc3f84427b76ee6" Workload="localhost-k8s-calico--apiserver--6f69d9dff--p9fm4-eth0" May 14 05:11:15.629281 containerd[1598]: 2025-05-14 05:11:15.603 [INFO][4353] cni-plugin/k8s.go 386: Populated endpoint ContainerID="99968e30430c44dd11a2fa0b8ebbf9db8fa09f0de0502dde0bc3f84427b76ee6" Namespace="calico-apiserver" Pod="calico-apiserver-6f69d9dff-p9fm4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f69d9dff--p9fm4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f69d9dff--p9fm4-eth0", GenerateName:"calico-apiserver-6f69d9dff-", Namespace:"calico-apiserver", SelfLink:"", UID:"8e850082-69fe-43b6-bdc2-a1e6e4c944d0", ResourceVersion:"766", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 5, 10, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f69d9dff", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6f69d9dff-p9fm4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid284334982c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 05:11:15.629333 containerd[1598]: 2025-05-14 05:11:15.603 [INFO][4353] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="99968e30430c44dd11a2fa0b8ebbf9db8fa09f0de0502dde0bc3f84427b76ee6" Namespace="calico-apiserver" Pod="calico-apiserver-6f69d9dff-p9fm4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f69d9dff--p9fm4-eth0" May 14 05:11:15.629333 containerd[1598]: 2025-05-14 05:11:15.603 [INFO][4353] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid284334982c ContainerID="99968e30430c44dd11a2fa0b8ebbf9db8fa09f0de0502dde0bc3f84427b76ee6" Namespace="calico-apiserver" Pod="calico-apiserver-6f69d9dff-p9fm4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f69d9dff--p9fm4-eth0" May 14 05:11:15.629333 containerd[1598]: 2025-05-14 05:11:15.609 [INFO][4353] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="99968e30430c44dd11a2fa0b8ebbf9db8fa09f0de0502dde0bc3f84427b76ee6" Namespace="calico-apiserver" Pod="calico-apiserver-6f69d9dff-p9fm4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f69d9dff--p9fm4-eth0" May 14 05:11:15.629398 containerd[1598]: 2025-05-14 05:11:15.610 [INFO][4353] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="99968e30430c44dd11a2fa0b8ebbf9db8fa09f0de0502dde0bc3f84427b76ee6" Namespace="calico-apiserver" Pod="calico-apiserver-6f69d9dff-p9fm4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f69d9dff--p9fm4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f69d9dff--p9fm4-eth0", GenerateName:"calico-apiserver-6f69d9dff-", Namespace:"calico-apiserver", SelfLink:"", 
UID:"8e850082-69fe-43b6-bdc2-a1e6e4c944d0", ResourceVersion:"766", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 5, 10, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f69d9dff", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"99968e30430c44dd11a2fa0b8ebbf9db8fa09f0de0502dde0bc3f84427b76ee6", Pod:"calico-apiserver-6f69d9dff-p9fm4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid284334982c", MAC:"46:95:bf:94:99:fe", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 05:11:15.629446 containerd[1598]: 2025-05-14 05:11:15.618 [INFO][4353] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="99968e30430c44dd11a2fa0b8ebbf9db8fa09f0de0502dde0bc3f84427b76ee6" Namespace="calico-apiserver" Pod="calico-apiserver-6f69d9dff-p9fm4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f69d9dff--p9fm4-eth0" May 14 05:11:15.635514 systemd-networkd[1497]: cali6a666bd7175: Link UP May 14 05:11:15.636043 systemd-networkd[1497]: cali6a666bd7175: Gained carrier May 14 05:11:15.650202 containerd[1598]: 2025-05-14 05:11:15.508 [INFO][4362] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--7w57m-eth0 csi-node-driver- 
calico-system fcc7a085-0eea-465f-a625-a16939de0db1 594 0 2025-05-14 05:10:47 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-7w57m eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali6a666bd7175 [] []}} ContainerID="c1da211b717ead495e9e4f9b7e68ded36ef25eb3869d7affcacee9cfbe99aa9f" Namespace="calico-system" Pod="csi-node-driver-7w57m" WorkloadEndpoint="localhost-k8s-csi--node--driver--7w57m-" May 14 05:11:15.650202 containerd[1598]: 2025-05-14 05:11:15.509 [INFO][4362] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c1da211b717ead495e9e4f9b7e68ded36ef25eb3869d7affcacee9cfbe99aa9f" Namespace="calico-system" Pod="csi-node-driver-7w57m" WorkloadEndpoint="localhost-k8s-csi--node--driver--7w57m-eth0" May 14 05:11:15.650202 containerd[1598]: 2025-05-14 05:11:15.568 [INFO][4383] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c1da211b717ead495e9e4f9b7e68ded36ef25eb3869d7affcacee9cfbe99aa9f" HandleID="k8s-pod-network.c1da211b717ead495e9e4f9b7e68ded36ef25eb3869d7affcacee9cfbe99aa9f" Workload="localhost-k8s-csi--node--driver--7w57m-eth0" May 14 05:11:15.650409 containerd[1598]: 2025-05-14 05:11:15.578 [INFO][4383] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c1da211b717ead495e9e4f9b7e68ded36ef25eb3869d7affcacee9cfbe99aa9f" HandleID="k8s-pod-network.c1da211b717ead495e9e4f9b7e68ded36ef25eb3869d7affcacee9cfbe99aa9f" Workload="localhost-k8s-csi--node--driver--7w57m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050af0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-7w57m", "timestamp":"2025-05-14 05:11:15.568418736 +0000 
UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 05:11:15.650409 containerd[1598]: 2025-05-14 05:11:15.578 [INFO][4383] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 05:11:15.650409 containerd[1598]: 2025-05-14 05:11:15.599 [INFO][4383] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 14 05:11:15.650409 containerd[1598]: 2025-05-14 05:11:15.600 [INFO][4383] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 05:11:15.650409 containerd[1598]: 2025-05-14 05:11:15.601 [INFO][4383] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c1da211b717ead495e9e4f9b7e68ded36ef25eb3869d7affcacee9cfbe99aa9f" host="localhost" May 14 05:11:15.650409 containerd[1598]: 2025-05-14 05:11:15.605 [INFO][4383] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 05:11:15.650409 containerd[1598]: 2025-05-14 05:11:15.609 [INFO][4383] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 05:11:15.650409 containerd[1598]: 2025-05-14 05:11:15.611 [INFO][4383] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 05:11:15.650409 containerd[1598]: 2025-05-14 05:11:15.613 [INFO][4383] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 05:11:15.650409 containerd[1598]: 2025-05-14 05:11:15.613 [INFO][4383] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c1da211b717ead495e9e4f9b7e68ded36ef25eb3869d7affcacee9cfbe99aa9f" host="localhost" May 14 05:11:15.650631 containerd[1598]: 2025-05-14 05:11:15.614 [INFO][4383] ipam/ipam.go 1685: Creating new handle: 
k8s-pod-network.c1da211b717ead495e9e4f9b7e68ded36ef25eb3869d7affcacee9cfbe99aa9f May 14 05:11:15.650631 containerd[1598]: 2025-05-14 05:11:15.620 [INFO][4383] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c1da211b717ead495e9e4f9b7e68ded36ef25eb3869d7affcacee9cfbe99aa9f" host="localhost" May 14 05:11:15.650631 containerd[1598]: 2025-05-14 05:11:15.627 [INFO][4383] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.c1da211b717ead495e9e4f9b7e68ded36ef25eb3869d7affcacee9cfbe99aa9f" host="localhost" May 14 05:11:15.650631 containerd[1598]: 2025-05-14 05:11:15.627 [INFO][4383] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.c1da211b717ead495e9e4f9b7e68ded36ef25eb3869d7affcacee9cfbe99aa9f" host="localhost" May 14 05:11:15.650631 containerd[1598]: 2025-05-14 05:11:15.627 [INFO][4383] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 14 05:11:15.650631 containerd[1598]: 2025-05-14 05:11:15.627 [INFO][4383] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="c1da211b717ead495e9e4f9b7e68ded36ef25eb3869d7affcacee9cfbe99aa9f" HandleID="k8s-pod-network.c1da211b717ead495e9e4f9b7e68ded36ef25eb3869d7affcacee9cfbe99aa9f" Workload="localhost-k8s-csi--node--driver--7w57m-eth0" May 14 05:11:15.650804 containerd[1598]: 2025-05-14 05:11:15.632 [INFO][4362] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c1da211b717ead495e9e4f9b7e68ded36ef25eb3869d7affcacee9cfbe99aa9f" Namespace="calico-system" Pod="csi-node-driver-7w57m" WorkloadEndpoint="localhost-k8s-csi--node--driver--7w57m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7w57m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fcc7a085-0eea-465f-a625-a16939de0db1", ResourceVersion:"594", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 5, 10, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-7w57m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6a666bd7175", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 05:11:15.650804 containerd[1598]: 2025-05-14 05:11:15.632 [INFO][4362] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="c1da211b717ead495e9e4f9b7e68ded36ef25eb3869d7affcacee9cfbe99aa9f" Namespace="calico-system" Pod="csi-node-driver-7w57m" WorkloadEndpoint="localhost-k8s-csi--node--driver--7w57m-eth0" May 14 05:11:15.650875 containerd[1598]: 2025-05-14 05:11:15.632 [INFO][4362] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6a666bd7175 ContainerID="c1da211b717ead495e9e4f9b7e68ded36ef25eb3869d7affcacee9cfbe99aa9f" Namespace="calico-system" Pod="csi-node-driver-7w57m" WorkloadEndpoint="localhost-k8s-csi--node--driver--7w57m-eth0" May 14 05:11:15.650875 containerd[1598]: 2025-05-14 05:11:15.634 [INFO][4362] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c1da211b717ead495e9e4f9b7e68ded36ef25eb3869d7affcacee9cfbe99aa9f" Namespace="calico-system" Pod="csi-node-driver-7w57m" WorkloadEndpoint="localhost-k8s-csi--node--driver--7w57m-eth0" May 14 05:11:15.650920 containerd[1598]: 2025-05-14 05:11:15.634 [INFO][4362] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c1da211b717ead495e9e4f9b7e68ded36ef25eb3869d7affcacee9cfbe99aa9f" Namespace="calico-system" Pod="csi-node-driver-7w57m" WorkloadEndpoint="localhost-k8s-csi--node--driver--7w57m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7w57m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fcc7a085-0eea-465f-a625-a16939de0db1", ResourceVersion:"594", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 5, 10, 47, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c1da211b717ead495e9e4f9b7e68ded36ef25eb3869d7affcacee9cfbe99aa9f", Pod:"csi-node-driver-7w57m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6a666bd7175", MAC:"02:69:42:a2:63:72", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 05:11:15.650968 containerd[1598]: 2025-05-14 05:11:15.644 [INFO][4362] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c1da211b717ead495e9e4f9b7e68ded36ef25eb3869d7affcacee9cfbe99aa9f" Namespace="calico-system" Pod="csi-node-driver-7w57m" WorkloadEndpoint="localhost-k8s-csi--node--driver--7w57m-eth0" May 14 05:11:15.763622 containerd[1598]: time="2025-05-14T05:11:15.763450473Z" level=info msg="connecting to shim 99968e30430c44dd11a2fa0b8ebbf9db8fa09f0de0502dde0bc3f84427b76ee6" address="unix:///run/containerd/s/8504b9a2b5a7163a28c18911a5fee63c60c9cb4cedb2e041e50fa8629a463baf" namespace=k8s.io protocol=ttrpc version=3 May 14 05:11:15.764135 containerd[1598]: time="2025-05-14T05:11:15.764089823Z" level=info msg="connecting to shim c1da211b717ead495e9e4f9b7e68ded36ef25eb3869d7affcacee9cfbe99aa9f" 
address="unix:///run/containerd/s/4ef32210e607513e747dfdd18e713e90cdf76caa8c33ba1d36b9e2fce2061771" namespace=k8s.io protocol=ttrpc version=3 May 14 05:11:15.793847 systemd[1]: Started cri-containerd-99968e30430c44dd11a2fa0b8ebbf9db8fa09f0de0502dde0bc3f84427b76ee6.scope - libcontainer container 99968e30430c44dd11a2fa0b8ebbf9db8fa09f0de0502dde0bc3f84427b76ee6. May 14 05:11:15.795471 systemd[1]: Started cri-containerd-c1da211b717ead495e9e4f9b7e68ded36ef25eb3869d7affcacee9cfbe99aa9f.scope - libcontainer container c1da211b717ead495e9e4f9b7e68ded36ef25eb3869d7affcacee9cfbe99aa9f. May 14 05:11:15.806753 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 05:11:15.811215 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 05:11:15.825293 containerd[1598]: time="2025-05-14T05:11:15.825261328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7w57m,Uid:fcc7a085-0eea-465f-a625-a16939de0db1,Namespace:calico-system,Attempt:0,} returns sandbox id \"c1da211b717ead495e9e4f9b7e68ded36ef25eb3869d7affcacee9cfbe99aa9f\"" May 14 05:11:15.827742 containerd[1598]: time="2025-05-14T05:11:15.827693526Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 14 05:11:15.838900 containerd[1598]: time="2025-05-14T05:11:15.838865181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f69d9dff-p9fm4,Uid:8e850082-69fe-43b6-bdc2-a1e6e4c944d0,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"99968e30430c44dd11a2fa0b8ebbf9db8fa09f0de0502dde0bc3f84427b76ee6\"" May 14 05:11:15.904868 systemd-networkd[1497]: vxlan.calico: Gained IPv6LL May 14 05:11:16.462102 containerd[1598]: time="2025-05-14T05:11:16.462049712Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-79ccf86b7c-hxw7c,Uid:36ac7d47-9f28-487c-9914-f18cfeea4ed5,Namespace:calico-system,Attempt:0,}" May 14 05:11:16.462505 containerd[1598]: time="2025-05-14T05:11:16.462049802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-845d69865c-92j47,Uid:31e9255f-87a3-4bf3-a184-3d845602f03d,Namespace:calico-apiserver,Attempt:0,}" May 14 05:11:16.576642 systemd-networkd[1497]: cali8fa37478f27: Link UP May 14 05:11:16.577139 systemd-networkd[1497]: cali8fa37478f27: Gained carrier May 14 05:11:16.594683 containerd[1598]: 2025-05-14 05:11:16.501 [INFO][4515] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--79ccf86b7c--hxw7c-eth0 calico-kube-controllers-79ccf86b7c- calico-system 36ac7d47-9f28-487c-9914-f18cfeea4ed5 770 0 2025-05-14 05:10:48 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:79ccf86b7c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-79ccf86b7c-hxw7c eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali8fa37478f27 [] []}} ContainerID="9e905b0d8d794b09e617e4505ba08225cbde16d1dfcd79e489bb402575f2b259" Namespace="calico-system" Pod="calico-kube-controllers-79ccf86b7c-hxw7c" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--79ccf86b7c--hxw7c-" May 14 05:11:16.594683 containerd[1598]: 2025-05-14 05:11:16.501 [INFO][4515] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9e905b0d8d794b09e617e4505ba08225cbde16d1dfcd79e489bb402575f2b259" Namespace="calico-system" Pod="calico-kube-controllers-79ccf86b7c-hxw7c" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--79ccf86b7c--hxw7c-eth0" May 14 05:11:16.594683 
containerd[1598]: 2025-05-14 05:11:16.537 [INFO][4545] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9e905b0d8d794b09e617e4505ba08225cbde16d1dfcd79e489bb402575f2b259" HandleID="k8s-pod-network.9e905b0d8d794b09e617e4505ba08225cbde16d1dfcd79e489bb402575f2b259" Workload="localhost-k8s-calico--kube--controllers--79ccf86b7c--hxw7c-eth0" May 14 05:11:16.595003 containerd[1598]: 2025-05-14 05:11:16.548 [INFO][4545] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9e905b0d8d794b09e617e4505ba08225cbde16d1dfcd79e489bb402575f2b259" HandleID="k8s-pod-network.9e905b0d8d794b09e617e4505ba08225cbde16d1dfcd79e489bb402575f2b259" Workload="localhost-k8s-calico--kube--controllers--79ccf86b7c--hxw7c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000511b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-79ccf86b7c-hxw7c", "timestamp":"2025-05-14 05:11:16.53781682 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 05:11:16.595003 containerd[1598]: 2025-05-14 05:11:16.548 [INFO][4545] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 05:11:16.595003 containerd[1598]: 2025-05-14 05:11:16.548 [INFO][4545] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 05:11:16.595003 containerd[1598]: 2025-05-14 05:11:16.548 [INFO][4545] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 05:11:16.595003 containerd[1598]: 2025-05-14 05:11:16.549 [INFO][4545] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9e905b0d8d794b09e617e4505ba08225cbde16d1dfcd79e489bb402575f2b259" host="localhost" May 14 05:11:16.595003 containerd[1598]: 2025-05-14 05:11:16.552 [INFO][4545] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 05:11:16.595003 containerd[1598]: 2025-05-14 05:11:16.555 [INFO][4545] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 05:11:16.595003 containerd[1598]: 2025-05-14 05:11:16.556 [INFO][4545] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 05:11:16.595003 containerd[1598]: 2025-05-14 05:11:16.558 [INFO][4545] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 05:11:16.595003 containerd[1598]: 2025-05-14 05:11:16.558 [INFO][4545] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9e905b0d8d794b09e617e4505ba08225cbde16d1dfcd79e489bb402575f2b259" host="localhost" May 14 05:11:16.595313 containerd[1598]: 2025-05-14 05:11:16.560 [INFO][4545] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9e905b0d8d794b09e617e4505ba08225cbde16d1dfcd79e489bb402575f2b259 May 14 05:11:16.595313 containerd[1598]: 2025-05-14 05:11:16.563 [INFO][4545] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9e905b0d8d794b09e617e4505ba08225cbde16d1dfcd79e489bb402575f2b259" host="localhost" May 14 05:11:16.595313 containerd[1598]: 2025-05-14 05:11:16.569 [INFO][4545] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.9e905b0d8d794b09e617e4505ba08225cbde16d1dfcd79e489bb402575f2b259" host="localhost" May 14 05:11:16.595313 containerd[1598]: 2025-05-14 05:11:16.570 [INFO][4545] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.9e905b0d8d794b09e617e4505ba08225cbde16d1dfcd79e489bb402575f2b259" host="localhost" May 14 05:11:16.595313 containerd[1598]: 2025-05-14 05:11:16.570 [INFO][4545] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 05:11:16.595313 containerd[1598]: 2025-05-14 05:11:16.570 [INFO][4545] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="9e905b0d8d794b09e617e4505ba08225cbde16d1dfcd79e489bb402575f2b259" HandleID="k8s-pod-network.9e905b0d8d794b09e617e4505ba08225cbde16d1dfcd79e489bb402575f2b259" Workload="localhost-k8s-calico--kube--controllers--79ccf86b7c--hxw7c-eth0" May 14 05:11:16.597258 containerd[1598]: 2025-05-14 05:11:16.573 [INFO][4515] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9e905b0d8d794b09e617e4505ba08225cbde16d1dfcd79e489bb402575f2b259" Namespace="calico-system" Pod="calico-kube-controllers-79ccf86b7c-hxw7c" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--79ccf86b7c--hxw7c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--79ccf86b7c--hxw7c-eth0", GenerateName:"calico-kube-controllers-79ccf86b7c-", Namespace:"calico-system", SelfLink:"", UID:"36ac7d47-9f28-487c-9914-f18cfeea4ed5", ResourceVersion:"770", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 5, 10, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79ccf86b7c", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-79ccf86b7c-hxw7c", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8fa37478f27", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 05:11:16.597320 containerd[1598]: 2025-05-14 05:11:16.574 [INFO][4515] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="9e905b0d8d794b09e617e4505ba08225cbde16d1dfcd79e489bb402575f2b259" Namespace="calico-system" Pod="calico-kube-controllers-79ccf86b7c-hxw7c" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--79ccf86b7c--hxw7c-eth0" May 14 05:11:16.597320 containerd[1598]: 2025-05-14 05:11:16.574 [INFO][4515] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8fa37478f27 ContainerID="9e905b0d8d794b09e617e4505ba08225cbde16d1dfcd79e489bb402575f2b259" Namespace="calico-system" Pod="calico-kube-controllers-79ccf86b7c-hxw7c" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--79ccf86b7c--hxw7c-eth0" May 14 05:11:16.597320 containerd[1598]: 2025-05-14 05:11:16.578 [INFO][4515] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9e905b0d8d794b09e617e4505ba08225cbde16d1dfcd79e489bb402575f2b259" Namespace="calico-system" Pod="calico-kube-controllers-79ccf86b7c-hxw7c" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--79ccf86b7c--hxw7c-eth0" May 14 05:11:16.597387 containerd[1598]: 2025-05-14 05:11:16.578 [INFO][4515] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9e905b0d8d794b09e617e4505ba08225cbde16d1dfcd79e489bb402575f2b259" Namespace="calico-system" Pod="calico-kube-controllers-79ccf86b7c-hxw7c" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--79ccf86b7c--hxw7c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--79ccf86b7c--hxw7c-eth0", GenerateName:"calico-kube-controllers-79ccf86b7c-", Namespace:"calico-system", SelfLink:"", UID:"36ac7d47-9f28-487c-9914-f18cfeea4ed5", ResourceVersion:"770", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 5, 10, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79ccf86b7c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9e905b0d8d794b09e617e4505ba08225cbde16d1dfcd79e489bb402575f2b259", Pod:"calico-kube-controllers-79ccf86b7c-hxw7c", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8fa37478f27", MAC:"52:28:ee:40:2e:c9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 05:11:16.597437 containerd[1598]: 2025-05-14 05:11:16.590 [INFO][4515] cni-plugin/k8s.go 500: Wrote updated 
endpoint to datastore ContainerID="9e905b0d8d794b09e617e4505ba08225cbde16d1dfcd79e489bb402575f2b259" Namespace="calico-system" Pod="calico-kube-controllers-79ccf86b7c-hxw7c" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--79ccf86b7c--hxw7c-eth0" May 14 05:11:16.611533 systemd-networkd[1497]: califbe33fdce9e: Link UP May 14 05:11:16.611794 systemd-networkd[1497]: califbe33fdce9e: Gained carrier May 14 05:11:16.629403 containerd[1598]: time="2025-05-14T05:11:16.629352786Z" level=info msg="connecting to shim 9e905b0d8d794b09e617e4505ba08225cbde16d1dfcd79e489bb402575f2b259" address="unix:///run/containerd/s/ad48fbdca1bf854bb4525cfaedd9b812ae44fb08aab5c94c79854f43b1a1bf11" namespace=k8s.io protocol=ttrpc version=3 May 14 05:11:16.629850 containerd[1598]: 2025-05-14 05:11:16.512 [INFO][4527] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--845d69865c--92j47-eth0 calico-apiserver-845d69865c- calico-apiserver 31e9255f-87a3-4bf3-a184-3d845602f03d 768 0 2025-05-14 05:10:47 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:845d69865c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-845d69865c-92j47 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califbe33fdce9e [] []}} ContainerID="ac1107de38a87cc0791d41c5ba284d1b444a49584139fe12e9bac56b736733d0" Namespace="calico-apiserver" Pod="calico-apiserver-845d69865c-92j47" WorkloadEndpoint="localhost-k8s-calico--apiserver--845d69865c--92j47-" May 14 05:11:16.629850 containerd[1598]: 2025-05-14 05:11:16.512 [INFO][4527] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ac1107de38a87cc0791d41c5ba284d1b444a49584139fe12e9bac56b736733d0" Namespace="calico-apiserver" 
Pod="calico-apiserver-845d69865c-92j47" WorkloadEndpoint="localhost-k8s-calico--apiserver--845d69865c--92j47-eth0" May 14 05:11:16.629850 containerd[1598]: 2025-05-14 05:11:16.542 [INFO][4551] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ac1107de38a87cc0791d41c5ba284d1b444a49584139fe12e9bac56b736733d0" HandleID="k8s-pod-network.ac1107de38a87cc0791d41c5ba284d1b444a49584139fe12e9bac56b736733d0" Workload="localhost-k8s-calico--apiserver--845d69865c--92j47-eth0" May 14 05:11:16.630004 containerd[1598]: 2025-05-14 05:11:16.549 [INFO][4551] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ac1107de38a87cc0791d41c5ba284d1b444a49584139fe12e9bac56b736733d0" HandleID="k8s-pod-network.ac1107de38a87cc0791d41c5ba284d1b444a49584139fe12e9bac56b736733d0" Workload="localhost-k8s-calico--apiserver--845d69865c--92j47-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005257d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-845d69865c-92j47", "timestamp":"2025-05-14 05:11:16.542605714 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 05:11:16.630004 containerd[1598]: 2025-05-14 05:11:16.549 [INFO][4551] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 05:11:16.630004 containerd[1598]: 2025-05-14 05:11:16.570 [INFO][4551] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 05:11:16.630004 containerd[1598]: 2025-05-14 05:11:16.570 [INFO][4551] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 05:11:16.630004 containerd[1598]: 2025-05-14 05:11:16.572 [INFO][4551] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ac1107de38a87cc0791d41c5ba284d1b444a49584139fe12e9bac56b736733d0" host="localhost" May 14 05:11:16.630004 containerd[1598]: 2025-05-14 05:11:16.580 [INFO][4551] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 05:11:16.630004 containerd[1598]: 2025-05-14 05:11:16.584 [INFO][4551] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 05:11:16.630004 containerd[1598]: 2025-05-14 05:11:16.587 [INFO][4551] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 05:11:16.630004 containerd[1598]: 2025-05-14 05:11:16.592 [INFO][4551] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 05:11:16.630004 containerd[1598]: 2025-05-14 05:11:16.592 [INFO][4551] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ac1107de38a87cc0791d41c5ba284d1b444a49584139fe12e9bac56b736733d0" host="localhost" May 14 05:11:16.630251 containerd[1598]: 2025-05-14 05:11:16.594 [INFO][4551] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ac1107de38a87cc0791d41c5ba284d1b444a49584139fe12e9bac56b736733d0 May 14 05:11:16.630251 containerd[1598]: 2025-05-14 05:11:16.598 [INFO][4551] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ac1107de38a87cc0791d41c5ba284d1b444a49584139fe12e9bac56b736733d0" host="localhost" May 14 05:11:16.630251 containerd[1598]: 2025-05-14 05:11:16.606 [INFO][4551] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.ac1107de38a87cc0791d41c5ba284d1b444a49584139fe12e9bac56b736733d0" host="localhost" May 14 05:11:16.630251 containerd[1598]: 2025-05-14 05:11:16.606 [INFO][4551] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.ac1107de38a87cc0791d41c5ba284d1b444a49584139fe12e9bac56b736733d0" host="localhost" May 14 05:11:16.630251 containerd[1598]: 2025-05-14 05:11:16.606 [INFO][4551] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 05:11:16.630251 containerd[1598]: 2025-05-14 05:11:16.606 [INFO][4551] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="ac1107de38a87cc0791d41c5ba284d1b444a49584139fe12e9bac56b736733d0" HandleID="k8s-pod-network.ac1107de38a87cc0791d41c5ba284d1b444a49584139fe12e9bac56b736733d0" Workload="localhost-k8s-calico--apiserver--845d69865c--92j47-eth0" May 14 05:11:16.630376 containerd[1598]: 2025-05-14 05:11:16.609 [INFO][4527] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ac1107de38a87cc0791d41c5ba284d1b444a49584139fe12e9bac56b736733d0" Namespace="calico-apiserver" Pod="calico-apiserver-845d69865c-92j47" WorkloadEndpoint="localhost-k8s-calico--apiserver--845d69865c--92j47-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--845d69865c--92j47-eth0", GenerateName:"calico-apiserver-845d69865c-", Namespace:"calico-apiserver", SelfLink:"", UID:"31e9255f-87a3-4bf3-a184-3d845602f03d", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 5, 10, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"845d69865c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-845d69865c-92j47", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califbe33fdce9e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 05:11:16.630434 containerd[1598]: 2025-05-14 05:11:16.609 [INFO][4527] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="ac1107de38a87cc0791d41c5ba284d1b444a49584139fe12e9bac56b736733d0" Namespace="calico-apiserver" Pod="calico-apiserver-845d69865c-92j47" WorkloadEndpoint="localhost-k8s-calico--apiserver--845d69865c--92j47-eth0" May 14 05:11:16.630434 containerd[1598]: 2025-05-14 05:11:16.609 [INFO][4527] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califbe33fdce9e ContainerID="ac1107de38a87cc0791d41c5ba284d1b444a49584139fe12e9bac56b736733d0" Namespace="calico-apiserver" Pod="calico-apiserver-845d69865c-92j47" WorkloadEndpoint="localhost-k8s-calico--apiserver--845d69865c--92j47-eth0" May 14 05:11:16.630434 containerd[1598]: 2025-05-14 05:11:16.611 [INFO][4527] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ac1107de38a87cc0791d41c5ba284d1b444a49584139fe12e9bac56b736733d0" Namespace="calico-apiserver" Pod="calico-apiserver-845d69865c-92j47" WorkloadEndpoint="localhost-k8s-calico--apiserver--845d69865c--92j47-eth0" May 14 05:11:16.630496 containerd[1598]: 2025-05-14 05:11:16.611 [INFO][4527] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="ac1107de38a87cc0791d41c5ba284d1b444a49584139fe12e9bac56b736733d0" Namespace="calico-apiserver" Pod="calico-apiserver-845d69865c-92j47" WorkloadEndpoint="localhost-k8s-calico--apiserver--845d69865c--92j47-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--845d69865c--92j47-eth0", GenerateName:"calico-apiserver-845d69865c-", Namespace:"calico-apiserver", SelfLink:"", UID:"31e9255f-87a3-4bf3-a184-3d845602f03d", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 5, 10, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"845d69865c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ac1107de38a87cc0791d41c5ba284d1b444a49584139fe12e9bac56b736733d0", Pod:"calico-apiserver-845d69865c-92j47", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califbe33fdce9e", MAC:"ae:b2:70:18:06:41", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 05:11:16.630548 containerd[1598]: 2025-05-14 05:11:16.620 [INFO][4527] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ac1107de38a87cc0791d41c5ba284d1b444a49584139fe12e9bac56b736733d0" 
Namespace="calico-apiserver" Pod="calico-apiserver-845d69865c-92j47" WorkloadEndpoint="localhost-k8s-calico--apiserver--845d69865c--92j47-eth0" May 14 05:11:16.657016 containerd[1598]: time="2025-05-14T05:11:16.656968303Z" level=info msg="connecting to shim ac1107de38a87cc0791d41c5ba284d1b444a49584139fe12e9bac56b736733d0" address="unix:///run/containerd/s/732d1b199a1ab1bc4f9f3ed974e6fefc52cfe94e29fbb30e505b6bff77c2221b" namespace=k8s.io protocol=ttrpc version=3 May 14 05:11:16.657888 systemd[1]: Started cri-containerd-9e905b0d8d794b09e617e4505ba08225cbde16d1dfcd79e489bb402575f2b259.scope - libcontainer container 9e905b0d8d794b09e617e4505ba08225cbde16d1dfcd79e489bb402575f2b259. May 14 05:11:16.672783 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 05:11:16.681845 systemd[1]: Started cri-containerd-ac1107de38a87cc0791d41c5ba284d1b444a49584139fe12e9bac56b736733d0.scope - libcontainer container ac1107de38a87cc0791d41c5ba284d1b444a49584139fe12e9bac56b736733d0. 
May 14 05:11:16.696313 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 05:11:16.704296 containerd[1598]: time="2025-05-14T05:11:16.704242936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79ccf86b7c-hxw7c,Uid:36ac7d47-9f28-487c-9914-f18cfeea4ed5,Namespace:calico-system,Attempt:0,} returns sandbox id \"9e905b0d8d794b09e617e4505ba08225cbde16d1dfcd79e489bb402575f2b259\"" May 14 05:11:16.727367 containerd[1598]: time="2025-05-14T05:11:16.727254457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-845d69865c-92j47,Uid:31e9255f-87a3-4bf3-a184-3d845602f03d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"ac1107de38a87cc0791d41c5ba284d1b444a49584139fe12e9bac56b736733d0\"" May 14 05:11:16.864855 systemd-networkd[1497]: cali6a666bd7175: Gained IPv6LL May 14 05:11:17.312937 systemd-networkd[1497]: calid284334982c: Gained IPv6LL May 14 05:11:17.383653 containerd[1598]: time="2025-05-14T05:11:17.383583814Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:11:17.384364 containerd[1598]: time="2025-05-14T05:11:17.384317582Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898" May 14 05:11:17.396947 containerd[1598]: time="2025-05-14T05:11:17.396906225Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:11:17.399457 containerd[1598]: time="2025-05-14T05:11:17.399410709Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:11:17.400105 containerd[1598]: 
time="2025-05-14T05:11:17.400044478Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 1.571996046s" May 14 05:11:17.400105 containerd[1598]: time="2025-05-14T05:11:17.400097358Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" May 14 05:11:17.400971 containerd[1598]: time="2025-05-14T05:11:17.400948997Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 14 05:11:17.408634 containerd[1598]: time="2025-05-14T05:11:17.408582794Z" level=info msg="CreateContainer within sandbox \"c1da211b717ead495e9e4f9b7e68ded36ef25eb3869d7affcacee9cfbe99aa9f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 14 05:11:17.425675 containerd[1598]: time="2025-05-14T05:11:17.425624841Z" level=info msg="Container c1bf0a80252c0cd4a8beb86c91ff6c7fc438e0ab80d6b38f4f1d981e757394ee: CDI devices from CRI Config.CDIDevices: []" May 14 05:11:17.451017 containerd[1598]: time="2025-05-14T05:11:17.450970491Z" level=info msg="CreateContainer within sandbox \"c1da211b717ead495e9e4f9b7e68ded36ef25eb3869d7affcacee9cfbe99aa9f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"c1bf0a80252c0cd4a8beb86c91ff6c7fc438e0ab80d6b38f4f1d981e757394ee\"" May 14 05:11:17.451527 containerd[1598]: time="2025-05-14T05:11:17.451494075Z" level=info msg="StartContainer for \"c1bf0a80252c0cd4a8beb86c91ff6c7fc438e0ab80d6b38f4f1d981e757394ee\"" May 14 05:11:17.453060 containerd[1598]: time="2025-05-14T05:11:17.453018507Z" level=info msg="connecting to shim c1bf0a80252c0cd4a8beb86c91ff6c7fc438e0ab80d6b38f4f1d981e757394ee" 
address="unix:///run/containerd/s/4ef32210e607513e747dfdd18e713e90cdf76caa8c33ba1d36b9e2fce2061771" protocol=ttrpc version=3 May 14 05:11:17.462179 containerd[1598]: time="2025-05-14T05:11:17.462146771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-845d69865c-7txs2,Uid:518875d2-b6d3-47f4-be27-8df3a2cdbf54,Namespace:calico-apiserver,Attempt:0,}" May 14 05:11:17.499270 systemd[1]: Started cri-containerd-c1bf0a80252c0cd4a8beb86c91ff6c7fc438e0ab80d6b38f4f1d981e757394ee.scope - libcontainer container c1bf0a80252c0cd4a8beb86c91ff6c7fc438e0ab80d6b38f4f1d981e757394ee. May 14 05:11:17.501019 systemd[1]: Started sshd@13-10.0.0.84:22-10.0.0.1:34778.service - OpenSSH per-connection server daemon (10.0.0.1:34778). May 14 05:11:17.593730 sshd[4708]: Accepted publickey for core from 10.0.0.1 port 34778 ssh2: RSA SHA256:9779yoEmBEYtokxLadky4y4rhX8tiwkjz8vWSk0iWXU May 14 05:11:17.595337 sshd-session[4708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 05:11:17.599811 systemd-logind[1573]: New session 14 of user core. May 14 05:11:17.600463 containerd[1598]: time="2025-05-14T05:11:17.600404630Z" level=info msg="StartContainer for \"c1bf0a80252c0cd4a8beb86c91ff6c7fc438e0ab80d6b38f4f1d981e757394ee\" returns successfully" May 14 05:11:17.608841 systemd[1]: Started session-14.scope - Session 14 of User core. 
May 14 05:11:17.675334 systemd-networkd[1497]: calic65ff570fa4: Link UP May 14 05:11:17.677550 systemd-networkd[1497]: calic65ff570fa4: Gained carrier May 14 05:11:17.741839 containerd[1598]: 2025-05-14 05:11:17.508 [INFO][4694] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--845d69865c--7txs2-eth0 calico-apiserver-845d69865c- calico-apiserver 518875d2-b6d3-47f4-be27-8df3a2cdbf54 769 0 2025-05-14 05:10:47 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:845d69865c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-845d69865c-7txs2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic65ff570fa4 [] []}} ContainerID="41b1807c7bba99ab83eb1b4d6d4427b7402268310c390113bfb2bfbb78cb04bf" Namespace="calico-apiserver" Pod="calico-apiserver-845d69865c-7txs2" WorkloadEndpoint="localhost-k8s-calico--apiserver--845d69865c--7txs2-" May 14 05:11:17.741839 containerd[1598]: 2025-05-14 05:11:17.509 [INFO][4694] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="41b1807c7bba99ab83eb1b4d6d4427b7402268310c390113bfb2bfbb78cb04bf" Namespace="calico-apiserver" Pod="calico-apiserver-845d69865c-7txs2" WorkloadEndpoint="localhost-k8s-calico--apiserver--845d69865c--7txs2-eth0" May 14 05:11:17.741839 containerd[1598]: 2025-05-14 05:11:17.541 [INFO][4718] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="41b1807c7bba99ab83eb1b4d6d4427b7402268310c390113bfb2bfbb78cb04bf" HandleID="k8s-pod-network.41b1807c7bba99ab83eb1b4d6d4427b7402268310c390113bfb2bfbb78cb04bf" Workload="localhost-k8s-calico--apiserver--845d69865c--7txs2-eth0" May 14 05:11:17.742092 containerd[1598]: 2025-05-14 05:11:17.550 [INFO][4718] ipam/ipam_plugin.go 265: 
Auto assigning IP ContainerID="41b1807c7bba99ab83eb1b4d6d4427b7402268310c390113bfb2bfbb78cb04bf" HandleID="k8s-pod-network.41b1807c7bba99ab83eb1b4d6d4427b7402268310c390113bfb2bfbb78cb04bf" Workload="localhost-k8s-calico--apiserver--845d69865c--7txs2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011f6a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-845d69865c-7txs2", "timestamp":"2025-05-14 05:11:17.541032534 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 05:11:17.742092 containerd[1598]: 2025-05-14 05:11:17.551 [INFO][4718] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 05:11:17.742092 containerd[1598]: 2025-05-14 05:11:17.551 [INFO][4718] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 05:11:17.742092 containerd[1598]: 2025-05-14 05:11:17.551 [INFO][4718] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 05:11:17.742092 containerd[1598]: 2025-05-14 05:11:17.553 [INFO][4718] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.41b1807c7bba99ab83eb1b4d6d4427b7402268310c390113bfb2bfbb78cb04bf" host="localhost" May 14 05:11:17.742092 containerd[1598]: 2025-05-14 05:11:17.575 [INFO][4718] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 05:11:17.742092 containerd[1598]: 2025-05-14 05:11:17.580 [INFO][4718] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 05:11:17.742092 containerd[1598]: 2025-05-14 05:11:17.581 [INFO][4718] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 05:11:17.742092 containerd[1598]: 2025-05-14 05:11:17.583 [INFO][4718] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 05:11:17.742092 containerd[1598]: 2025-05-14 05:11:17.583 [INFO][4718] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.41b1807c7bba99ab83eb1b4d6d4427b7402268310c390113bfb2bfbb78cb04bf" host="localhost" May 14 05:11:17.742366 containerd[1598]: 2025-05-14 05:11:17.584 [INFO][4718] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.41b1807c7bba99ab83eb1b4d6d4427b7402268310c390113bfb2bfbb78cb04bf May 14 05:11:17.742366 containerd[1598]: 2025-05-14 05:11:17.609 [INFO][4718] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.41b1807c7bba99ab83eb1b4d6d4427b7402268310c390113bfb2bfbb78cb04bf" host="localhost" May 14 05:11:17.742366 containerd[1598]: 2025-05-14 05:11:17.664 [INFO][4718] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.41b1807c7bba99ab83eb1b4d6d4427b7402268310c390113bfb2bfbb78cb04bf" host="localhost" May 14 05:11:17.742366 containerd[1598]: 2025-05-14 05:11:17.664 [INFO][4718] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.41b1807c7bba99ab83eb1b4d6d4427b7402268310c390113bfb2bfbb78cb04bf" host="localhost" May 14 05:11:17.742366 containerd[1598]: 2025-05-14 05:11:17.665 [INFO][4718] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 05:11:17.742366 containerd[1598]: 2025-05-14 05:11:17.665 [INFO][4718] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="41b1807c7bba99ab83eb1b4d6d4427b7402268310c390113bfb2bfbb78cb04bf" HandleID="k8s-pod-network.41b1807c7bba99ab83eb1b4d6d4427b7402268310c390113bfb2bfbb78cb04bf" Workload="localhost-k8s-calico--apiserver--845d69865c--7txs2-eth0" May 14 05:11:17.742541 containerd[1598]: 2025-05-14 05:11:17.668 [INFO][4694] cni-plugin/k8s.go 386: Populated endpoint ContainerID="41b1807c7bba99ab83eb1b4d6d4427b7402268310c390113bfb2bfbb78cb04bf" Namespace="calico-apiserver" Pod="calico-apiserver-845d69865c-7txs2" WorkloadEndpoint="localhost-k8s-calico--apiserver--845d69865c--7txs2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--845d69865c--7txs2-eth0", GenerateName:"calico-apiserver-845d69865c-", Namespace:"calico-apiserver", SelfLink:"", UID:"518875d2-b6d3-47f4-be27-8df3a2cdbf54", ResourceVersion:"769", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 5, 10, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"845d69865c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-845d69865c-7txs2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic65ff570fa4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 05:11:17.742612 containerd[1598]: 2025-05-14 05:11:17.668 [INFO][4694] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="41b1807c7bba99ab83eb1b4d6d4427b7402268310c390113bfb2bfbb78cb04bf" Namespace="calico-apiserver" Pod="calico-apiserver-845d69865c-7txs2" WorkloadEndpoint="localhost-k8s-calico--apiserver--845d69865c--7txs2-eth0" May 14 05:11:17.742612 containerd[1598]: 2025-05-14 05:11:17.668 [INFO][4694] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic65ff570fa4 ContainerID="41b1807c7bba99ab83eb1b4d6d4427b7402268310c390113bfb2bfbb78cb04bf" Namespace="calico-apiserver" Pod="calico-apiserver-845d69865c-7txs2" WorkloadEndpoint="localhost-k8s-calico--apiserver--845d69865c--7txs2-eth0" May 14 05:11:17.742612 containerd[1598]: 2025-05-14 05:11:17.674 [INFO][4694] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="41b1807c7bba99ab83eb1b4d6d4427b7402268310c390113bfb2bfbb78cb04bf" Namespace="calico-apiserver" Pod="calico-apiserver-845d69865c-7txs2" WorkloadEndpoint="localhost-k8s-calico--apiserver--845d69865c--7txs2-eth0" May 14 05:11:17.742737 containerd[1598]: 2025-05-14 05:11:17.675 [INFO][4694] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="41b1807c7bba99ab83eb1b4d6d4427b7402268310c390113bfb2bfbb78cb04bf" Namespace="calico-apiserver" Pod="calico-apiserver-845d69865c-7txs2" WorkloadEndpoint="localhost-k8s-calico--apiserver--845d69865c--7txs2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--845d69865c--7txs2-eth0", GenerateName:"calico-apiserver-845d69865c-", Namespace:"calico-apiserver", SelfLink:"", UID:"518875d2-b6d3-47f4-be27-8df3a2cdbf54", ResourceVersion:"769", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 5, 10, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"845d69865c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"41b1807c7bba99ab83eb1b4d6d4427b7402268310c390113bfb2bfbb78cb04bf", Pod:"calico-apiserver-845d69865c-7txs2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic65ff570fa4", MAC:"02:c4:c5:02:b9:23", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 05:11:17.742804 containerd[1598]: 2025-05-14 05:11:17.732 [INFO][4694] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="41b1807c7bba99ab83eb1b4d6d4427b7402268310c390113bfb2bfbb78cb04bf" 
Namespace="calico-apiserver" Pod="calico-apiserver-845d69865c-7txs2" WorkloadEndpoint="localhost-k8s-calico--apiserver--845d69865c--7txs2-eth0" May 14 05:11:17.744678 sshd[4738]: Connection closed by 10.0.0.1 port 34778 May 14 05:11:17.745060 sshd-session[4708]: pam_unix(sshd:session): session closed for user core May 14 05:11:17.751238 systemd[1]: sshd@13-10.0.0.84:22-10.0.0.1:34778.service: Deactivated successfully. May 14 05:11:17.753717 systemd[1]: session-14.scope: Deactivated successfully. May 14 05:11:17.754679 systemd-logind[1573]: Session 14 logged out. Waiting for processes to exit. May 14 05:11:17.756990 systemd-logind[1573]: Removed session 14. May 14 05:11:17.786582 containerd[1598]: time="2025-05-14T05:11:17.786537230Z" level=info msg="connecting to shim 41b1807c7bba99ab83eb1b4d6d4427b7402268310c390113bfb2bfbb78cb04bf" address="unix:///run/containerd/s/926017d9dab2e561e55f037e09e1be78bf8eae72d6126e0e8e315ab9d6a453f9" namespace=k8s.io protocol=ttrpc version=3 May 14 05:11:17.814983 systemd[1]: Started cri-containerd-41b1807c7bba99ab83eb1b4d6d4427b7402268310c390113bfb2bfbb78cb04bf.scope - libcontainer container 41b1807c7bba99ab83eb1b4d6d4427b7402268310c390113bfb2bfbb78cb04bf. 
May 14 05:11:17.831762 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 05:11:17.864228 containerd[1598]: time="2025-05-14T05:11:17.864197852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-845d69865c-7txs2,Uid:518875d2-b6d3-47f4-be27-8df3a2cdbf54,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"41b1807c7bba99ab83eb1b4d6d4427b7402268310c390113bfb2bfbb78cb04bf\"" May 14 05:11:18.208972 systemd-networkd[1497]: califbe33fdce9e: Gained IPv6LL May 14 05:11:18.336896 systemd-networkd[1497]: cali8fa37478f27: Gained IPv6LL May 14 05:11:18.461555 containerd[1598]: time="2025-05-14T05:11:18.461430390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xz28m,Uid:c417c481-47ef-43ad-b365-2e74a4c60b8f,Namespace:kube-system,Attempt:0,}" May 14 05:11:18.566284 systemd-networkd[1497]: cali8b7201b450d: Link UP May 14 05:11:18.567177 systemd-networkd[1497]: cali8b7201b450d: Gained carrier May 14 05:11:18.579587 containerd[1598]: 2025-05-14 05:11:18.495 [INFO][4811] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--xz28m-eth0 coredns-7db6d8ff4d- kube-system c417c481-47ef-43ad-b365-2e74a4c60b8f 767 0 2025-05-14 05:10:41 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-xz28m eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8b7201b450d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="4e0a4faac74fbbb76d5121a5063d37c33e31e4a9c97fd4bc34fe40819dae5124" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xz28m" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--xz28m-" May 14 05:11:18.579587 containerd[1598]: 2025-05-14 05:11:18.495 
[INFO][4811] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4e0a4faac74fbbb76d5121a5063d37c33e31e4a9c97fd4bc34fe40819dae5124" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xz28m" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--xz28m-eth0" May 14 05:11:18.579587 containerd[1598]: 2025-05-14 05:11:18.528 [INFO][4825] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4e0a4faac74fbbb76d5121a5063d37c33e31e4a9c97fd4bc34fe40819dae5124" HandleID="k8s-pod-network.4e0a4faac74fbbb76d5121a5063d37c33e31e4a9c97fd4bc34fe40819dae5124" Workload="localhost-k8s-coredns--7db6d8ff4d--xz28m-eth0" May 14 05:11:18.579985 containerd[1598]: 2025-05-14 05:11:18.536 [INFO][4825] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4e0a4faac74fbbb76d5121a5063d37c33e31e4a9c97fd4bc34fe40819dae5124" HandleID="k8s-pod-network.4e0a4faac74fbbb76d5121a5063d37c33e31e4a9c97fd4bc34fe40819dae5124" Workload="localhost-k8s-coredns--7db6d8ff4d--xz28m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002dcaf0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-xz28m", "timestamp":"2025-05-14 05:11:18.528981697 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 05:11:18.579985 containerd[1598]: 2025-05-14 05:11:18.536 [INFO][4825] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 05:11:18.579985 containerd[1598]: 2025-05-14 05:11:18.536 [INFO][4825] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 05:11:18.579985 containerd[1598]: 2025-05-14 05:11:18.536 [INFO][4825] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 05:11:18.579985 containerd[1598]: 2025-05-14 05:11:18.537 [INFO][4825] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4e0a4faac74fbbb76d5121a5063d37c33e31e4a9c97fd4bc34fe40819dae5124" host="localhost" May 14 05:11:18.579985 containerd[1598]: 2025-05-14 05:11:18.541 [INFO][4825] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 05:11:18.579985 containerd[1598]: 2025-05-14 05:11:18.545 [INFO][4825] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 05:11:18.579985 containerd[1598]: 2025-05-14 05:11:18.547 [INFO][4825] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 05:11:18.579985 containerd[1598]: 2025-05-14 05:11:18.550 [INFO][4825] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 05:11:18.579985 containerd[1598]: 2025-05-14 05:11:18.550 [INFO][4825] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4e0a4faac74fbbb76d5121a5063d37c33e31e4a9c97fd4bc34fe40819dae5124" host="localhost" May 14 05:11:18.580212 containerd[1598]: 2025-05-14 05:11:18.552 [INFO][4825] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4e0a4faac74fbbb76d5121a5063d37c33e31e4a9c97fd4bc34fe40819dae5124 May 14 05:11:18.580212 containerd[1598]: 2025-05-14 05:11:18.556 [INFO][4825] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4e0a4faac74fbbb76d5121a5063d37c33e31e4a9c97fd4bc34fe40819dae5124" host="localhost" May 14 05:11:18.580212 containerd[1598]: 2025-05-14 05:11:18.561 [INFO][4825] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.4e0a4faac74fbbb76d5121a5063d37c33e31e4a9c97fd4bc34fe40819dae5124" host="localhost" May 14 05:11:18.580212 containerd[1598]: 2025-05-14 05:11:18.561 [INFO][4825] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.4e0a4faac74fbbb76d5121a5063d37c33e31e4a9c97fd4bc34fe40819dae5124" host="localhost" May 14 05:11:18.580212 containerd[1598]: 2025-05-14 05:11:18.561 [INFO][4825] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 05:11:18.580212 containerd[1598]: 2025-05-14 05:11:18.561 [INFO][4825] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="4e0a4faac74fbbb76d5121a5063d37c33e31e4a9c97fd4bc34fe40819dae5124" HandleID="k8s-pod-network.4e0a4faac74fbbb76d5121a5063d37c33e31e4a9c97fd4bc34fe40819dae5124" Workload="localhost-k8s-coredns--7db6d8ff4d--xz28m-eth0" May 14 05:11:18.580323 containerd[1598]: 2025-05-14 05:11:18.564 [INFO][4811] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4e0a4faac74fbbb76d5121a5063d37c33e31e4a9c97fd4bc34fe40819dae5124" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xz28m" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--xz28m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--xz28m-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"c417c481-47ef-43ad-b365-2e74a4c60b8f", ResourceVersion:"767", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 5, 10, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-xz28m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8b7201b450d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 05:11:18.580382 containerd[1598]: 2025-05-14 05:11:18.564 [INFO][4811] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="4e0a4faac74fbbb76d5121a5063d37c33e31e4a9c97fd4bc34fe40819dae5124" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xz28m" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--xz28m-eth0" May 14 05:11:18.580382 containerd[1598]: 2025-05-14 05:11:18.564 [INFO][4811] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8b7201b450d ContainerID="4e0a4faac74fbbb76d5121a5063d37c33e31e4a9c97fd4bc34fe40819dae5124" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xz28m" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--xz28m-eth0" May 14 05:11:18.580382 containerd[1598]: 2025-05-14 05:11:18.567 [INFO][4811] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4e0a4faac74fbbb76d5121a5063d37c33e31e4a9c97fd4bc34fe40819dae5124" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xz28m" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--xz28m-eth0" May 14 
05:11:18.580450 containerd[1598]: 2025-05-14 05:11:18.567 [INFO][4811] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4e0a4faac74fbbb76d5121a5063d37c33e31e4a9c97fd4bc34fe40819dae5124" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xz28m" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--xz28m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--xz28m-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"c417c481-47ef-43ad-b365-2e74a4c60b8f", ResourceVersion:"767", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 5, 10, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4e0a4faac74fbbb76d5121a5063d37c33e31e4a9c97fd4bc34fe40819dae5124", Pod:"coredns-7db6d8ff4d-xz28m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8b7201b450d", MAC:"e6:c5:f1:c6:5d:9f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 05:11:18.580450 containerd[1598]: 2025-05-14 05:11:18.575 [INFO][4811] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4e0a4faac74fbbb76d5121a5063d37c33e31e4a9c97fd4bc34fe40819dae5124" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xz28m" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--xz28m-eth0" May 14 05:11:18.603140 containerd[1598]: time="2025-05-14T05:11:18.603095657Z" level=info msg="connecting to shim 4e0a4faac74fbbb76d5121a5063d37c33e31e4a9c97fd4bc34fe40819dae5124" address="unix:///run/containerd/s/720892d5e415455f3f9eb30a0d8ef973634820da28050d57af8d4e7419c119aa" namespace=k8s.io protocol=ttrpc version=3 May 14 05:11:18.632912 systemd[1]: Started cri-containerd-4e0a4faac74fbbb76d5121a5063d37c33e31e4a9c97fd4bc34fe40819dae5124.scope - libcontainer container 4e0a4faac74fbbb76d5121a5063d37c33e31e4a9c97fd4bc34fe40819dae5124. 
May 14 05:11:18.647590 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 05:11:18.679371 containerd[1598]: time="2025-05-14T05:11:18.679291498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xz28m,Uid:c417c481-47ef-43ad-b365-2e74a4c60b8f,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e0a4faac74fbbb76d5121a5063d37c33e31e4a9c97fd4bc34fe40819dae5124\"" May 14 05:11:18.682514 containerd[1598]: time="2025-05-14T05:11:18.682478483Z" level=info msg="CreateContainer within sandbox \"4e0a4faac74fbbb76d5121a5063d37c33e31e4a9c97fd4bc34fe40819dae5124\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 05:11:18.696729 containerd[1598]: time="2025-05-14T05:11:18.694578195Z" level=info msg="Container 2ac2834305c5f6dd2fa3c1123c87e78eb9f84483b9c37eef696b835e503283d5: CDI devices from CRI Config.CDIDevices: []" May 14 05:11:18.700149 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2082577959.mount: Deactivated successfully. 
May 14 05:11:18.701747 containerd[1598]: time="2025-05-14T05:11:18.701679901Z" level=info msg="CreateContainer within sandbox \"4e0a4faac74fbbb76d5121a5063d37c33e31e4a9c97fd4bc34fe40819dae5124\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2ac2834305c5f6dd2fa3c1123c87e78eb9f84483b9c37eef696b835e503283d5\"" May 14 05:11:18.702328 containerd[1598]: time="2025-05-14T05:11:18.702292843Z" level=info msg="StartContainer for \"2ac2834305c5f6dd2fa3c1123c87e78eb9f84483b9c37eef696b835e503283d5\"" May 14 05:11:18.703255 containerd[1598]: time="2025-05-14T05:11:18.703219873Z" level=info msg="connecting to shim 2ac2834305c5f6dd2fa3c1123c87e78eb9f84483b9c37eef696b835e503283d5" address="unix:///run/containerd/s/720892d5e415455f3f9eb30a0d8ef973634820da28050d57af8d4e7419c119aa" protocol=ttrpc version=3 May 14 05:11:18.724842 systemd[1]: Started cri-containerd-2ac2834305c5f6dd2fa3c1123c87e78eb9f84483b9c37eef696b835e503283d5.scope - libcontainer container 2ac2834305c5f6dd2fa3c1123c87e78eb9f84483b9c37eef696b835e503283d5. 
May 14 05:11:18.761053 containerd[1598]: time="2025-05-14T05:11:18.761003567Z" level=info msg="StartContainer for \"2ac2834305c5f6dd2fa3c1123c87e78eb9f84483b9c37eef696b835e503283d5\" returns successfully" May 14 05:11:18.939597 kubelet[2831]: I0514 05:11:18.939527 2831 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-xz28m" podStartSLOduration=37.939509297 podStartE2EDuration="37.939509297s" podCreationTimestamp="2025-05-14 05:10:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 05:11:18.939411783 +0000 UTC m=+53.557333976" watchObservedRunningTime="2025-05-14 05:11:18.939509297 +0000 UTC m=+53.557431480" May 14 05:11:19.394730 containerd[1598]: time="2025-05-14T05:11:19.394619194Z" level=info msg="TaskExit event in podsandbox handler container_id:\"038e8e912ce25038576cc0267e20c1af3d3b605f82a4a3debd273f138d7f7e47\" id:\"aa75868f3ef499a42b0459fe877891d5725eac1318d65525741a73f395e1eb81\" pid:4949 exited_at:{seconds:1747199479 nanos:394066026}" May 14 05:11:19.461919 containerd[1598]: time="2025-05-14T05:11:19.461854117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fmcx7,Uid:1d5291ce-116c-4328-abd5-d097a4637b78,Namespace:kube-system,Attempt:0,}" May 14 05:11:19.552938 systemd-networkd[1497]: calic65ff570fa4: Gained IPv6LL May 14 05:11:19.809857 systemd-networkd[1497]: caliace73c5bf2c: Link UP May 14 05:11:19.810504 systemd-networkd[1497]: caliace73c5bf2c: Gained carrier May 14 05:11:19.825003 containerd[1598]: 2025-05-14 05:11:19.730 [INFO][4962] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--fmcx7-eth0 coredns-7db6d8ff4d- kube-system 1d5291ce-116c-4328-abd5-d097a4637b78 763 0 2025-05-14 05:10:41 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d 
projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-fmcx7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliace73c5bf2c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="e037ea2e16e544bcf91030dd942bd52c26ebec58f113f9dc237ef83eed6a5c59" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fmcx7" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--fmcx7-" May 14 05:11:19.825003 containerd[1598]: 2025-05-14 05:11:19.732 [INFO][4962] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e037ea2e16e544bcf91030dd942bd52c26ebec58f113f9dc237ef83eed6a5c59" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fmcx7" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--fmcx7-eth0" May 14 05:11:19.825003 containerd[1598]: 2025-05-14 05:11:19.763 [INFO][4980] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e037ea2e16e544bcf91030dd942bd52c26ebec58f113f9dc237ef83eed6a5c59" HandleID="k8s-pod-network.e037ea2e16e544bcf91030dd942bd52c26ebec58f113f9dc237ef83eed6a5c59" Workload="localhost-k8s-coredns--7db6d8ff4d--fmcx7-eth0" May 14 05:11:19.825003 containerd[1598]: 2025-05-14 05:11:19.771 [INFO][4980] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e037ea2e16e544bcf91030dd942bd52c26ebec58f113f9dc237ef83eed6a5c59" HandleID="k8s-pod-network.e037ea2e16e544bcf91030dd942bd52c26ebec58f113f9dc237ef83eed6a5c59" Workload="localhost-k8s-coredns--7db6d8ff4d--fmcx7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00053b1b0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-fmcx7", "timestamp":"2025-05-14 05:11:19.76385483 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 05:11:19.825003 containerd[1598]: 2025-05-14 05:11:19.771 [INFO][4980] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 05:11:19.825003 containerd[1598]: 2025-05-14 05:11:19.771 [INFO][4980] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 14 05:11:19.825003 containerd[1598]: 2025-05-14 05:11:19.771 [INFO][4980] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 05:11:19.825003 containerd[1598]: 2025-05-14 05:11:19.773 [INFO][4980] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e037ea2e16e544bcf91030dd942bd52c26ebec58f113f9dc237ef83eed6a5c59" host="localhost" May 14 05:11:19.825003 containerd[1598]: 2025-05-14 05:11:19.781 [INFO][4980] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 05:11:19.825003 containerd[1598]: 2025-05-14 05:11:19.787 [INFO][4980] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 05:11:19.825003 containerd[1598]: 2025-05-14 05:11:19.789 [INFO][4980] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 05:11:19.825003 containerd[1598]: 2025-05-14 05:11:19.791 [INFO][4980] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 05:11:19.825003 containerd[1598]: 2025-05-14 05:11:19.791 [INFO][4980] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e037ea2e16e544bcf91030dd942bd52c26ebec58f113f9dc237ef83eed6a5c59" host="localhost" May 14 05:11:19.825003 containerd[1598]: 2025-05-14 05:11:19.792 [INFO][4980] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e037ea2e16e544bcf91030dd942bd52c26ebec58f113f9dc237ef83eed6a5c59 May 14 05:11:19.825003 containerd[1598]: 2025-05-14 05:11:19.796 [INFO][4980] ipam/ipam.go 1203: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.e037ea2e16e544bcf91030dd942bd52c26ebec58f113f9dc237ef83eed6a5c59" host="localhost" May 14 05:11:19.825003 containerd[1598]: 2025-05-14 05:11:19.802 [INFO][4980] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.e037ea2e16e544bcf91030dd942bd52c26ebec58f113f9dc237ef83eed6a5c59" host="localhost" May 14 05:11:19.825003 containerd[1598]: 2025-05-14 05:11:19.802 [INFO][4980] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.e037ea2e16e544bcf91030dd942bd52c26ebec58f113f9dc237ef83eed6a5c59" host="localhost" May 14 05:11:19.825003 containerd[1598]: 2025-05-14 05:11:19.802 [INFO][4980] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 05:11:19.825003 containerd[1598]: 2025-05-14 05:11:19.802 [INFO][4980] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="e037ea2e16e544bcf91030dd942bd52c26ebec58f113f9dc237ef83eed6a5c59" HandleID="k8s-pod-network.e037ea2e16e544bcf91030dd942bd52c26ebec58f113f9dc237ef83eed6a5c59" Workload="localhost-k8s-coredns--7db6d8ff4d--fmcx7-eth0" May 14 05:11:19.826004 containerd[1598]: 2025-05-14 05:11:19.806 [INFO][4962] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e037ea2e16e544bcf91030dd942bd52c26ebec58f113f9dc237ef83eed6a5c59" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fmcx7" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--fmcx7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--fmcx7-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"1d5291ce-116c-4328-abd5-d097a4637b78", ResourceVersion:"763", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 5, 10, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-fmcx7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliace73c5bf2c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 05:11:19.826004 containerd[1598]: 2025-05-14 05:11:19.806 [INFO][4962] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.135/32] ContainerID="e037ea2e16e544bcf91030dd942bd52c26ebec58f113f9dc237ef83eed6a5c59" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fmcx7" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--fmcx7-eth0" May 14 05:11:19.826004 containerd[1598]: 2025-05-14 05:11:19.806 [INFO][4962] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliace73c5bf2c ContainerID="e037ea2e16e544bcf91030dd942bd52c26ebec58f113f9dc237ef83eed6a5c59" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fmcx7" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--fmcx7-eth0" May 14 05:11:19.826004 containerd[1598]: 2025-05-14 
05:11:19.810 [INFO][4962] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e037ea2e16e544bcf91030dd942bd52c26ebec58f113f9dc237ef83eed6a5c59" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fmcx7" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--fmcx7-eth0" May 14 05:11:19.826004 containerd[1598]: 2025-05-14 05:11:19.810 [INFO][4962] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e037ea2e16e544bcf91030dd942bd52c26ebec58f113f9dc237ef83eed6a5c59" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fmcx7" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--fmcx7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--fmcx7-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"1d5291ce-116c-4328-abd5-d097a4637b78", ResourceVersion:"763", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 5, 10, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e037ea2e16e544bcf91030dd942bd52c26ebec58f113f9dc237ef83eed6a5c59", Pod:"coredns-7db6d8ff4d-fmcx7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliace73c5bf2c", MAC:"ca:66:45:27:69:a9", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 05:11:19.826004 containerd[1598]: 2025-05-14 05:11:19.818 [INFO][4962] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e037ea2e16e544bcf91030dd942bd52c26ebec58f113f9dc237ef83eed6a5c59" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fmcx7" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--fmcx7-eth0" May 14 05:11:19.880728 containerd[1598]: time="2025-05-14T05:11:19.880161259Z" level=info msg="connecting to shim e037ea2e16e544bcf91030dd942bd52c26ebec58f113f9dc237ef83eed6a5c59" address="unix:///run/containerd/s/a414b0f9b9c6dbb770e986df11c3cc702f1006dc885c0b785b6664ca40c63dac" namespace=k8s.io protocol=ttrpc version=3 May 14 05:11:19.915583 systemd[1]: Started cri-containerd-e037ea2e16e544bcf91030dd942bd52c26ebec58f113f9dc237ef83eed6a5c59.scope - libcontainer container e037ea2e16e544bcf91030dd942bd52c26ebec58f113f9dc237ef83eed6a5c59. 
May 14 05:11:19.937746 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 05:11:20.094088 containerd[1598]: time="2025-05-14T05:11:20.093951109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fmcx7,Uid:1d5291ce-116c-4328-abd5-d097a4637b78,Namespace:kube-system,Attempt:0,} returns sandbox id \"e037ea2e16e544bcf91030dd942bd52c26ebec58f113f9dc237ef83eed6a5c59\"" May 14 05:11:20.097604 containerd[1598]: time="2025-05-14T05:11:20.097569333Z" level=info msg="CreateContainer within sandbox \"e037ea2e16e544bcf91030dd942bd52c26ebec58f113f9dc237ef83eed6a5c59\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 05:11:20.118057 containerd[1598]: time="2025-05-14T05:11:20.113198298Z" level=info msg="Container 044474e232f16c9958343e1a9d66c968c65041e2bf3cd339b2d571297feacfda: CDI devices from CRI Config.CDIDevices: []" May 14 05:11:20.126699 containerd[1598]: time="2025-05-14T05:11:20.126666076Z" level=info msg="CreateContainer within sandbox \"e037ea2e16e544bcf91030dd942bd52c26ebec58f113f9dc237ef83eed6a5c59\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"044474e232f16c9958343e1a9d66c968c65041e2bf3cd339b2d571297feacfda\"" May 14 05:11:20.127404 containerd[1598]: time="2025-05-14T05:11:20.127378905Z" level=info msg="StartContainer for \"044474e232f16c9958343e1a9d66c968c65041e2bf3cd339b2d571297feacfda\"" May 14 05:11:20.131231 containerd[1598]: time="2025-05-14T05:11:20.131180673Z" level=info msg="connecting to shim 044474e232f16c9958343e1a9d66c968c65041e2bf3cd339b2d571297feacfda" address="unix:///run/containerd/s/a414b0f9b9c6dbb770e986df11c3cc702f1006dc885c0b785b6664ca40c63dac" protocol=ttrpc version=3 May 14 05:11:20.160248 systemd[1]: Started cri-containerd-044474e232f16c9958343e1a9d66c968c65041e2bf3cd339b2d571297feacfda.scope - libcontainer container 044474e232f16c9958343e1a9d66c968c65041e2bf3cd339b2d571297feacfda. 
May 14 05:11:20.199165 containerd[1598]: time="2025-05-14T05:11:20.199127966Z" level=info msg="StartContainer for \"044474e232f16c9958343e1a9d66c968c65041e2bf3cd339b2d571297feacfda\" returns successfully" May 14 05:11:20.257342 systemd-networkd[1497]: cali8b7201b450d: Gained IPv6LL May 14 05:11:20.331366 containerd[1598]: time="2025-05-14T05:11:20.331311380Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:11:20.332245 containerd[1598]: time="2025-05-14T05:11:20.332200891Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=43021437" May 14 05:11:20.333452 containerd[1598]: time="2025-05-14T05:11:20.333400082Z" level=info msg="ImageCreate event name:\"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:11:20.335469 containerd[1598]: time="2025-05-14T05:11:20.335435724Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:11:20.335969 containerd[1598]: time="2025-05-14T05:11:20.335943017Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 2.934893099s" May 14 05:11:20.336039 containerd[1598]: time="2025-05-14T05:11:20.335971130Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 14 05:11:20.341926 containerd[1598]: 
time="2025-05-14T05:11:20.341897547Z" level=info msg="CreateContainer within sandbox \"99968e30430c44dd11a2fa0b8ebbf9db8fa09f0de0502dde0bc3f84427b76ee6\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 14 05:11:20.344858 containerd[1598]: time="2025-05-14T05:11:20.344768067Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 14 05:11:20.350471 containerd[1598]: time="2025-05-14T05:11:20.350438344Z" level=info msg="Container ed21e072bdbe87ca77e9b926005f087380caadb9ceec9c9c1ef30abdb4fc4d81: CDI devices from CRI Config.CDIDevices: []" May 14 05:11:20.358063 containerd[1598]: time="2025-05-14T05:11:20.358021543Z" level=info msg="CreateContainer within sandbox \"99968e30430c44dd11a2fa0b8ebbf9db8fa09f0de0502dde0bc3f84427b76ee6\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ed21e072bdbe87ca77e9b926005f087380caadb9ceec9c9c1ef30abdb4fc4d81\"" May 14 05:11:20.358507 containerd[1598]: time="2025-05-14T05:11:20.358469143Z" level=info msg="StartContainer for \"ed21e072bdbe87ca77e9b926005f087380caadb9ceec9c9c1ef30abdb4fc4d81\"" May 14 05:11:20.359450 containerd[1598]: time="2025-05-14T05:11:20.359420490Z" level=info msg="connecting to shim ed21e072bdbe87ca77e9b926005f087380caadb9ceec9c9c1ef30abdb4fc4d81" address="unix:///run/containerd/s/8504b9a2b5a7163a28c18911a5fee63c60c9cb4cedb2e041e50fa8629a463baf" protocol=ttrpc version=3 May 14 05:11:20.383852 systemd[1]: Started cri-containerd-ed21e072bdbe87ca77e9b926005f087380caadb9ceec9c9c1ef30abdb4fc4d81.scope - libcontainer container ed21e072bdbe87ca77e9b926005f087380caadb9ceec9c9c1ef30abdb4fc4d81. 
May 14 05:11:20.432200 containerd[1598]: time="2025-05-14T05:11:20.432155340Z" level=info msg="StartContainer for \"ed21e072bdbe87ca77e9b926005f087380caadb9ceec9c9c1ef30abdb4fc4d81\" returns successfully" May 14 05:11:21.058907 kubelet[2831]: I0514 05:11:21.058798 2831 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-fmcx7" podStartSLOduration=40.058781463 podStartE2EDuration="40.058781463s" podCreationTimestamp="2025-05-14 05:10:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 05:11:21.058402912 +0000 UTC m=+55.676325095" watchObservedRunningTime="2025-05-14 05:11:21.058781463 +0000 UTC m=+55.676703656" May 14 05:11:21.280605 kubelet[2831]: I0514 05:11:21.280536 2831 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6f69d9dff-p9fm4" podStartSLOduration=27.783763033 podStartE2EDuration="32.28051733s" podCreationTimestamp="2025-05-14 05:10:49 +0000 UTC" firstStartedPulling="2025-05-14 05:11:15.839969826 +0000 UTC m=+50.457892019" lastFinishedPulling="2025-05-14 05:11:20.336724123 +0000 UTC m=+54.954646316" observedRunningTime="2025-05-14 05:11:21.240488144 +0000 UTC m=+55.858410417" watchObservedRunningTime="2025-05-14 05:11:21.28051733 +0000 UTC m=+55.898439523" May 14 05:11:21.600916 systemd-networkd[1497]: caliace73c5bf2c: Gained IPv6LL May 14 05:11:21.936327 kubelet[2831]: I0514 05:11:21.936286 2831 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 05:11:22.760647 systemd[1]: Started sshd@14-10.0.0.84:22-10.0.0.1:45776.service - OpenSSH per-connection server daemon (10.0.0.1:45776). 
May 14 05:11:22.827227 sshd[5138]: Accepted publickey for core from 10.0.0.1 port 45776 ssh2: RSA SHA256:9779yoEmBEYtokxLadky4y4rhX8tiwkjz8vWSk0iWXU
May 14 05:11:22.829021 sshd-session[5138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 05:11:22.836175 systemd-logind[1573]: New session 15 of user core.
May 14 05:11:22.843842 systemd[1]: Started session-15.scope - Session 15 of User core.
May 14 05:11:22.978144 sshd[5140]: Connection closed by 10.0.0.1 port 45776
May 14 05:11:22.978479 sshd-session[5138]: pam_unix(sshd:session): session closed for user core
May 14 05:11:22.983643 systemd[1]: sshd@14-10.0.0.84:22-10.0.0.1:45776.service: Deactivated successfully.
May 14 05:11:22.985640 systemd[1]: session-15.scope: Deactivated successfully.
May 14 05:11:22.986515 systemd-logind[1573]: Session 15 logged out. Waiting for processes to exit.
May 14 05:11:22.987828 systemd-logind[1573]: Removed session 15.
May 14 05:11:23.744528 containerd[1598]: time="2025-05-14T05:11:23.744439694Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 05:11:23.745684 containerd[1598]: time="2025-05-14T05:11:23.745647351Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=34789138"
May 14 05:11:23.747613 containerd[1598]: time="2025-05-14T05:11:23.747552999Z" level=info msg="ImageCreate event name:\"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 05:11:23.749971 containerd[1598]: time="2025-05-14T05:11:23.749909563Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 05:11:23.750497 containerd[1598]: time="2025-05-14T05:11:23.750445489Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"36281728\" in 3.405627588s"
May 14 05:11:23.750560 containerd[1598]: time="2025-05-14T05:11:23.750502837Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\""
May 14 05:11:23.751931 containerd[1598]: time="2025-05-14T05:11:23.751817174Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\""
May 14 05:11:23.760940 containerd[1598]: time="2025-05-14T05:11:23.760891370Z" level=info msg="CreateContainer within sandbox \"9e905b0d8d794b09e617e4505ba08225cbde16d1dfcd79e489bb402575f2b259\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
May 14 05:11:23.773052 containerd[1598]: time="2025-05-14T05:11:23.772995122Z" level=info msg="Container 515eb9e699dd9da566803b8c0b66f30e0afe4928e97d07838c29fabe69d3ddea: CDI devices from CRI Config.CDIDevices: []"
May 14 05:11:23.785988 containerd[1598]: time="2025-05-14T05:11:23.785934263Z" level=info msg="CreateContainer within sandbox \"9e905b0d8d794b09e617e4505ba08225cbde16d1dfcd79e489bb402575f2b259\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"515eb9e699dd9da566803b8c0b66f30e0afe4928e97d07838c29fabe69d3ddea\""
May 14 05:11:23.786759 containerd[1598]: time="2025-05-14T05:11:23.786734715Z" level=info msg="StartContainer for \"515eb9e699dd9da566803b8c0b66f30e0afe4928e97d07838c29fabe69d3ddea\""
May 14 05:11:23.787881 containerd[1598]: time="2025-05-14T05:11:23.787849088Z" level=info msg="connecting to shim 515eb9e699dd9da566803b8c0b66f30e0afe4928e97d07838c29fabe69d3ddea" address="unix:///run/containerd/s/ad48fbdca1bf854bb4525cfaedd9b812ae44fb08aab5c94c79854f43b1a1bf11" protocol=ttrpc version=3
May 14 05:11:23.841878 systemd[1]: Started cri-containerd-515eb9e699dd9da566803b8c0b66f30e0afe4928e97d07838c29fabe69d3ddea.scope - libcontainer container 515eb9e699dd9da566803b8c0b66f30e0afe4928e97d07838c29fabe69d3ddea.
May 14 05:11:23.891477 containerd[1598]: time="2025-05-14T05:11:23.891437245Z" level=info msg="StartContainer for \"515eb9e699dd9da566803b8c0b66f30e0afe4928e97d07838c29fabe69d3ddea\" returns successfully"
May 14 05:11:23.968311 kubelet[2831]: I0514 05:11:23.968223 2831 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-79ccf86b7c-hxw7c" podStartSLOduration=28.922582129 podStartE2EDuration="35.96820399s" podCreationTimestamp="2025-05-14 05:10:48 +0000 UTC" firstStartedPulling="2025-05-14 05:11:16.705961053 +0000 UTC m=+51.323883257" lastFinishedPulling="2025-05-14 05:11:23.751582925 +0000 UTC m=+58.369505118" observedRunningTime="2025-05-14 05:11:23.964617818 +0000 UTC m=+58.582540011" watchObservedRunningTime="2025-05-14 05:11:23.96820399 +0000 UTC m=+58.586126183"
May 14 05:11:24.004940 containerd[1598]: time="2025-05-14T05:11:24.004818628Z" level=info msg="TaskExit event in podsandbox handler container_id:\"515eb9e699dd9da566803b8c0b66f30e0afe4928e97d07838c29fabe69d3ddea\" id:\"d350ad3169823f7b1475a4ea2c9e4e60f1c28d517a7376d8f37a54ec0a261d00\" pid:5204 exited_at:{seconds:1747199484 nanos:4421031}"
May 14 05:11:24.118024 containerd[1598]: time="2025-05-14T05:11:24.117957544Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 05:11:24.118763 containerd[1598]: time="2025-05-14T05:11:24.118678186Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77"
May 14 05:11:24.120770 containerd[1598]: time="2025-05-14T05:11:24.120695303Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 368.843995ms"
May 14 05:11:24.120832 containerd[1598]: time="2025-05-14T05:11:24.120770053Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\""
May 14 05:11:24.121800 containerd[1598]: time="2025-05-14T05:11:24.121756756Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\""
May 14 05:11:24.123362 containerd[1598]: time="2025-05-14T05:11:24.123324408Z" level=info msg="CreateContainer within sandbox \"ac1107de38a87cc0791d41c5ba284d1b444a49584139fe12e9bac56b736733d0\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
May 14 05:11:24.132042 containerd[1598]: time="2025-05-14T05:11:24.131997349Z" level=info msg="Container 45a3620f13bcc33160b77e93cf7e9c176ed9ea3a48b37c71b778368eea25475e: CDI devices from CRI Config.CDIDevices: []"
May 14 05:11:24.140209 containerd[1598]: time="2025-05-14T05:11:24.140158399Z" level=info msg="CreateContainer within sandbox \"ac1107de38a87cc0791d41c5ba284d1b444a49584139fe12e9bac56b736733d0\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"45a3620f13bcc33160b77e93cf7e9c176ed9ea3a48b37c71b778368eea25475e\""
May 14 05:11:24.140776 containerd[1598]: time="2025-05-14T05:11:24.140747375Z" level=info msg="StartContainer for \"45a3620f13bcc33160b77e93cf7e9c176ed9ea3a48b37c71b778368eea25475e\""
May 14 05:11:24.142073 containerd[1598]: time="2025-05-14T05:11:24.142032968Z" level=info msg="connecting to shim 45a3620f13bcc33160b77e93cf7e9c176ed9ea3a48b37c71b778368eea25475e" address="unix:///run/containerd/s/732d1b199a1ab1bc4f9f3ed974e6fefc52cfe94e29fbb30e505b6bff77c2221b" protocol=ttrpc version=3
May 14 05:11:24.170995 systemd[1]: Started cri-containerd-45a3620f13bcc33160b77e93cf7e9c176ed9ea3a48b37c71b778368eea25475e.scope - libcontainer container 45a3620f13bcc33160b77e93cf7e9c176ed9ea3a48b37c71b778368eea25475e.
May 14 05:11:24.318652 containerd[1598]: time="2025-05-14T05:11:24.318140831Z" level=info msg="StartContainer for \"45a3620f13bcc33160b77e93cf7e9c176ed9ea3a48b37c71b778368eea25475e\" returns successfully"
May 14 05:11:25.126011 kubelet[2831]: I0514 05:11:25.125868 2831 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-845d69865c-92j47" podStartSLOduration=30.732534031 podStartE2EDuration="38.125562773s" podCreationTimestamp="2025-05-14 05:10:47 +0000 UTC" firstStartedPulling="2025-05-14 05:11:16.728499064 +0000 UTC m=+51.346421257" lastFinishedPulling="2025-05-14 05:11:24.121527765 +0000 UTC m=+58.739449999" observedRunningTime="2025-05-14 05:11:25.124667543 +0000 UTC m=+59.742589736" watchObservedRunningTime="2025-05-14 05:11:25.125562773 +0000 UTC m=+59.743484966"
May 14 05:11:25.962348 kubelet[2831]: I0514 05:11:25.962303 2831 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 14 05:11:26.777540 containerd[1598]: time="2025-05-14T05:11:26.777486055Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 05:11:26.778280 containerd[1598]: time="2025-05-14T05:11:26.778251452Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773"
May 14 05:11:26.779365 containerd[1598]: time="2025-05-14T05:11:26.779332702Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 05:11:26.781221 containerd[1598]: time="2025-05-14T05:11:26.781184497Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 05:11:26.781786 containerd[1598]: time="2025-05-14T05:11:26.781729981Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 2.659926818s"
May 14 05:11:26.781840 containerd[1598]: time="2025-05-14T05:11:26.781784944Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\""
May 14 05:11:26.782597 containerd[1598]: time="2025-05-14T05:11:26.782574927Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\""
May 14 05:11:26.785213 containerd[1598]: time="2025-05-14T05:11:26.785192311Z" level=info msg="CreateContainer within sandbox \"c1da211b717ead495e9e4f9b7e68ded36ef25eb3869d7affcacee9cfbe99aa9f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
May 14 05:11:26.803867 containerd[1598]: time="2025-05-14T05:11:26.803820917Z" level=info msg="Container d1b6f9cc664cbc96689cc39a49af970e2c0fba95f0fe7fd8b3c1ce5479e0dc20: CDI devices from CRI Config.CDIDevices: []"
May 14 05:11:26.814989 containerd[1598]: time="2025-05-14T05:11:26.814958292Z" level=info msg="CreateContainer within sandbox \"c1da211b717ead495e9e4f9b7e68ded36ef25eb3869d7affcacee9cfbe99aa9f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"d1b6f9cc664cbc96689cc39a49af970e2c0fba95f0fe7fd8b3c1ce5479e0dc20\""
May 14 05:11:26.815674 containerd[1598]: time="2025-05-14T05:11:26.815651634Z" level=info msg="StartContainer for \"d1b6f9cc664cbc96689cc39a49af970e2c0fba95f0fe7fd8b3c1ce5479e0dc20\""
May 14 05:11:26.817146 containerd[1598]: time="2025-05-14T05:11:26.817123326Z" level=info msg="connecting to shim d1b6f9cc664cbc96689cc39a49af970e2c0fba95f0fe7fd8b3c1ce5479e0dc20" address="unix:///run/containerd/s/4ef32210e607513e747dfdd18e713e90cdf76caa8c33ba1d36b9e2fce2061771" protocol=ttrpc version=3
May 14 05:11:26.857823 systemd[1]: Started cri-containerd-d1b6f9cc664cbc96689cc39a49af970e2c0fba95f0fe7fd8b3c1ce5479e0dc20.scope - libcontainer container d1b6f9cc664cbc96689cc39a49af970e2c0fba95f0fe7fd8b3c1ce5479e0dc20.
May 14 05:11:26.903370 containerd[1598]: time="2025-05-14T05:11:26.903321563Z" level=info msg="StartContainer for \"d1b6f9cc664cbc96689cc39a49af970e2c0fba95f0fe7fd8b3c1ce5479e0dc20\" returns successfully"
May 14 05:11:27.369575 containerd[1598]: time="2025-05-14T05:11:27.369517388Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 05:11:27.370280 containerd[1598]: time="2025-05-14T05:11:27.370237590Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77"
May 14 05:11:27.372082 containerd[1598]: time="2025-05-14T05:11:27.372038049Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 589.438776ms"
May 14 05:11:27.372082 containerd[1598]: time="2025-05-14T05:11:27.372070981Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\""
May 14 05:11:27.374331 containerd[1598]: time="2025-05-14T05:11:27.373983231Z" level=info msg="CreateContainer within sandbox \"41b1807c7bba99ab83eb1b4d6d4427b7402268310c390113bfb2bfbb78cb04bf\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
May 14 05:11:27.381328 containerd[1598]: time="2025-05-14T05:11:27.381273815Z" level=info msg="Container 9f314810d2e280184120cf0f78dc58a8b5cebf6c580e3af44ad634c75b75ef40: CDI devices from CRI Config.CDIDevices: []"
May 14 05:11:27.390869 containerd[1598]: time="2025-05-14T05:11:27.390822578Z" level=info msg="CreateContainer within sandbox \"41b1807c7bba99ab83eb1b4d6d4427b7402268310c390113bfb2bfbb78cb04bf\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9f314810d2e280184120cf0f78dc58a8b5cebf6c580e3af44ad634c75b75ef40\""
May 14 05:11:27.392575 containerd[1598]: time="2025-05-14T05:11:27.391357501Z" level=info msg="StartContainer for \"9f314810d2e280184120cf0f78dc58a8b5cebf6c580e3af44ad634c75b75ef40\""
May 14 05:11:27.392841 containerd[1598]: time="2025-05-14T05:11:27.392788367Z" level=info msg="connecting to shim 9f314810d2e280184120cf0f78dc58a8b5cebf6c580e3af44ad634c75b75ef40" address="unix:///run/containerd/s/926017d9dab2e561e55f037e09e1be78bf8eae72d6126e0e8e315ab9d6a453f9" protocol=ttrpc version=3
May 14 05:11:27.416983 systemd[1]: Started cri-containerd-9f314810d2e280184120cf0f78dc58a8b5cebf6c580e3af44ad634c75b75ef40.scope - libcontainer container 9f314810d2e280184120cf0f78dc58a8b5cebf6c580e3af44ad634c75b75ef40.
May 14 05:11:27.472622 containerd[1598]: time="2025-05-14T05:11:27.472556518Z" level=info msg="StartContainer for \"9f314810d2e280184120cf0f78dc58a8b5cebf6c580e3af44ad634c75b75ef40\" returns successfully"
May 14 05:11:27.532179 kubelet[2831]: I0514 05:11:27.532134 2831 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
May 14 05:11:27.532179 kubelet[2831]: I0514 05:11:27.532183 2831 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
May 14 05:11:27.995123 systemd[1]: Started sshd@15-10.0.0.84:22-10.0.0.1:42704.service - OpenSSH per-connection server daemon (10.0.0.1:42704).
May 14 05:11:28.013729 kubelet[2831]: I0514 05:11:28.013630 2831 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-7w57m" podStartSLOduration=30.058228146 podStartE2EDuration="41.013608484s" podCreationTimestamp="2025-05-14 05:10:47 +0000 UTC" firstStartedPulling="2025-05-14 05:11:15.827081417 +0000 UTC m=+50.445003610" lastFinishedPulling="2025-05-14 05:11:26.782461735 +0000 UTC m=+61.400383948" observedRunningTime="2025-05-14 05:11:26.97942952 +0000 UTC m=+61.597351713" watchObservedRunningTime="2025-05-14 05:11:28.013608484 +0000 UTC m=+62.631530678"
May 14 05:11:28.058114 sshd[5335]: Accepted publickey for core from 10.0.0.1 port 42704 ssh2: RSA SHA256:9779yoEmBEYtokxLadky4y4rhX8tiwkjz8vWSk0iWXU
May 14 05:11:28.060220 sshd-session[5335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 05:11:28.065207 systemd-logind[1573]: New session 16 of user core.
May 14 05:11:28.070858 systemd[1]: Started session-16.scope - Session 16 of User core.
May 14 05:11:28.254248 sshd[5339]: Connection closed by 10.0.0.1 port 42704
May 14 05:11:28.254474 sshd-session[5335]: pam_unix(sshd:session): session closed for user core
May 14 05:11:28.257267 systemd[1]: sshd@15-10.0.0.84:22-10.0.0.1:42704.service: Deactivated successfully.
May 14 05:11:28.259213 systemd[1]: session-16.scope: Deactivated successfully.
May 14 05:11:28.260536 systemd-logind[1573]: Session 16 logged out. Waiting for processes to exit.
May 14 05:11:28.261641 systemd-logind[1573]: Removed session 16.
May 14 05:11:28.973122 kubelet[2831]: I0514 05:11:28.973089 2831 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 14 05:11:33.270922 systemd[1]: Started sshd@16-10.0.0.84:22-10.0.0.1:42720.service - OpenSSH per-connection server daemon (10.0.0.1:42720).
May 14 05:11:33.331753 sshd[5354]: Accepted publickey for core from 10.0.0.1 port 42720 ssh2: RSA SHA256:9779yoEmBEYtokxLadky4y4rhX8tiwkjz8vWSk0iWXU
May 14 05:11:33.333438 sshd-session[5354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 05:11:33.338119 systemd-logind[1573]: New session 17 of user core.
May 14 05:11:33.347874 systemd[1]: Started session-17.scope - Session 17 of User core.
May 14 05:11:33.482951 sshd[5356]: Connection closed by 10.0.0.1 port 42720
May 14 05:11:33.483330 sshd-session[5354]: pam_unix(sshd:session): session closed for user core
May 14 05:11:33.487675 systemd[1]: sshd@16-10.0.0.84:22-10.0.0.1:42720.service: Deactivated successfully.
May 14 05:11:33.490238 systemd[1]: session-17.scope: Deactivated successfully.
May 14 05:11:33.491106 systemd-logind[1573]: Session 17 logged out. Waiting for processes to exit.
May 14 05:11:33.492748 systemd-logind[1573]: Removed session 17.
May 14 05:11:33.929603 containerd[1598]: time="2025-05-14T05:11:33.929507344Z" level=info msg="TaskExit event in podsandbox handler container_id:\"515eb9e699dd9da566803b8c0b66f30e0afe4928e97d07838c29fabe69d3ddea\" id:\"52778606d6cfb1afa1a92974c18a9364da76525ab45e191f5888a7b45d3d1aea\" pid:5380 exited_at:{seconds:1747199493 nanos:929201508}"
May 14 05:11:36.038054 containerd[1598]: time="2025-05-14T05:11:36.037991552Z" level=info msg="TaskExit event in podsandbox handler container_id:\"515eb9e699dd9da566803b8c0b66f30e0afe4928e97d07838c29fabe69d3ddea\" id:\"c7b9424def51fab80464c121aade6e3fe24729d74cdf4ea36a55577b0215e019\" pid:5411 exited_at:{seconds:1747199496 nanos:37816934}"
May 14 05:11:38.495585 systemd[1]: Started sshd@17-10.0.0.84:22-10.0.0.1:48308.service - OpenSSH per-connection server daemon (10.0.0.1:48308).
May 14 05:11:38.550175 sshd[5424]: Accepted publickey for core from 10.0.0.1 port 48308 ssh2: RSA SHA256:9779yoEmBEYtokxLadky4y4rhX8tiwkjz8vWSk0iWXU
May 14 05:11:38.551832 sshd-session[5424]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 05:11:38.557618 systemd-logind[1573]: New session 18 of user core.
May 14 05:11:38.560844 systemd[1]: Started session-18.scope - Session 18 of User core.
May 14 05:11:38.676538 sshd[5426]: Connection closed by 10.0.0.1 port 48308
May 14 05:11:38.676976 sshd-session[5424]: pam_unix(sshd:session): session closed for user core
May 14 05:11:38.687491 systemd[1]: sshd@17-10.0.0.84:22-10.0.0.1:48308.service: Deactivated successfully.
May 14 05:11:38.689520 systemd[1]: session-18.scope: Deactivated successfully.
May 14 05:11:38.690752 systemd-logind[1573]: Session 18 logged out. Waiting for processes to exit.
May 14 05:11:38.693830 systemd[1]: Started sshd@18-10.0.0.84:22-10.0.0.1:48312.service - OpenSSH per-connection server daemon (10.0.0.1:48312).
May 14 05:11:38.694767 systemd-logind[1573]: Removed session 18.
May 14 05:11:38.740728 sshd[5439]: Accepted publickey for core from 10.0.0.1 port 48312 ssh2: RSA SHA256:9779yoEmBEYtokxLadky4y4rhX8tiwkjz8vWSk0iWXU
May 14 05:11:38.748067 sshd-session[5439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 05:11:38.766783 systemd-logind[1573]: New session 19 of user core.
May 14 05:11:38.774960 systemd[1]: Started session-19.scope - Session 19 of User core.
May 14 05:11:38.860839 kubelet[2831]: I0514 05:11:38.860765 2831 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-845d69865c-7txs2" podStartSLOduration=42.353562584 podStartE2EDuration="51.860746518s" podCreationTimestamp="2025-05-14 05:10:47 +0000 UTC" firstStartedPulling="2025-05-14 05:11:17.86546401 +0000 UTC m=+52.483386203" lastFinishedPulling="2025-05-14 05:11:27.372647944 +0000 UTC m=+61.990570137" observedRunningTime="2025-05-14 05:11:28.014017533 +0000 UTC m=+62.631939726" watchObservedRunningTime="2025-05-14 05:11:38.860746518 +0000 UTC m=+73.478668721"
May 14 05:11:38.900240 containerd[1598]: time="2025-05-14T05:11:38.900108396Z" level=info msg="StopContainer for \"6377a252762c862cd2a9f7469b3c55b75d88f0c421a581386e1a4bc284e360ab\" with timeout 300 (s)"
May 14 05:11:38.905317 containerd[1598]: time="2025-05-14T05:11:38.905279653Z" level=info msg="Stop container \"6377a252762c862cd2a9f7469b3c55b75d88f0c421a581386e1a4bc284e360ab\" with signal terminated"
May 14 05:11:38.966203 containerd[1598]: time="2025-05-14T05:11:38.966142814Z" level=info msg="StopContainer for \"515eb9e699dd9da566803b8c0b66f30e0afe4928e97d07838c29fabe69d3ddea\" with timeout 30 (s)"
May 14 05:11:38.968001 containerd[1598]: time="2025-05-14T05:11:38.967964255Z" level=info msg="Stop container \"515eb9e699dd9da566803b8c0b66f30e0afe4928e97d07838c29fabe69d3ddea\" with signal terminated"
May 14 05:11:38.984260 systemd[1]: cri-containerd-515eb9e699dd9da566803b8c0b66f30e0afe4928e97d07838c29fabe69d3ddea.scope: Deactivated successfully.
May 14 05:11:38.986987 containerd[1598]: time="2025-05-14T05:11:38.986321159Z" level=info msg="received exit event container_id:\"515eb9e699dd9da566803b8c0b66f30e0afe4928e97d07838c29fabe69d3ddea\" id:\"515eb9e699dd9da566803b8c0b66f30e0afe4928e97d07838c29fabe69d3ddea\" pid:5170 exit_status:2 exited_at:{seconds:1747199498 nanos:986057199}"
May 14 05:11:38.987445 containerd[1598]: time="2025-05-14T05:11:38.987398963Z" level=info msg="TaskExit event in podsandbox handler container_id:\"515eb9e699dd9da566803b8c0b66f30e0afe4928e97d07838c29fabe69d3ddea\" id:\"515eb9e699dd9da566803b8c0b66f30e0afe4928e97d07838c29fabe69d3ddea\" pid:5170 exit_status:2 exited_at:{seconds:1747199498 nanos:986057199}"
May 14 05:11:39.030071 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-515eb9e699dd9da566803b8c0b66f30e0afe4928e97d07838c29fabe69d3ddea-rootfs.mount: Deactivated successfully.
May 14 05:11:39.212674 containerd[1598]: time="2025-05-14T05:11:39.212629111Z" level=info msg="StopContainer for \"515eb9e699dd9da566803b8c0b66f30e0afe4928e97d07838c29fabe69d3ddea\" returns successfully"
May 14 05:11:39.215135 containerd[1598]: time="2025-05-14T05:11:39.215058543Z" level=info msg="StopPodSandbox for \"9e905b0d8d794b09e617e4505ba08225cbde16d1dfcd79e489bb402575f2b259\""
May 14 05:11:39.221591 sshd[5441]: Connection closed by 10.0.0.1 port 48312
May 14 05:11:39.222579 sshd-session[5439]: pam_unix(sshd:session): session closed for user core
May 14 05:11:39.231884 systemd[1]: sshd@18-10.0.0.84:22-10.0.0.1:48312.service: Deactivated successfully.
May 14 05:11:39.233347 containerd[1598]: time="2025-05-14T05:11:39.233295236Z" level=info msg="Container to stop \"515eb9e699dd9da566803b8c0b66f30e0afe4928e97d07838c29fabe69d3ddea\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 14 05:11:39.234751 systemd[1]: session-19.scope: Deactivated successfully.
May 14 05:11:39.235934 systemd-logind[1573]: Session 19 logged out. Waiting for processes to exit.
May 14 05:11:39.240163 systemd[1]: Started sshd@19-10.0.0.84:22-10.0.0.1:48318.service - OpenSSH per-connection server daemon (10.0.0.1:48318).
May 14 05:11:39.243259 systemd[1]: cri-containerd-9e905b0d8d794b09e617e4505ba08225cbde16d1dfcd79e489bb402575f2b259.scope: Deactivated successfully.
May 14 05:11:39.245726 containerd[1598]: time="2025-05-14T05:11:39.245380079Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9e905b0d8d794b09e617e4505ba08225cbde16d1dfcd79e489bb402575f2b259\" id:\"9e905b0d8d794b09e617e4505ba08225cbde16d1dfcd79e489bb402575f2b259\" pid:4637 exit_status:137 exited_at:{seconds:1747199499 nanos:243272227}"
May 14 05:11:39.245495 systemd-logind[1573]: Removed session 19.
May 14 05:11:39.276949 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e905b0d8d794b09e617e4505ba08225cbde16d1dfcd79e489bb402575f2b259-rootfs.mount: Deactivated successfully.
May 14 05:11:39.278361 containerd[1598]: time="2025-05-14T05:11:39.278306896Z" level=info msg="shim disconnected" id=9e905b0d8d794b09e617e4505ba08225cbde16d1dfcd79e489bb402575f2b259 namespace=k8s.io
May 14 05:11:39.278683 containerd[1598]: time="2025-05-14T05:11:39.278340491Z" level=warning msg="cleaning up after shim disconnected" id=9e905b0d8d794b09e617e4505ba08225cbde16d1dfcd79e489bb402575f2b259 namespace=k8s.io
May 14 05:11:39.278683 containerd[1598]: time="2025-05-14T05:11:39.278621744Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 14 05:11:39.296782 sshd[5488]: Accepted publickey for core from 10.0.0.1 port 48318 ssh2: RSA SHA256:9779yoEmBEYtokxLadky4y4rhX8tiwkjz8vWSk0iWXU
May 14 05:11:39.298313 sshd-session[5488]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 05:11:39.304451 systemd-logind[1573]: New session 20 of user core.
May 14 05:11:39.309841 systemd[1]: Started session-20.scope - Session 20 of User core.
May 14 05:11:39.487440 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9e905b0d8d794b09e617e4505ba08225cbde16d1dfcd79e489bb402575f2b259-shm.mount: Deactivated successfully.
May 14 05:11:39.496228 containerd[1598]: time="2025-05-14T05:11:39.496180003Z" level=info msg="received exit event sandbox_id:\"9e905b0d8d794b09e617e4505ba08225cbde16d1dfcd79e489bb402575f2b259\" exit_status:137 exited_at:{seconds:1747199499 nanos:243272227}"
May 14 05:11:39.588604 systemd-networkd[1497]: cali8fa37478f27: Link DOWN
May 14 05:11:39.588614 systemd-networkd[1497]: cali8fa37478f27: Lost carrier
May 14 05:11:39.791335 containerd[1598]: 2025-05-14 05:11:39.584 [INFO][5547] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9e905b0d8d794b09e617e4505ba08225cbde16d1dfcd79e489bb402575f2b259"
May 14 05:11:39.791335 containerd[1598]: 2025-05-14 05:11:39.585 [INFO][5547] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9e905b0d8d794b09e617e4505ba08225cbde16d1dfcd79e489bb402575f2b259" iface="eth0" netns="/var/run/netns/cni-2cd514ed-dab0-4ab9-6865-47e5ef83fdac"
May 14 05:11:39.791335 containerd[1598]: 2025-05-14 05:11:39.585 [INFO][5547] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9e905b0d8d794b09e617e4505ba08225cbde16d1dfcd79e489bb402575f2b259" iface="eth0" netns="/var/run/netns/cni-2cd514ed-dab0-4ab9-6865-47e5ef83fdac"
May 14 05:11:39.791335 containerd[1598]: 2025-05-14 05:11:39.593 [INFO][5547] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="9e905b0d8d794b09e617e4505ba08225cbde16d1dfcd79e489bb402575f2b259" after=8.44951ms iface="eth0" netns="/var/run/netns/cni-2cd514ed-dab0-4ab9-6865-47e5ef83fdac"
May 14 05:11:39.791335 containerd[1598]: 2025-05-14 05:11:39.594 [INFO][5547] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9e905b0d8d794b09e617e4505ba08225cbde16d1dfcd79e489bb402575f2b259"
May 14 05:11:39.791335 containerd[1598]: 2025-05-14 05:11:39.594 [INFO][5547] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9e905b0d8d794b09e617e4505ba08225cbde16d1dfcd79e489bb402575f2b259"
May 14 05:11:39.791335 containerd[1598]: 2025-05-14 05:11:39.625 [INFO][5563] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9e905b0d8d794b09e617e4505ba08225cbde16d1dfcd79e489bb402575f2b259" HandleID="k8s-pod-network.9e905b0d8d794b09e617e4505ba08225cbde16d1dfcd79e489bb402575f2b259" Workload="localhost-k8s-calico--kube--controllers--79ccf86b7c--hxw7c-eth0"
May 14 05:11:39.791335 containerd[1598]: 2025-05-14 05:11:39.625 [INFO][5563] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 14 05:11:39.791335 containerd[1598]: 2025-05-14 05:11:39.625 [INFO][5563] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 14 05:11:39.791335 containerd[1598]: 2025-05-14 05:11:39.783 [INFO][5563] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="9e905b0d8d794b09e617e4505ba08225cbde16d1dfcd79e489bb402575f2b259" HandleID="k8s-pod-network.9e905b0d8d794b09e617e4505ba08225cbde16d1dfcd79e489bb402575f2b259" Workload="localhost-k8s-calico--kube--controllers--79ccf86b7c--hxw7c-eth0"
May 14 05:11:39.791335 containerd[1598]: 2025-05-14 05:11:39.783 [INFO][5563] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9e905b0d8d794b09e617e4505ba08225cbde16d1dfcd79e489bb402575f2b259" HandleID="k8s-pod-network.9e905b0d8d794b09e617e4505ba08225cbde16d1dfcd79e489bb402575f2b259" Workload="localhost-k8s-calico--kube--controllers--79ccf86b7c--hxw7c-eth0"
May 14 05:11:39.791335 containerd[1598]: 2025-05-14 05:11:39.785 [INFO][5563] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 14 05:11:39.791335 containerd[1598]: 2025-05-14 05:11:39.788 [INFO][5547] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9e905b0d8d794b09e617e4505ba08225cbde16d1dfcd79e489bb402575f2b259"
May 14 05:11:39.795134 systemd[1]: run-netns-cni\x2d2cd514ed\x2ddab0\x2d4ab9\x2d6865\x2d47e5ef83fdac.mount: Deactivated successfully.
May 14 05:11:39.801042 containerd[1598]: time="2025-05-14T05:11:39.800987302Z" level=info msg="TearDown network for sandbox \"9e905b0d8d794b09e617e4505ba08225cbde16d1dfcd79e489bb402575f2b259\" successfully"
May 14 05:11:39.801042 containerd[1598]: time="2025-05-14T05:11:39.801024353Z" level=info msg="StopPodSandbox for \"9e905b0d8d794b09e617e4505ba08225cbde16d1dfcd79e489bb402575f2b259\" returns successfully"
May 14 05:11:39.939454 kubelet[2831]: I0514 05:11:39.939398 2831 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/36ac7d47-9f28-487c-9914-f18cfeea4ed5-tigera-ca-bundle\") pod \"36ac7d47-9f28-487c-9914-f18cfeea4ed5\" (UID: \"36ac7d47-9f28-487c-9914-f18cfeea4ed5\") "
May 14 05:11:39.939454 kubelet[2831]: I0514 05:11:39.939451 2831 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dngdg\" (UniqueName: \"kubernetes.io/projected/36ac7d47-9f28-487c-9914-f18cfeea4ed5-kube-api-access-dngdg\") pod \"36ac7d47-9f28-487c-9914-f18cfeea4ed5\" (UID: \"36ac7d47-9f28-487c-9914-f18cfeea4ed5\") "
May 14 05:11:39.945295 systemd[1]: var-lib-kubelet-pods-36ac7d47\x2d9f28\x2d487c\x2d9914\x2df18cfeea4ed5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddngdg.mount: Deactivated successfully.
May 14 05:11:39.947617 kubelet[2831]: I0514 05:11:39.947227 2831 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36ac7d47-9f28-487c-9914-f18cfeea4ed5-kube-api-access-dngdg" (OuterVolumeSpecName: "kube-api-access-dngdg") pod "36ac7d47-9f28-487c-9914-f18cfeea4ed5" (UID: "36ac7d47-9f28-487c-9914-f18cfeea4ed5"). InnerVolumeSpecName "kube-api-access-dngdg". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 14 05:11:39.948450 kubelet[2831]: I0514 05:11:39.948396 2831 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36ac7d47-9f28-487c-9914-f18cfeea4ed5-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "36ac7d47-9f28-487c-9914-f18cfeea4ed5" (UID: "36ac7d47-9f28-487c-9914-f18cfeea4ed5"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 14 05:11:39.994085 kubelet[2831]: I0514 05:11:39.993986 2831 scope.go:117] "RemoveContainer" containerID="515eb9e699dd9da566803b8c0b66f30e0afe4928e97d07838c29fabe69d3ddea"
May 14 05:11:39.995857 containerd[1598]: time="2025-05-14T05:11:39.995826589Z" level=info msg="RemoveContainer for \"515eb9e699dd9da566803b8c0b66f30e0afe4928e97d07838c29fabe69d3ddea\""
May 14 05:11:39.999546 systemd[1]: Removed slice kubepods-besteffort-pod36ac7d47_9f28_487c_9914_f18cfeea4ed5.slice - libcontainer container kubepods-besteffort-pod36ac7d47_9f28_487c_9914_f18cfeea4ed5.slice.
May 14 05:11:40.026985 systemd[1]: var-lib-kubelet-pods-36ac7d47\x2d9f28\x2d487c\x2d9914\x2df18cfeea4ed5-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dkube\x2dcontrollers-1.mount: Deactivated successfully.
May 14 05:11:40.030826 containerd[1598]: time="2025-05-14T05:11:40.030675523Z" level=info msg="RemoveContainer for \"515eb9e699dd9da566803b8c0b66f30e0afe4928e97d07838c29fabe69d3ddea\" returns successfully"
May 14 05:11:40.039211 kubelet[2831]: I0514 05:11:40.039185 2831 scope.go:117] "RemoveContainer" containerID="515eb9e699dd9da566803b8c0b66f30e0afe4928e97d07838c29fabe69d3ddea"
May 14 05:11:40.039696 kubelet[2831]: I0514 05:11:40.039568 2831 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-dngdg\" (UniqueName: \"kubernetes.io/projected/36ac7d47-9f28-487c-9914-f18cfeea4ed5-kube-api-access-dngdg\") on node \"localhost\" DevicePath \"\""
May 14 05:11:40.040261 kubelet[2831]: I0514 05:11:40.040250 2831 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/36ac7d47-9f28-487c-9914-f18cfeea4ed5-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\""
May 14 05:11:40.040580 containerd[1598]: time="2025-05-14T05:11:40.040535443Z" level=error msg="ContainerStatus for \"515eb9e699dd9da566803b8c0b66f30e0afe4928e97d07838c29fabe69d3ddea\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"515eb9e699dd9da566803b8c0b66f30e0afe4928e97d07838c29fabe69d3ddea\": not found"
May 14 05:11:40.040830 kubelet[2831]: E0514 05:11:40.040813 2831 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"515eb9e699dd9da566803b8c0b66f30e0afe4928e97d07838c29fabe69d3ddea\": not found" containerID="515eb9e699dd9da566803b8c0b66f30e0afe4928e97d07838c29fabe69d3ddea"
May 14 05:11:40.042728 kubelet[2831]: I0514 05:11:40.040915 2831 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"515eb9e699dd9da566803b8c0b66f30e0afe4928e97d07838c29fabe69d3ddea"} err="failed to get container status \"515eb9e699dd9da566803b8c0b66f30e0afe4928e97d07838c29fabe69d3ddea\": rpc error: code = NotFound desc = an error occurred when try to find container \"515eb9e699dd9da566803b8c0b66f30e0afe4928e97d07838c29fabe69d3ddea\": not found"
May 14 05:11:40.308110 kubelet[2831]: I0514 05:11:40.306503 2831 topology_manager.go:215] "Topology Admit Handler" podUID="b641dca0-e20a-47ae-bd13-0fcbe14bea59" podNamespace="calico-system" podName="calico-kube-controllers-9c8868876-cltdn"
May 14 05:11:40.308110 kubelet[2831]: E0514 05:11:40.306592 2831 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="36ac7d47-9f28-487c-9914-f18cfeea4ed5" containerName="calico-kube-controllers"
May 14 05:11:40.308110 kubelet[2831]: I0514 05:11:40.306829 2831 memory_manager.go:354] "RemoveStaleState removing state" podUID="36ac7d47-9f28-487c-9914-f18cfeea4ed5" containerName="calico-kube-controllers"
May 14 05:11:40.313153 systemd[1]: Created slice kubepods-besteffort-podb641dca0_e20a_47ae_bd13_0fcbe14bea59.slice - libcontainer container kubepods-besteffort-podb641dca0_e20a_47ae_bd13_0fcbe14bea59.slice.
May 14 05:11:40.346382 kubelet[2831]: I0514 05:11:40.346323 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b641dca0-e20a-47ae-bd13-0fcbe14bea59-tigera-ca-bundle\") pod \"calico-kube-controllers-9c8868876-cltdn\" (UID: \"b641dca0-e20a-47ae-bd13-0fcbe14bea59\") " pod="calico-system/calico-kube-controllers-9c8868876-cltdn" May 14 05:11:40.347180 kubelet[2831]: I0514 05:11:40.347076 2831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdmvq\" (UniqueName: \"kubernetes.io/projected/b641dca0-e20a-47ae-bd13-0fcbe14bea59-kube-api-access-jdmvq\") pod \"calico-kube-controllers-9c8868876-cltdn\" (UID: \"b641dca0-e20a-47ae-bd13-0fcbe14bea59\") " pod="calico-system/calico-kube-controllers-9c8868876-cltdn" May 14 05:11:40.617485 containerd[1598]: time="2025-05-14T05:11:40.617406340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9c8868876-cltdn,Uid:b641dca0-e20a-47ae-bd13-0fcbe14bea59,Namespace:calico-system,Attempt:0,}" May 14 05:11:40.781075 systemd-networkd[1497]: cali24b5943cb94: Link UP May 14 05:11:40.781318 systemd-networkd[1497]: cali24b5943cb94: Gained carrier May 14 05:11:40.797588 containerd[1598]: 2025-05-14 05:11:40.661 [INFO][5585] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--9c8868876--cltdn-eth0 calico-kube-controllers-9c8868876- calico-system b641dca0-e20a-47ae-bd13-0fcbe14bea59 1118 0 2025-05-14 05:11:40 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:9c8868876 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-9c8868876-cltdn eth0 
calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali24b5943cb94 [] []}} ContainerID="eb2c57d320ed971bc097f2cb821d9a8af89825dbcf1501efb1be66fbab200bb5" Namespace="calico-system" Pod="calico-kube-controllers-9c8868876-cltdn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9c8868876--cltdn-" May 14 05:11:40.797588 containerd[1598]: 2025-05-14 05:11:40.662 [INFO][5585] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="eb2c57d320ed971bc097f2cb821d9a8af89825dbcf1501efb1be66fbab200bb5" Namespace="calico-system" Pod="calico-kube-controllers-9c8868876-cltdn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9c8868876--cltdn-eth0" May 14 05:11:40.797588 containerd[1598]: 2025-05-14 05:11:40.700 [INFO][5601] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eb2c57d320ed971bc097f2cb821d9a8af89825dbcf1501efb1be66fbab200bb5" HandleID="k8s-pod-network.eb2c57d320ed971bc097f2cb821d9a8af89825dbcf1501efb1be66fbab200bb5" Workload="localhost-k8s-calico--kube--controllers--9c8868876--cltdn-eth0" May 14 05:11:40.797588 containerd[1598]: 2025-05-14 05:11:40.712 [INFO][5601] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="eb2c57d320ed971bc097f2cb821d9a8af89825dbcf1501efb1be66fbab200bb5" HandleID="k8s-pod-network.eb2c57d320ed971bc097f2cb821d9a8af89825dbcf1501efb1be66fbab200bb5" Workload="localhost-k8s-calico--kube--controllers--9c8868876--cltdn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f1f60), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-9c8868876-cltdn", "timestamp":"2025-05-14 05:11:40.700382961 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 05:11:40.797588 containerd[1598]: 
2025-05-14 05:11:40.712 [INFO][5601] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 05:11:40.797588 containerd[1598]: 2025-05-14 05:11:40.713 [INFO][5601] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 14 05:11:40.797588 containerd[1598]: 2025-05-14 05:11:40.713 [INFO][5601] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 05:11:40.797588 containerd[1598]: 2025-05-14 05:11:40.714 [INFO][5601] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.eb2c57d320ed971bc097f2cb821d9a8af89825dbcf1501efb1be66fbab200bb5" host="localhost" May 14 05:11:40.797588 containerd[1598]: 2025-05-14 05:11:40.722 [INFO][5601] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 05:11:40.797588 containerd[1598]: 2025-05-14 05:11:40.733 [INFO][5601] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 05:11:40.797588 containerd[1598]: 2025-05-14 05:11:40.736 [INFO][5601] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 05:11:40.797588 containerd[1598]: 2025-05-14 05:11:40.739 [INFO][5601] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 05:11:40.797588 containerd[1598]: 2025-05-14 05:11:40.739 [INFO][5601] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.eb2c57d320ed971bc097f2cb821d9a8af89825dbcf1501efb1be66fbab200bb5" host="localhost" May 14 05:11:40.797588 containerd[1598]: 2025-05-14 05:11:40.741 [INFO][5601] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.eb2c57d320ed971bc097f2cb821d9a8af89825dbcf1501efb1be66fbab200bb5 May 14 05:11:40.797588 containerd[1598]: 2025-05-14 05:11:40.754 [INFO][5601] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.eb2c57d320ed971bc097f2cb821d9a8af89825dbcf1501efb1be66fbab200bb5" 
host="localhost" May 14 05:11:40.797588 containerd[1598]: 2025-05-14 05:11:40.760 [INFO][5601] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.eb2c57d320ed971bc097f2cb821d9a8af89825dbcf1501efb1be66fbab200bb5" host="localhost" May 14 05:11:40.797588 containerd[1598]: 2025-05-14 05:11:40.761 [INFO][5601] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.eb2c57d320ed971bc097f2cb821d9a8af89825dbcf1501efb1be66fbab200bb5" host="localhost" May 14 05:11:40.797588 containerd[1598]: 2025-05-14 05:11:40.761 [INFO][5601] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 05:11:40.797588 containerd[1598]: 2025-05-14 05:11:40.761 [INFO][5601] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="eb2c57d320ed971bc097f2cb821d9a8af89825dbcf1501efb1be66fbab200bb5" HandleID="k8s-pod-network.eb2c57d320ed971bc097f2cb821d9a8af89825dbcf1501efb1be66fbab200bb5" Workload="localhost-k8s-calico--kube--controllers--9c8868876--cltdn-eth0" May 14 05:11:40.798416 containerd[1598]: 2025-05-14 05:11:40.776 [INFO][5585] cni-plugin/k8s.go 386: Populated endpoint ContainerID="eb2c57d320ed971bc097f2cb821d9a8af89825dbcf1501efb1be66fbab200bb5" Namespace="calico-system" Pod="calico-kube-controllers-9c8868876-cltdn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9c8868876--cltdn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--9c8868876--cltdn-eth0", GenerateName:"calico-kube-controllers-9c8868876-", Namespace:"calico-system", SelfLink:"", UID:"b641dca0-e20a-47ae-bd13-0fcbe14bea59", ResourceVersion:"1118", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 5, 11, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9c8868876", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-9c8868876-cltdn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali24b5943cb94", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 05:11:40.798416 containerd[1598]: 2025-05-14 05:11:40.776 [INFO][5585] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.136/32] ContainerID="eb2c57d320ed971bc097f2cb821d9a8af89825dbcf1501efb1be66fbab200bb5" Namespace="calico-system" Pod="calico-kube-controllers-9c8868876-cltdn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9c8868876--cltdn-eth0" May 14 05:11:40.798416 containerd[1598]: 2025-05-14 05:11:40.776 [INFO][5585] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali24b5943cb94 ContainerID="eb2c57d320ed971bc097f2cb821d9a8af89825dbcf1501efb1be66fbab200bb5" Namespace="calico-system" Pod="calico-kube-controllers-9c8868876-cltdn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9c8868876--cltdn-eth0" May 14 05:11:40.798416 containerd[1598]: 2025-05-14 05:11:40.783 [INFO][5585] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eb2c57d320ed971bc097f2cb821d9a8af89825dbcf1501efb1be66fbab200bb5" Namespace="calico-system" 
Pod="calico-kube-controllers-9c8868876-cltdn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9c8868876--cltdn-eth0" May 14 05:11:40.798416 containerd[1598]: 2025-05-14 05:11:40.784 [INFO][5585] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="eb2c57d320ed971bc097f2cb821d9a8af89825dbcf1501efb1be66fbab200bb5" Namespace="calico-system" Pod="calico-kube-controllers-9c8868876-cltdn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9c8868876--cltdn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--9c8868876--cltdn-eth0", GenerateName:"calico-kube-controllers-9c8868876-", Namespace:"calico-system", SelfLink:"", UID:"b641dca0-e20a-47ae-bd13-0fcbe14bea59", ResourceVersion:"1118", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 5, 11, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9c8868876", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"eb2c57d320ed971bc097f2cb821d9a8af89825dbcf1501efb1be66fbab200bb5", Pod:"calico-kube-controllers-9c8868876-cltdn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali24b5943cb94", MAC:"ae:1f:c3:52:09:cf", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 05:11:40.798416 containerd[1598]: 2025-05-14 05:11:40.793 [INFO][5585] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="eb2c57d320ed971bc097f2cb821d9a8af89825dbcf1501efb1be66fbab200bb5" Namespace="calico-system" Pod="calico-kube-controllers-9c8868876-cltdn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9c8868876--cltdn-eth0" May 14 05:11:40.845924 containerd[1598]: time="2025-05-14T05:11:40.845631314Z" level=info msg="connecting to shim eb2c57d320ed971bc097f2cb821d9a8af89825dbcf1501efb1be66fbab200bb5" address="unix:///run/containerd/s/5d6c3630f2e4f94d27d1baeb4846aadd3d3d15cb0a8880052d98f2127fb6f636" namespace=k8s.io protocol=ttrpc version=3 May 14 05:11:40.863392 kubelet[2831]: I0514 05:11:40.863349 2831 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 05:11:40.881022 systemd[1]: Started cri-containerd-eb2c57d320ed971bc097f2cb821d9a8af89825dbcf1501efb1be66fbab200bb5.scope - libcontainer container eb2c57d320ed971bc097f2cb821d9a8af89825dbcf1501efb1be66fbab200bb5. 
May 14 05:11:40.917828 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 05:11:40.960829 containerd[1598]: time="2025-05-14T05:11:40.959533671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9c8868876-cltdn,Uid:b641dca0-e20a-47ae-bd13-0fcbe14bea59,Namespace:calico-system,Attempt:0,} returns sandbox id \"eb2c57d320ed971bc097f2cb821d9a8af89825dbcf1501efb1be66fbab200bb5\"" May 14 05:11:40.978731 containerd[1598]: time="2025-05-14T05:11:40.977349805Z" level=info msg="CreateContainer within sandbox \"eb2c57d320ed971bc097f2cb821d9a8af89825dbcf1501efb1be66fbab200bb5\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 14 05:11:40.990646 containerd[1598]: time="2025-05-14T05:11:40.990228691Z" level=info msg="Container 05dc1634232f2cffadcb9ec20a3d263385e04854006c3d719158b18b2ed16a69: CDI devices from CRI Config.CDIDevices: []" May 14 05:11:41.000257 containerd[1598]: time="2025-05-14T05:11:41.000185437Z" level=info msg="CreateContainer within sandbox \"eb2c57d320ed971bc097f2cb821d9a8af89825dbcf1501efb1be66fbab200bb5\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"05dc1634232f2cffadcb9ec20a3d263385e04854006c3d719158b18b2ed16a69\"" May 14 05:11:41.001720 containerd[1598]: time="2025-05-14T05:11:41.001170798Z" level=info msg="StartContainer for \"05dc1634232f2cffadcb9ec20a3d263385e04854006c3d719158b18b2ed16a69\"" May 14 05:11:41.002376 containerd[1598]: time="2025-05-14T05:11:41.002358288Z" level=info msg="connecting to shim 05dc1634232f2cffadcb9ec20a3d263385e04854006c3d719158b18b2ed16a69" address="unix:///run/containerd/s/5d6c3630f2e4f94d27d1baeb4846aadd3d3d15cb0a8880052d98f2127fb6f636" protocol=ttrpc version=3 May 14 05:11:41.026081 systemd[1]: Started cri-containerd-05dc1634232f2cffadcb9ec20a3d263385e04854006c3d719158b18b2ed16a69.scope - libcontainer container 
05dc1634232f2cffadcb9ec20a3d263385e04854006c3d719158b18b2ed16a69. May 14 05:11:41.095598 containerd[1598]: time="2025-05-14T05:11:41.095557692Z" level=info msg="StartContainer for \"05dc1634232f2cffadcb9ec20a3d263385e04854006c3d719158b18b2ed16a69\" returns successfully" May 14 05:11:41.229890 sshd[5524]: Connection closed by 10.0.0.1 port 48318 May 14 05:11:41.233418 sshd-session[5488]: pam_unix(sshd:session): session closed for user core May 14 05:11:41.243299 systemd[1]: sshd@19-10.0.0.84:22-10.0.0.1:48318.service: Deactivated successfully. May 14 05:11:41.246745 systemd[1]: session-20.scope: Deactivated successfully. May 14 05:11:41.247335 systemd[1]: session-20.scope: Consumed 565ms CPU time, 69.6M memory peak. May 14 05:11:41.249226 systemd-logind[1573]: Session 20 logged out. Waiting for processes to exit. May 14 05:11:41.257035 systemd[1]: Started sshd@20-10.0.0.84:22-10.0.0.1:48324.service - OpenSSH per-connection server daemon (10.0.0.1:48324). May 14 05:11:41.258614 systemd-logind[1573]: Removed session 20. May 14 05:11:41.307746 sshd[5729]: Accepted publickey for core from 10.0.0.1 port 48324 ssh2: RSA SHA256:9779yoEmBEYtokxLadky4y4rhX8tiwkjz8vWSk0iWXU May 14 05:11:41.309255 sshd-session[5729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 05:11:41.313567 systemd-logind[1573]: New session 21 of user core. May 14 05:11:41.318832 systemd[1]: Started session-21.scope - Session 21 of User core. May 14 05:11:41.465374 kubelet[2831]: I0514 05:11:41.465327 2831 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36ac7d47-9f28-487c-9914-f18cfeea4ed5" path="/var/lib/kubelet/pods/36ac7d47-9f28-487c-9914-f18cfeea4ed5/volumes" May 14 05:11:41.924158 sshd[5731]: Connection closed by 10.0.0.1 port 48324 May 14 05:11:41.925061 sshd-session[5729]: pam_unix(sshd:session): session closed for user core May 14 05:11:41.941580 systemd[1]: sshd@20-10.0.0.84:22-10.0.0.1:48324.service: Deactivated successfully. 
May 14 05:11:41.944228 systemd[1]: session-21.scope: Deactivated successfully. May 14 05:11:41.946344 systemd-logind[1573]: Session 21 logged out. Waiting for processes to exit. May 14 05:11:41.949102 systemd-logind[1573]: Removed session 21. May 14 05:11:41.953008 systemd[1]: Started sshd@21-10.0.0.84:22-10.0.0.1:48328.service - OpenSSH per-connection server daemon (10.0.0.1:48328). May 14 05:11:42.003915 sshd[5747]: Accepted publickey for core from 10.0.0.1 port 48328 ssh2: RSA SHA256:9779yoEmBEYtokxLadky4y4rhX8tiwkjz8vWSk0iWXU May 14 05:11:42.006428 sshd-session[5747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 05:11:42.015266 systemd-logind[1573]: New session 22 of user core. May 14 05:11:42.018973 systemd[1]: Started session-22.scope - Session 22 of User core. May 14 05:11:42.051984 kubelet[2831]: I0514 05:11:42.051907 2831 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-9c8868876-cltdn" podStartSLOduration=2.051889272 podStartE2EDuration="2.051889272s" podCreationTimestamp="2025-05-14 05:11:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 05:11:42.051385881 +0000 UTC m=+76.669308064" watchObservedRunningTime="2025-05-14 05:11:42.051889272 +0000 UTC m=+76.669811455" May 14 05:11:42.095501 containerd[1598]: time="2025-05-14T05:11:42.093885230Z" level=info msg="TaskExit event in podsandbox handler container_id:\"05dc1634232f2cffadcb9ec20a3d263385e04854006c3d719158b18b2ed16a69\" id:\"e58ae8213a63e9ddd31890d7ae93d5593246c818b53528de7fa8e4c8a6e288e1\" pid:5772 exited_at:{seconds:1747199502 nanos:93582728}" May 14 05:11:42.167459 sshd[5759]: Connection closed by 10.0.0.1 port 48328 May 14 05:11:42.167922 sshd-session[5747]: pam_unix(sshd:session): session closed for user core May 14 05:11:42.172590 systemd[1]: sshd@21-10.0.0.84:22-10.0.0.1:48328.service: 
Deactivated successfully. May 14 05:11:42.175015 systemd[1]: session-22.scope: Deactivated successfully. May 14 05:11:42.176952 systemd-logind[1573]: Session 22 logged out. Waiting for processes to exit. May 14 05:11:42.178736 systemd-logind[1573]: Removed session 22. May 14 05:11:42.784899 systemd-networkd[1497]: cali24b5943cb94: Gained IPv6LL May 14 05:11:43.326188 systemd[1]: cri-containerd-6377a252762c862cd2a9f7469b3c55b75d88f0c421a581386e1a4bc284e360ab.scope: Deactivated successfully. May 14 05:11:43.326634 systemd[1]: cri-containerd-6377a252762c862cd2a9f7469b3c55b75d88f0c421a581386e1a4bc284e360ab.scope: Consumed 322ms CPU time, 30.1M memory peak, 7.1M read from disk. May 14 05:11:43.328062 containerd[1598]: time="2025-05-14T05:11:43.328015931Z" level=info msg="received exit event container_id:\"6377a252762c862cd2a9f7469b3c55b75d88f0c421a581386e1a4bc284e360ab\" id:\"6377a252762c862cd2a9f7469b3c55b75d88f0c421a581386e1a4bc284e360ab\" pid:3431 exit_status:1 exited_at:{seconds:1747199503 nanos:327634026}" May 14 05:11:43.328656 containerd[1598]: time="2025-05-14T05:11:43.328623651Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6377a252762c862cd2a9f7469b3c55b75d88f0c421a581386e1a4bc284e360ab\" id:\"6377a252762c862cd2a9f7469b3c55b75d88f0c421a581386e1a4bc284e360ab\" pid:3431 exit_status:1 exited_at:{seconds:1747199503 nanos:327634026}" May 14 05:11:43.350986 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6377a252762c862cd2a9f7469b3c55b75d88f0c421a581386e1a4bc284e360ab-rootfs.mount: Deactivated successfully. 
May 14 05:11:43.520055 containerd[1598]: time="2025-05-14T05:11:43.520003988Z" level=info msg="StopContainer for \"6377a252762c862cd2a9f7469b3c55b75d88f0c421a581386e1a4bc284e360ab\" returns successfully" May 14 05:11:43.520800 containerd[1598]: time="2025-05-14T05:11:43.520757349Z" level=info msg="StopPodSandbox for \"8526bc6f1347e1c1a777ca2a91209070d1a4706d5db42bab2845136d73379ab2\"" May 14 05:11:43.521196 containerd[1598]: time="2025-05-14T05:11:43.520822234Z" level=info msg="Container to stop \"6377a252762c862cd2a9f7469b3c55b75d88f0c421a581386e1a4bc284e360ab\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 05:11:43.534747 systemd[1]: cri-containerd-8526bc6f1347e1c1a777ca2a91209070d1a4706d5db42bab2845136d73379ab2.scope: Deactivated successfully. May 14 05:11:43.538557 containerd[1598]: time="2025-05-14T05:11:43.538441162Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8526bc6f1347e1c1a777ca2a91209070d1a4706d5db42bab2845136d73379ab2\" id:\"8526bc6f1347e1c1a777ca2a91209070d1a4706d5db42bab2845136d73379ab2\" pid:3355 exit_status:137 exited_at:{seconds:1747199503 nanos:537796270}" May 14 05:11:43.565962 containerd[1598]: time="2025-05-14T05:11:43.565761052Z" level=info msg="shim disconnected" id=8526bc6f1347e1c1a777ca2a91209070d1a4706d5db42bab2845136d73379ab2 namespace=k8s.io May 14 05:11:43.565962 containerd[1598]: time="2025-05-14T05:11:43.565795117Z" level=warning msg="cleaning up after shim disconnected" id=8526bc6f1347e1c1a777ca2a91209070d1a4706d5db42bab2845136d73379ab2 namespace=k8s.io May 14 05:11:43.565962 containerd[1598]: time="2025-05-14T05:11:43.565804575Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 05:11:43.568082 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8526bc6f1347e1c1a777ca2a91209070d1a4706d5db42bab2845136d73379ab2-rootfs.mount: Deactivated successfully. 
May 14 05:11:43.584941 containerd[1598]: time="2025-05-14T05:11:43.584819442Z" level=info msg="received exit event sandbox_id:\"8526bc6f1347e1c1a777ca2a91209070d1a4706d5db42bab2845136d73379ab2\" exit_status:137 exited_at:{seconds:1747199503 nanos:537796270}" May 14 05:11:43.585748 containerd[1598]: time="2025-05-14T05:11:43.585678256Z" level=info msg="TearDown network for sandbox \"8526bc6f1347e1c1a777ca2a91209070d1a4706d5db42bab2845136d73379ab2\" successfully" May 14 05:11:43.585748 containerd[1598]: time="2025-05-14T05:11:43.585734835Z" level=info msg="StopPodSandbox for \"8526bc6f1347e1c1a777ca2a91209070d1a4706d5db42bab2845136d73379ab2\" returns successfully" May 14 05:11:43.588032 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8526bc6f1347e1c1a777ca2a91209070d1a4706d5db42bab2845136d73379ab2-shm.mount: Deactivated successfully. May 14 05:11:43.668809 kubelet[2831]: I0514 05:11:43.668764 2831 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsdnp\" (UniqueName: \"kubernetes.io/projected/3ec6c052-4c37-45d2-94e6-d3178f9b2f81-kube-api-access-nsdnp\") pod \"3ec6c052-4c37-45d2-94e6-d3178f9b2f81\" (UID: \"3ec6c052-4c37-45d2-94e6-d3178f9b2f81\") " May 14 05:11:43.669255 kubelet[2831]: I0514 05:11:43.668823 2831 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3ec6c052-4c37-45d2-94e6-d3178f9b2f81-typha-certs\") pod \"3ec6c052-4c37-45d2-94e6-d3178f9b2f81\" (UID: \"3ec6c052-4c37-45d2-94e6-d3178f9b2f81\") " May 14 05:11:43.669255 kubelet[2831]: I0514 05:11:43.668857 2831 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ec6c052-4c37-45d2-94e6-d3178f9b2f81-tigera-ca-bundle\") pod \"3ec6c052-4c37-45d2-94e6-d3178f9b2f81\" (UID: \"3ec6c052-4c37-45d2-94e6-d3178f9b2f81\") " May 14 05:11:43.674994 kubelet[2831]: I0514 05:11:43.674261 2831 
operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ec6c052-4c37-45d2-94e6-d3178f9b2f81-kube-api-access-nsdnp" (OuterVolumeSpecName: "kube-api-access-nsdnp") pod "3ec6c052-4c37-45d2-94e6-d3178f9b2f81" (UID: "3ec6c052-4c37-45d2-94e6-d3178f9b2f81"). InnerVolumeSpecName "kube-api-access-nsdnp". PluginName "kubernetes.io/projected", VolumeGidValue "" May 14 05:11:43.675291 kubelet[2831]: I0514 05:11:43.675233 2831 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ec6c052-4c37-45d2-94e6-d3178f9b2f81-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "3ec6c052-4c37-45d2-94e6-d3178f9b2f81" (UID: "3ec6c052-4c37-45d2-94e6-d3178f9b2f81"). InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" May 14 05:11:43.676349 systemd[1]: var-lib-kubelet-pods-3ec6c052\x2d4c37\x2d45d2\x2d94e6\x2dd3178f9b2f81-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnsdnp.mount: Deactivated successfully. May 14 05:11:43.676488 systemd[1]: var-lib-kubelet-pods-3ec6c052\x2d4c37\x2d45d2\x2d94e6\x2dd3178f9b2f81-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. May 14 05:11:43.680761 kubelet[2831]: I0514 05:11:43.680685 2831 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ec6c052-4c37-45d2-94e6-d3178f9b2f81-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "3ec6c052-4c37-45d2-94e6-d3178f9b2f81" (UID: "3ec6c052-4c37-45d2-94e6-d3178f9b2f81"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 14 05:11:43.682081 systemd[1]: var-lib-kubelet-pods-3ec6c052\x2d4c37\x2d45d2\x2d94e6\x2dd3178f9b2f81-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. 
May 14 05:11:43.769614 kubelet[2831]: I0514 05:11:43.769557 2831 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-nsdnp\" (UniqueName: \"kubernetes.io/projected/3ec6c052-4c37-45d2-94e6-d3178f9b2f81-kube-api-access-nsdnp\") on node \"localhost\" DevicePath \"\"" May 14 05:11:43.769614 kubelet[2831]: I0514 05:11:43.769593 2831 reconciler_common.go:289] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3ec6c052-4c37-45d2-94e6-d3178f9b2f81-typha-certs\") on node \"localhost\" DevicePath \"\"" May 14 05:11:43.769614 kubelet[2831]: I0514 05:11:43.769601 2831 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ec6c052-4c37-45d2-94e6-d3178f9b2f81-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\"" May 14 05:11:44.016345 kubelet[2831]: I0514 05:11:44.015736 2831 scope.go:117] "RemoveContainer" containerID="6377a252762c862cd2a9f7469b3c55b75d88f0c421a581386e1a4bc284e360ab" May 14 05:11:44.020181 containerd[1598]: time="2025-05-14T05:11:44.020149271Z" level=info msg="RemoveContainer for \"6377a252762c862cd2a9f7469b3c55b75d88f0c421a581386e1a4bc284e360ab\"" May 14 05:11:44.027244 containerd[1598]: time="2025-05-14T05:11:44.027204297Z" level=info msg="RemoveContainer for \"6377a252762c862cd2a9f7469b3c55b75d88f0c421a581386e1a4bc284e360ab\" returns successfully" May 14 05:11:44.027699 systemd[1]: Removed slice kubepods-besteffort-pod3ec6c052_4c37_45d2_94e6_d3178f9b2f81.slice - libcontainer container kubepods-besteffort-pod3ec6c052_4c37_45d2_94e6_d3178f9b2f81.slice. 
May 14 05:11:44.030912 containerd[1598]: time="2025-05-14T05:11:44.028630681Z" level=error msg="ContainerStatus for \"6377a252762c862cd2a9f7469b3c55b75d88f0c421a581386e1a4bc284e360ab\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6377a252762c862cd2a9f7469b3c55b75d88f0c421a581386e1a4bc284e360ab\": not found" May 14 05:11:44.030954 kubelet[2831]: I0514 05:11:44.028387 2831 scope.go:117] "RemoveContainer" containerID="6377a252762c862cd2a9f7469b3c55b75d88f0c421a581386e1a4bc284e360ab" May 14 05:11:44.030954 kubelet[2831]: E0514 05:11:44.028831 2831 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6377a252762c862cd2a9f7469b3c55b75d88f0c421a581386e1a4bc284e360ab\": not found" containerID="6377a252762c862cd2a9f7469b3c55b75d88f0c421a581386e1a4bc284e360ab" May 14 05:11:44.030954 kubelet[2831]: I0514 05:11:44.028898 2831 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6377a252762c862cd2a9f7469b3c55b75d88f0c421a581386e1a4bc284e360ab"} err="failed to get container status \"6377a252762c862cd2a9f7469b3c55b75d88f0c421a581386e1a4bc284e360ab\": rpc error: code = NotFound desc = an error occurred when try to find container \"6377a252762c862cd2a9f7469b3c55b75d88f0c421a581386e1a4bc284e360ab\": not found" May 14 05:11:44.027834 systemd[1]: kubepods-besteffort-pod3ec6c052_4c37_45d2_94e6_d3178f9b2f81.slice: Consumed 349ms CPU time, 30.3M memory peak, 7.1M read from disk. May 14 05:11:45.465584 kubelet[2831]: I0514 05:11:45.465529 2831 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ec6c052-4c37-45d2-94e6-d3178f9b2f81" path="/var/lib/kubelet/pods/3ec6c052-4c37-45d2-94e6-d3178f9b2f81/volumes" May 14 05:11:47.187862 systemd[1]: Started sshd@22-10.0.0.84:22-10.0.0.1:48332.service - OpenSSH per-connection server daemon (10.0.0.1:48332). 
May 14 05:11:47.240661 sshd[5982]: Accepted publickey for core from 10.0.0.1 port 48332 ssh2: RSA SHA256:9779yoEmBEYtokxLadky4y4rhX8tiwkjz8vWSk0iWXU May 14 05:11:47.242041 sshd-session[5982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 05:11:47.246491 systemd-logind[1573]: New session 23 of user core. May 14 05:11:47.253937 systemd[1]: Started session-23.scope - Session 23 of User core. May 14 05:11:47.377601 sshd[5984]: Connection closed by 10.0.0.1 port 48332 May 14 05:11:47.377892 sshd-session[5982]: pam_unix(sshd:session): session closed for user core May 14 05:11:47.382290 systemd[1]: sshd@22-10.0.0.84:22-10.0.0.1:48332.service: Deactivated successfully. May 14 05:11:47.384517 systemd[1]: session-23.scope: Deactivated successfully. May 14 05:11:47.385395 systemd-logind[1573]: Session 23 logged out. Waiting for processes to exit. May 14 05:11:47.386921 systemd-logind[1573]: Removed session 23. May 14 05:11:48.723197 kubelet[2831]: I0514 05:11:48.723141 2831 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 05:11:49.376034 containerd[1598]: time="2025-05-14T05:11:49.375982905Z" level=info msg="TaskExit event in podsandbox handler container_id:\"038e8e912ce25038576cc0267e20c1af3d3b605f82a4a3debd273f138d7f7e47\" id:\"a5b7ddcdc631c62745435894be2398eea609210f4aac34841b1b23fe32f55cf1\" pid:6061 exit_status:1 exited_at:{seconds:1747199509 nanos:375501512}" May 14 05:11:52.399030 systemd[1]: Started sshd@23-10.0.0.84:22-10.0.0.1:48740.service - OpenSSH per-connection server daemon (10.0.0.1:48740). May 14 05:11:52.445814 sshd[6144]: Accepted publickey for core from 10.0.0.1 port 48740 ssh2: RSA SHA256:9779yoEmBEYtokxLadky4y4rhX8tiwkjz8vWSk0iWXU May 14 05:11:52.447170 sshd-session[6144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 05:11:52.451232 systemd-logind[1573]: New session 24 of user core. 
May 14 05:11:52.461832 systemd[1]: Started session-24.scope - Session 24 of User core.
May 14 05:11:52.562701 sshd[6146]: Connection closed by 10.0.0.1 port 48740
May 14 05:11:52.562997 sshd-session[6144]: pam_unix(sshd:session): session closed for user core
May 14 05:11:52.566844 systemd[1]: sshd@23-10.0.0.84:22-10.0.0.1:48740.service: Deactivated successfully.
May 14 05:11:52.568719 systemd[1]: session-24.scope: Deactivated successfully.
May 14 05:11:52.569483 systemd-logind[1573]: Session 24 logged out. Waiting for processes to exit.
May 14 05:11:52.570655 systemd-logind[1573]: Removed session 24.
May 14 05:11:57.580263 systemd[1]: Started sshd@24-10.0.0.84:22-10.0.0.1:48754.service - OpenSSH per-connection server daemon (10.0.0.1:48754).
May 14 05:11:57.642261 sshd[6251]: Accepted publickey for core from 10.0.0.1 port 48754 ssh2: RSA SHA256:9779yoEmBEYtokxLadky4y4rhX8tiwkjz8vWSk0iWXU
May 14 05:11:57.643883 sshd-session[6251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 05:11:57.648135 systemd-logind[1573]: New session 25 of user core.
May 14 05:11:57.654825 systemd[1]: Started session-25.scope - Session 25 of User core.
May 14 05:11:57.761818 sshd[6253]: Connection closed by 10.0.0.1 port 48754
May 14 05:11:57.762118 sshd-session[6251]: pam_unix(sshd:session): session closed for user core
May 14 05:11:57.765904 systemd[1]: sshd@24-10.0.0.84:22-10.0.0.1:48754.service: Deactivated successfully.
May 14 05:11:57.767863 systemd[1]: session-25.scope: Deactivated successfully.
May 14 05:11:57.768647 systemd-logind[1573]: Session 25 logged out. Waiting for processes to exit.
May 14 05:11:57.770042 systemd-logind[1573]: Removed session 25.
May 14 05:12:00.892872 kubelet[2831]: I0514 05:12:00.892832 2831 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 14 05:12:00.935908 containerd[1598]: time="2025-05-14T05:12:00.935845054Z" level=info msg="StopContainer for \"9f314810d2e280184120cf0f78dc58a8b5cebf6c580e3af44ad634c75b75ef40\" with timeout 30 (s)"
May 14 05:12:00.937525 containerd[1598]: time="2025-05-14T05:12:00.937491535Z" level=info msg="Stop container \"9f314810d2e280184120cf0f78dc58a8b5cebf6c580e3af44ad634c75b75ef40\" with signal terminated"
May 14 05:12:00.959225 systemd[1]: cri-containerd-9f314810d2e280184120cf0f78dc58a8b5cebf6c580e3af44ad634c75b75ef40.scope: Deactivated successfully.
May 14 05:12:00.966098 containerd[1598]: time="2025-05-14T05:12:00.965913769Z" level=info msg="received exit event container_id:\"9f314810d2e280184120cf0f78dc58a8b5cebf6c580e3af44ad634c75b75ef40\" id:\"9f314810d2e280184120cf0f78dc58a8b5cebf6c580e3af44ad634c75b75ef40\" pid:5312 exit_status:1 exited_at:{seconds:1747199520 nanos:964295130}"
May 14 05:12:00.967175 containerd[1598]: time="2025-05-14T05:12:00.967147813Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9f314810d2e280184120cf0f78dc58a8b5cebf6c580e3af44ad634c75b75ef40\" id:\"9f314810d2e280184120cf0f78dc58a8b5cebf6c580e3af44ad634c75b75ef40\" pid:5312 exit_status:1 exited_at:{seconds:1747199520 nanos:964295130}"
May 14 05:12:00.996236 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9f314810d2e280184120cf0f78dc58a8b5cebf6c580e3af44ad634c75b75ef40-rootfs.mount: Deactivated successfully.
May 14 05:12:01.019348 containerd[1598]: time="2025-05-14T05:12:01.019311667Z" level=info msg="StopContainer for \"9f314810d2e280184120cf0f78dc58a8b5cebf6c580e3af44ad634c75b75ef40\" returns successfully"
May 14 05:12:01.019809 containerd[1598]: time="2025-05-14T05:12:01.019783337Z" level=info msg="StopPodSandbox for \"41b1807c7bba99ab83eb1b4d6d4427b7402268310c390113bfb2bfbb78cb04bf\""
May 14 05:12:01.019858 containerd[1598]: time="2025-05-14T05:12:01.019841869Z" level=info msg="Container to stop \"9f314810d2e280184120cf0f78dc58a8b5cebf6c580e3af44ad634c75b75ef40\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 14 05:12:01.027051 systemd[1]: cri-containerd-41b1807c7bba99ab83eb1b4d6d4427b7402268310c390113bfb2bfbb78cb04bf.scope: Deactivated successfully.
May 14 05:12:01.027852 containerd[1598]: time="2025-05-14T05:12:01.027818420Z" level=info msg="TaskExit event in podsandbox handler container_id:\"41b1807c7bba99ab83eb1b4d6d4427b7402268310c390113bfb2bfbb78cb04bf\" id:\"41b1807c7bba99ab83eb1b4d6d4427b7402268310c390113bfb2bfbb78cb04bf\" pid:4798 exit_status:137 exited_at:{seconds:1747199521 nanos:27496176}"
May 14 05:12:01.056384 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-41b1807c7bba99ab83eb1b4d6d4427b7402268310c390113bfb2bfbb78cb04bf-rootfs.mount: Deactivated successfully.
May 14 05:12:01.060405 containerd[1598]: time="2025-05-14T05:12:01.060353987Z" level=info msg="shim disconnected" id=41b1807c7bba99ab83eb1b4d6d4427b7402268310c390113bfb2bfbb78cb04bf namespace=k8s.io
May 14 05:12:01.060405 containerd[1598]: time="2025-05-14T05:12:01.060390246Z" level=warning msg="cleaning up after shim disconnected" id=41b1807c7bba99ab83eb1b4d6d4427b7402268310c390113bfb2bfbb78cb04bf namespace=k8s.io
May 14 05:12:01.060585 containerd[1598]: time="2025-05-14T05:12:01.060400044Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 14 05:12:01.080390 containerd[1598]: time="2025-05-14T05:12:01.080334691Z" level=info msg="received exit event sandbox_id:\"41b1807c7bba99ab83eb1b4d6d4427b7402268310c390113bfb2bfbb78cb04bf\" exit_status:137 exited_at:{seconds:1747199521 nanos:27496176}"
May 14 05:12:01.083577 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-41b1807c7bba99ab83eb1b4d6d4427b7402268310c390113bfb2bfbb78cb04bf-shm.mount: Deactivated successfully.
May 14 05:12:01.136861 systemd-networkd[1497]: calic65ff570fa4: Link DOWN
May 14 05:12:01.136870 systemd-networkd[1497]: calic65ff570fa4: Lost carrier
May 14 05:12:01.235363 containerd[1598]: 2025-05-14 05:12:01.135 [INFO][6407] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="41b1807c7bba99ab83eb1b4d6d4427b7402268310c390113bfb2bfbb78cb04bf"
May 14 05:12:01.235363 containerd[1598]: 2025-05-14 05:12:01.135 [INFO][6407] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="41b1807c7bba99ab83eb1b4d6d4427b7402268310c390113bfb2bfbb78cb04bf" iface="eth0" netns="/var/run/netns/cni-d933badc-5e39-871c-33ff-2483199ac325"
May 14 05:12:01.235363 containerd[1598]: 2025-05-14 05:12:01.135 [INFO][6407] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="41b1807c7bba99ab83eb1b4d6d4427b7402268310c390113bfb2bfbb78cb04bf" iface="eth0" netns="/var/run/netns/cni-d933badc-5e39-871c-33ff-2483199ac325"
May 14 05:12:01.235363 containerd[1598]: 2025-05-14 05:12:01.143 [INFO][6407] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="41b1807c7bba99ab83eb1b4d6d4427b7402268310c390113bfb2bfbb78cb04bf" after=8.340837ms iface="eth0" netns="/var/run/netns/cni-d933badc-5e39-871c-33ff-2483199ac325"
May 14 05:12:01.235363 containerd[1598]: 2025-05-14 05:12:01.143 [INFO][6407] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="41b1807c7bba99ab83eb1b4d6d4427b7402268310c390113bfb2bfbb78cb04bf"
May 14 05:12:01.235363 containerd[1598]: 2025-05-14 05:12:01.143 [INFO][6407] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="41b1807c7bba99ab83eb1b4d6d4427b7402268310c390113bfb2bfbb78cb04bf"
May 14 05:12:01.235363 containerd[1598]: 2025-05-14 05:12:01.164 [INFO][6417] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="41b1807c7bba99ab83eb1b4d6d4427b7402268310c390113bfb2bfbb78cb04bf" HandleID="k8s-pod-network.41b1807c7bba99ab83eb1b4d6d4427b7402268310c390113bfb2bfbb78cb04bf" Workload="localhost-k8s-calico--apiserver--845d69865c--7txs2-eth0"
May 14 05:12:01.235363 containerd[1598]: 2025-05-14 05:12:01.165 [INFO][6417] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 14 05:12:01.235363 containerd[1598]: 2025-05-14 05:12:01.165 [INFO][6417] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 14 05:12:01.235363 containerd[1598]: 2025-05-14 05:12:01.228 [INFO][6417] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="41b1807c7bba99ab83eb1b4d6d4427b7402268310c390113bfb2bfbb78cb04bf" HandleID="k8s-pod-network.41b1807c7bba99ab83eb1b4d6d4427b7402268310c390113bfb2bfbb78cb04bf" Workload="localhost-k8s-calico--apiserver--845d69865c--7txs2-eth0"
May 14 05:12:01.235363 containerd[1598]: 2025-05-14 05:12:01.228 [INFO][6417] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="41b1807c7bba99ab83eb1b4d6d4427b7402268310c390113bfb2bfbb78cb04bf" HandleID="k8s-pod-network.41b1807c7bba99ab83eb1b4d6d4427b7402268310c390113bfb2bfbb78cb04bf" Workload="localhost-k8s-calico--apiserver--845d69865c--7txs2-eth0"
May 14 05:12:01.235363 containerd[1598]: 2025-05-14 05:12:01.230 [INFO][6417] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 14 05:12:01.235363 containerd[1598]: 2025-05-14 05:12:01.233 [INFO][6407] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="41b1807c7bba99ab83eb1b4d6d4427b7402268310c390113bfb2bfbb78cb04bf"
May 14 05:12:01.236362 containerd[1598]: time="2025-05-14T05:12:01.236239054Z" level=info msg="TearDown network for sandbox \"41b1807c7bba99ab83eb1b4d6d4427b7402268310c390113bfb2bfbb78cb04bf\" successfully"
May 14 05:12:01.236362 containerd[1598]: time="2025-05-14T05:12:01.236270034Z" level=info msg="StopPodSandbox for \"41b1807c7bba99ab83eb1b4d6d4427b7402268310c390113bfb2bfbb78cb04bf\" returns successfully"
May 14 05:12:01.238433 systemd[1]: run-netns-cni\x2dd933badc\x2d5e39\x2d871c\x2d33ff\x2d2483199ac325.mount: Deactivated successfully.
May 14 05:12:01.265370 kubelet[2831]: I0514 05:12:01.265330 2831 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/518875d2-b6d3-47f4-be27-8df3a2cdbf54-calico-apiserver-certs\") pod \"518875d2-b6d3-47f4-be27-8df3a2cdbf54\" (UID: \"518875d2-b6d3-47f4-be27-8df3a2cdbf54\") "
May 14 05:12:01.265370 kubelet[2831]: I0514 05:12:01.265371 2831 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks2d8\" (UniqueName: \"kubernetes.io/projected/518875d2-b6d3-47f4-be27-8df3a2cdbf54-kube-api-access-ks2d8\") pod \"518875d2-b6d3-47f4-be27-8df3a2cdbf54\" (UID: \"518875d2-b6d3-47f4-be27-8df3a2cdbf54\") "
May 14 05:12:01.269120 kubelet[2831]: I0514 05:12:01.269055 2831 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/518875d2-b6d3-47f4-be27-8df3a2cdbf54-kube-api-access-ks2d8" (OuterVolumeSpecName: "kube-api-access-ks2d8") pod "518875d2-b6d3-47f4-be27-8df3a2cdbf54" (UID: "518875d2-b6d3-47f4-be27-8df3a2cdbf54"). InnerVolumeSpecName "kube-api-access-ks2d8". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 14 05:12:01.269330 kubelet[2831]: I0514 05:12:01.269280 2831 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/518875d2-b6d3-47f4-be27-8df3a2cdbf54-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "518875d2-b6d3-47f4-be27-8df3a2cdbf54" (UID: "518875d2-b6d3-47f4-be27-8df3a2cdbf54"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 14 05:12:01.271174 systemd[1]: var-lib-kubelet-pods-518875d2\x2db6d3\x2d47f4\x2dbe27\x2d8df3a2cdbf54-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dks2d8.mount: Deactivated successfully.
May 14 05:12:01.271313 systemd[1]: var-lib-kubelet-pods-518875d2\x2db6d3\x2d47f4\x2dbe27\x2d8df3a2cdbf54-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully.
May 14 05:12:01.366263 kubelet[2831]: I0514 05:12:01.365886 2831 reconciler_common.go:289] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/518875d2-b6d3-47f4-be27-8df3a2cdbf54-calico-apiserver-certs\") on node \"localhost\" DevicePath \"\""
May 14 05:12:01.366263 kubelet[2831]: I0514 05:12:01.365922 2831 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-ks2d8\" (UniqueName: \"kubernetes.io/projected/518875d2-b6d3-47f4-be27-8df3a2cdbf54-kube-api-access-ks2d8\") on node \"localhost\" DevicePath \"\""
May 14 05:12:01.471440 systemd[1]: Removed slice kubepods-besteffort-pod518875d2_b6d3_47f4_be27_8df3a2cdbf54.slice - libcontainer container kubepods-besteffort-pod518875d2_b6d3_47f4_be27_8df3a2cdbf54.slice.
May 14 05:12:02.052338 kubelet[2831]: I0514 05:12:02.052303 2831 scope.go:117] "RemoveContainer" containerID="9f314810d2e280184120cf0f78dc58a8b5cebf6c580e3af44ad634c75b75ef40"
May 14 05:12:02.054763 containerd[1598]: time="2025-05-14T05:12:02.054700275Z" level=info msg="RemoveContainer for \"9f314810d2e280184120cf0f78dc58a8b5cebf6c580e3af44ad634c75b75ef40\""
May 14 05:12:02.060235 containerd[1598]: time="2025-05-14T05:12:02.060180892Z" level=info msg="RemoveContainer for \"9f314810d2e280184120cf0f78dc58a8b5cebf6c580e3af44ad634c75b75ef40\" returns successfully"
May 14 05:12:02.777265 systemd[1]: Started sshd@25-10.0.0.84:22-10.0.0.1:54156.service - OpenSSH per-connection server daemon (10.0.0.1:54156).
May 14 05:12:02.815786 sshd[6473]: Accepted publickey for core from 10.0.0.1 port 54156 ssh2: RSA SHA256:9779yoEmBEYtokxLadky4y4rhX8tiwkjz8vWSk0iWXU
May 14 05:12:02.817421 sshd-session[6473]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 05:12:02.821987 systemd-logind[1573]: New session 26 of user core.
May 14 05:12:02.835815 systemd[1]: Started session-26.scope - Session 26 of User core.
May 14 05:12:02.945229 sshd[6475]: Connection closed by 10.0.0.1 port 54156
May 14 05:12:02.945514 sshd-session[6473]: pam_unix(sshd:session): session closed for user core
May 14 05:12:02.949909 systemd[1]: sshd@25-10.0.0.84:22-10.0.0.1:54156.service: Deactivated successfully.
May 14 05:12:02.952080 systemd[1]: session-26.scope: Deactivated successfully.
May 14 05:12:02.952794 systemd-logind[1573]: Session 26 logged out. Waiting for processes to exit.
May 14 05:12:02.954177 systemd-logind[1573]: Removed session 26.
May 14 05:12:03.463853 kubelet[2831]: I0514 05:12:03.463807 2831 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="518875d2-b6d3-47f4-be27-8df3a2cdbf54" path="/var/lib/kubelet/pods/518875d2-b6d3-47f4-be27-8df3a2cdbf54/volumes"
May 14 05:12:05.116372 containerd[1598]: time="2025-05-14T05:12:05.116336844Z" level=info msg="StopContainer for \"45a3620f13bcc33160b77e93cf7e9c176ed9ea3a48b37c71b778368eea25475e\" with timeout 30 (s)"
May 14 05:12:05.117927 containerd[1598]: time="2025-05-14T05:12:05.116976482Z" level=info msg="Stop container \"45a3620f13bcc33160b77e93cf7e9c176ed9ea3a48b37c71b778368eea25475e\" with signal terminated"
May 14 05:12:05.142985 systemd[1]: cri-containerd-45a3620f13bcc33160b77e93cf7e9c176ed9ea3a48b37c71b778368eea25475e.scope: Deactivated successfully.
May 14 05:12:05.143354 systemd[1]: cri-containerd-45a3620f13bcc33160b77e93cf7e9c176ed9ea3a48b37c71b778368eea25475e.scope: Consumed 827ms CPU time, 52.9M memory peak, 1M read from disk.
May 14 05:12:05.145027 containerd[1598]: time="2025-05-14T05:12:05.144975487Z" level=info msg="TaskExit event in podsandbox handler container_id:\"45a3620f13bcc33160b77e93cf7e9c176ed9ea3a48b37c71b778368eea25475e\" id:\"45a3620f13bcc33160b77e93cf7e9c176ed9ea3a48b37c71b778368eea25475e\" pid:5227 exit_status:1 exited_at:{seconds:1747199525 nanos:144098767}"
May 14 05:12:05.145147 containerd[1598]: time="2025-05-14T05:12:05.145057172Z" level=info msg="received exit event container_id:\"45a3620f13bcc33160b77e93cf7e9c176ed9ea3a48b37c71b778368eea25475e\" id:\"45a3620f13bcc33160b77e93cf7e9c176ed9ea3a48b37c71b778368eea25475e\" pid:5227 exit_status:1 exited_at:{seconds:1747199525 nanos:144098767}"
May 14 05:12:05.168151 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-45a3620f13bcc33160b77e93cf7e9c176ed9ea3a48b37c71b778368eea25475e-rootfs.mount: Deactivated successfully.
May 14 05:12:05.217788 containerd[1598]: time="2025-05-14T05:12:05.217745471Z" level=info msg="StopContainer for \"45a3620f13bcc33160b77e93cf7e9c176ed9ea3a48b37c71b778368eea25475e\" returns successfully"
May 14 05:12:05.218346 containerd[1598]: time="2025-05-14T05:12:05.218291761Z" level=info msg="StopPodSandbox for \"ac1107de38a87cc0791d41c5ba284d1b444a49584139fe12e9bac56b736733d0\""
May 14 05:12:05.218494 containerd[1598]: time="2025-05-14T05:12:05.218369118Z" level=info msg="Container to stop \"45a3620f13bcc33160b77e93cf7e9c176ed9ea3a48b37c71b778368eea25475e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 14 05:12:05.225503 systemd[1]: cri-containerd-ac1107de38a87cc0791d41c5ba284d1b444a49584139fe12e9bac56b736733d0.scope: Deactivated successfully.
May 14 05:12:05.226985 containerd[1598]: time="2025-05-14T05:12:05.226935089Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ac1107de38a87cc0791d41c5ba284d1b444a49584139fe12e9bac56b736733d0\" id:\"ac1107de38a87cc0791d41c5ba284d1b444a49584139fe12e9bac56b736733d0\" pid:4658 exit_status:137 exited_at:{seconds:1747199525 nanos:226557049}"
May 14 05:12:05.254121 containerd[1598]: time="2025-05-14T05:12:05.254077572Z" level=info msg="shim disconnected" id=ac1107de38a87cc0791d41c5ba284d1b444a49584139fe12e9bac56b736733d0 namespace=k8s.io
May 14 05:12:05.254121 containerd[1598]: time="2025-05-14T05:12:05.254113651Z" level=warning msg="cleaning up after shim disconnected" id=ac1107de38a87cc0791d41c5ba284d1b444a49584139fe12e9bac56b736733d0 namespace=k8s.io
May 14 05:12:05.254261 containerd[1598]: time="2025-05-14T05:12:05.254121196Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 14 05:12:05.255640 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac1107de38a87cc0791d41c5ba284d1b444a49584139fe12e9bac56b736733d0-rootfs.mount: Deactivated successfully.
May 14 05:12:05.273942 containerd[1598]: time="2025-05-14T05:12:05.272501554Z" level=info msg="received exit event sandbox_id:\"ac1107de38a87cc0791d41c5ba284d1b444a49584139fe12e9bac56b736733d0\" exit_status:137 exited_at:{seconds:1747199525 nanos:226557049}"
May 14 05:12:05.274594 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ac1107de38a87cc0791d41c5ba284d1b444a49584139fe12e9bac56b736733d0-shm.mount: Deactivated successfully.
May 14 05:12:05.317228 systemd-networkd[1497]: califbe33fdce9e: Link DOWN
May 14 05:12:05.317241 systemd-networkd[1497]: califbe33fdce9e: Lost carrier
May 14 05:12:05.379537 containerd[1598]: 2025-05-14 05:12:05.316 [INFO][6612] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ac1107de38a87cc0791d41c5ba284d1b444a49584139fe12e9bac56b736733d0"
May 14 05:12:05.379537 containerd[1598]: 2025-05-14 05:12:05.316 [INFO][6612] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ac1107de38a87cc0791d41c5ba284d1b444a49584139fe12e9bac56b736733d0" iface="eth0" netns="/var/run/netns/cni-5cc3b55f-a64b-d46b-7c06-5ed69e422cb8"
May 14 05:12:05.379537 containerd[1598]: 2025-05-14 05:12:05.316 [INFO][6612] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ac1107de38a87cc0791d41c5ba284d1b444a49584139fe12e9bac56b736733d0" iface="eth0" netns="/var/run/netns/cni-5cc3b55f-a64b-d46b-7c06-5ed69e422cb8"
May 14 05:12:05.379537 containerd[1598]: 2025-05-14 05:12:05.323 [INFO][6612] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="ac1107de38a87cc0791d41c5ba284d1b444a49584139fe12e9bac56b736733d0" after=7.502566ms iface="eth0" netns="/var/run/netns/cni-5cc3b55f-a64b-d46b-7c06-5ed69e422cb8"
May 14 05:12:05.379537 containerd[1598]: 2025-05-14 05:12:05.324 [INFO][6612] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ac1107de38a87cc0791d41c5ba284d1b444a49584139fe12e9bac56b736733d0"
May 14 05:12:05.379537 containerd[1598]: 2025-05-14 05:12:05.324 [INFO][6612] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ac1107de38a87cc0791d41c5ba284d1b444a49584139fe12e9bac56b736733d0"
May 14 05:12:05.379537 containerd[1598]: 2025-05-14 05:12:05.342 [INFO][6622] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ac1107de38a87cc0791d41c5ba284d1b444a49584139fe12e9bac56b736733d0" HandleID="k8s-pod-network.ac1107de38a87cc0791d41c5ba284d1b444a49584139fe12e9bac56b736733d0" Workload="localhost-k8s-calico--apiserver--845d69865c--92j47-eth0"
May 14 05:12:05.379537 containerd[1598]: 2025-05-14 05:12:05.343 [INFO][6622] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 14 05:12:05.379537 containerd[1598]: 2025-05-14 05:12:05.343 [INFO][6622] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 14 05:12:05.379537 containerd[1598]: 2025-05-14 05:12:05.373 [INFO][6622] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="ac1107de38a87cc0791d41c5ba284d1b444a49584139fe12e9bac56b736733d0" HandleID="k8s-pod-network.ac1107de38a87cc0791d41c5ba284d1b444a49584139fe12e9bac56b736733d0" Workload="localhost-k8s-calico--apiserver--845d69865c--92j47-eth0"
May 14 05:12:05.379537 containerd[1598]: 2025-05-14 05:12:05.373 [INFO][6622] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ac1107de38a87cc0791d41c5ba284d1b444a49584139fe12e9bac56b736733d0" HandleID="k8s-pod-network.ac1107de38a87cc0791d41c5ba284d1b444a49584139fe12e9bac56b736733d0" Workload="localhost-k8s-calico--apiserver--845d69865c--92j47-eth0"
May 14 05:12:05.379537 containerd[1598]: 2025-05-14 05:12:05.374 [INFO][6622] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 14 05:12:05.379537 containerd[1598]: 2025-05-14 05:12:05.377 [INFO][6612] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ac1107de38a87cc0791d41c5ba284d1b444a49584139fe12e9bac56b736733d0"
May 14 05:12:05.380600 containerd[1598]: time="2025-05-14T05:12:05.380556827Z" level=info msg="TearDown network for sandbox \"ac1107de38a87cc0791d41c5ba284d1b444a49584139fe12e9bac56b736733d0\" successfully"
May 14 05:12:05.380600 containerd[1598]: time="2025-05-14T05:12:05.380593798Z" level=info msg="StopPodSandbox for \"ac1107de38a87cc0791d41c5ba284d1b444a49584139fe12e9bac56b736733d0\" returns successfully"
May 14 05:12:05.382944 systemd[1]: run-netns-cni\x2d5cc3b55f\x2da64b\x2dd46b\x2d7c06\x2d5ed69e422cb8.mount: Deactivated successfully.
May 14 05:12:05.392678 kubelet[2831]: I0514 05:12:05.392648 2831 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/31e9255f-87a3-4bf3-a184-3d845602f03d-calico-apiserver-certs\") pod \"31e9255f-87a3-4bf3-a184-3d845602f03d\" (UID: \"31e9255f-87a3-4bf3-a184-3d845602f03d\") "
May 14 05:12:05.395504 kubelet[2831]: I0514 05:12:05.395468 2831 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31e9255f-87a3-4bf3-a184-3d845602f03d-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "31e9255f-87a3-4bf3-a184-3d845602f03d" (UID: "31e9255f-87a3-4bf3-a184-3d845602f03d"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 14 05:12:05.397286 systemd[1]: var-lib-kubelet-pods-31e9255f\x2d87a3\x2d4bf3\x2da184\x2d3d845602f03d-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully.
May 14 05:12:05.493726 kubelet[2831]: I0514 05:12:05.493675 2831 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxnv8\" (UniqueName: \"kubernetes.io/projected/31e9255f-87a3-4bf3-a184-3d845602f03d-kube-api-access-vxnv8\") pod \"31e9255f-87a3-4bf3-a184-3d845602f03d\" (UID: \"31e9255f-87a3-4bf3-a184-3d845602f03d\") "
May 14 05:12:05.493846 kubelet[2831]: I0514 05:12:05.493770 2831 reconciler_common.go:289] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/31e9255f-87a3-4bf3-a184-3d845602f03d-calico-apiserver-certs\") on node \"localhost\" DevicePath \"\""
May 14 05:12:05.497093 kubelet[2831]: I0514 05:12:05.497049 2831 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31e9255f-87a3-4bf3-a184-3d845602f03d-kube-api-access-vxnv8" (OuterVolumeSpecName: "kube-api-access-vxnv8") pod "31e9255f-87a3-4bf3-a184-3d845602f03d" (UID: "31e9255f-87a3-4bf3-a184-3d845602f03d"). InnerVolumeSpecName "kube-api-access-vxnv8". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 14 05:12:05.594322 kubelet[2831]: I0514 05:12:05.594262 2831 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-vxnv8\" (UniqueName: \"kubernetes.io/projected/31e9255f-87a3-4bf3-a184-3d845602f03d-kube-api-access-vxnv8\") on node \"localhost\" DevicePath \"\""
May 14 05:12:06.066621 kubelet[2831]: I0514 05:12:06.066574 2831 scope.go:117] "RemoveContainer" containerID="45a3620f13bcc33160b77e93cf7e9c176ed9ea3a48b37c71b778368eea25475e"
May 14 05:12:06.078736 containerd[1598]: time="2025-05-14T05:12:06.078134205Z" level=info msg="RemoveContainer for \"45a3620f13bcc33160b77e93cf7e9c176ed9ea3a48b37c71b778368eea25475e\""
May 14 05:12:06.083162 systemd[1]: Removed slice kubepods-besteffort-pod31e9255f_87a3_4bf3_a184_3d845602f03d.slice - libcontainer container kubepods-besteffort-pod31e9255f_87a3_4bf3_a184_3d845602f03d.slice.
May 14 05:12:06.083262 systemd[1]: kubepods-besteffort-pod31e9255f_87a3_4bf3_a184_3d845602f03d.slice: Consumed 855ms CPU time, 53.2M memory peak, 1M read from disk.
May 14 05:12:06.085521 containerd[1598]: time="2025-05-14T05:12:06.085409022Z" level=info msg="RemoveContainer for \"45a3620f13bcc33160b77e93cf7e9c176ed9ea3a48b37c71b778368eea25475e\" returns successfully"
May 14 05:12:06.086223 kubelet[2831]: I0514 05:12:06.086184 2831 scope.go:117] "RemoveContainer" containerID="45a3620f13bcc33160b77e93cf7e9c176ed9ea3a48b37c71b778368eea25475e"
May 14 05:12:06.087499 containerd[1598]: time="2025-05-14T05:12:06.087305733Z" level=error msg="ContainerStatus for \"45a3620f13bcc33160b77e93cf7e9c176ed9ea3a48b37c71b778368eea25475e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"45a3620f13bcc33160b77e93cf7e9c176ed9ea3a48b37c71b778368eea25475e\": not found"
May 14 05:12:06.087861 kubelet[2831]: E0514 05:12:06.087827 2831 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"45a3620f13bcc33160b77e93cf7e9c176ed9ea3a48b37c71b778368eea25475e\": not found" containerID="45a3620f13bcc33160b77e93cf7e9c176ed9ea3a48b37c71b778368eea25475e"
May 14 05:12:06.088096 kubelet[2831]: I0514 05:12:06.088050 2831 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"45a3620f13bcc33160b77e93cf7e9c176ed9ea3a48b37c71b778368eea25475e"} err="failed to get container status \"45a3620f13bcc33160b77e93cf7e9c176ed9ea3a48b37c71b778368eea25475e\": rpc error: code = NotFound desc = an error occurred when try to find container \"45a3620f13bcc33160b77e93cf7e9c176ed9ea3a48b37c71b778368eea25475e\": not found"
May 14 05:12:06.168116 systemd[1]: var-lib-kubelet-pods-31e9255f\x2d87a3\x2d4bf3\x2da184\x2d3d845602f03d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvxnv8.mount: Deactivated successfully.
May 14 05:12:07.464563 kubelet[2831]: I0514 05:12:07.464523 2831 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31e9255f-87a3-4bf3-a184-3d845602f03d" path="/var/lib/kubelet/pods/31e9255f-87a3-4bf3-a184-3d845602f03d/volumes"
May 14 05:12:07.965066 systemd[1]: Started sshd@26-10.0.0.84:22-10.0.0.1:54158.service - OpenSSH per-connection server daemon (10.0.0.1:54158).
May 14 05:12:08.015347 sshd[6680]: Accepted publickey for core from 10.0.0.1 port 54158 ssh2: RSA SHA256:9779yoEmBEYtokxLadky4y4rhX8tiwkjz8vWSk0iWXU
May 14 05:12:08.017078 sshd-session[6680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 05:12:08.021423 systemd-logind[1573]: New session 27 of user core.
May 14 05:12:08.027841 systemd[1]: Started session-27.scope - Session 27 of User core.
May 14 05:12:08.134118 sshd[6682]: Connection closed by 10.0.0.1 port 54158
May 14 05:12:08.134442 sshd-session[6680]: pam_unix(sshd:session): session closed for user core
May 14 05:12:08.138404 systemd[1]: sshd@26-10.0.0.84:22-10.0.0.1:54158.service: Deactivated successfully.
May 14 05:12:08.140682 systemd[1]: session-27.scope: Deactivated successfully.
May 14 05:12:08.141553 systemd-logind[1573]: Session 27 logged out. Waiting for processes to exit.
May 14 05:12:08.143787 systemd-logind[1573]: Removed session 27.