May 14 18:13:08.837245 kernel: Linux version 6.12.20-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed May 14 16:37:27 -00 2025
May 14 18:13:08.837276 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=adf4ab3cd3fc72d424aa1ba920dfa0e67212fa35eadab2c698966b09b9e294b0
May 14 18:13:08.837288 kernel: BIOS-provided physical RAM map:
May 14 18:13:08.837295 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
May 14 18:13:08.837301 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
May 14 18:13:08.837308 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
May 14 18:13:08.837315 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
May 14 18:13:08.837322 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
May 14 18:13:08.837330 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
May 14 18:13:08.837337 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
May 14 18:13:08.837343 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
May 14 18:13:08.837350 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
May 14 18:13:08.837356 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
May 14 18:13:08.837363 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
May 14 18:13:08.837373 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
May 14 18:13:08.837380 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
May 14 18:13:08.837387 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
May 14 18:13:08.837394 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
May 14 18:13:08.837401 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
May 14 18:13:08.837408 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
May 14 18:13:08.837415 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
May 14 18:13:08.837422 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
May 14 18:13:08.837428 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
May 14 18:13:08.837435 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 14 18:13:08.837442 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
May 14 18:13:08.837451 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 14 18:13:08.837458 kernel: NX (Execute Disable) protection: active
May 14 18:13:08.837465 kernel: APIC: Static calls initialized
May 14 18:13:08.837472 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
May 14 18:13:08.837479 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
May 14 18:13:08.837486 kernel: extended physical RAM map:
May 14 18:13:08.837493 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
May 14 18:13:08.837500 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
May 14 18:13:08.837507 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
May 14 18:13:08.837514 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
May 14 18:13:08.837521 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
May 14 18:13:08.837530 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
May 14 18:13:08.837537 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
May 14 18:13:08.837544 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
May 14 18:13:08.837565 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
May 14 18:13:08.837576 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
May 14 18:13:08.837583 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
May 14 18:13:08.837592 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
May 14 18:13:08.837599 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
May 14 18:13:08.837606 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
May 14 18:13:08.837614 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
May 14 18:13:08.837621 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
May 14 18:13:08.837628 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
May 14 18:13:08.837635 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
May 14 18:13:08.837643 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
May 14 18:13:08.837655 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
May 14 18:13:08.837665 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
May 14 18:13:08.837672 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
May 14 18:13:08.837680 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
May 14 18:13:08.837687 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
May 14 18:13:08.837695 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 14 18:13:08.837702 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
May 14 18:13:08.837710 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 14 18:13:08.837717 kernel: efi: EFI v2.7 by EDK II
May 14 18:13:08.837724 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
May 14 18:13:08.837731 kernel: random: crng init done
May 14 18:13:08.837739 kernel: efi: Remove mem149: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
May 14 18:13:08.837746 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
May 14 18:13:08.837755 kernel: secureboot: Secure boot disabled
May 14 18:13:08.837762 kernel: SMBIOS 2.8 present.
May 14 18:13:08.837769 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
May 14 18:13:08.837776 kernel: DMI: Memory slots populated: 1/1
May 14 18:13:08.837783 kernel: Hypervisor detected: KVM
May 14 18:13:08.837790 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 14 18:13:08.837798 kernel: kvm-clock: using sched offset of 3630545179 cycles
May 14 18:13:08.837805 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 14 18:13:08.837813 kernel: tsc: Detected 2794.746 MHz processor
May 14 18:13:08.837820 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 14 18:13:08.837828 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 14 18:13:08.837837 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
May 14 18:13:08.837845 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
May 14 18:13:08.837852 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 14 18:13:08.837859 kernel: Using GB pages for direct mapping
May 14 18:13:08.837867 kernel: ACPI: Early table checksum verification disabled
May 14 18:13:08.837874 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
May 14 18:13:08.837881 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
May 14 18:13:08.837889 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 14 18:13:08.837896 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 18:13:08.837906 kernel: ACPI: FACS 0x000000009CBDD000 000040
May 14 18:13:08.837913 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 18:13:08.837920 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 18:13:08.837928 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 18:13:08.837935 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 18:13:08.837943 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
May 14 18:13:08.837950 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
May 14 18:13:08.837957 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
May 14 18:13:08.837967 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
May 14 18:13:08.837974 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
May 14 18:13:08.837982 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
May 14 18:13:08.837989 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
May 14 18:13:08.837996 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
May 14 18:13:08.838004 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
May 14 18:13:08.838011 kernel: No NUMA configuration found
May 14 18:13:08.838018 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
May 14 18:13:08.838025 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
May 14 18:13:08.838032 kernel: Zone ranges:
May 14 18:13:08.838042 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 14 18:13:08.838049 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
May 14 18:13:08.838057 kernel: Normal empty
May 14 18:13:08.838064 kernel: Device empty
May 14 18:13:08.838071 kernel: Movable zone start for each node
May 14 18:13:08.838078 kernel: Early memory node ranges
May 14 18:13:08.838086 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
May 14 18:13:08.838093 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
May 14 18:13:08.838100 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
May 14 18:13:08.838109 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
May 14 18:13:08.838117 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
May 14 18:13:08.838124 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
May 14 18:13:08.838131 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
May 14 18:13:08.838138 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
May 14 18:13:08.838146 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
May 14 18:13:08.838153 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 14 18:13:08.838161 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
May 14 18:13:08.838176 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
May 14 18:13:08.838184 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 14 18:13:08.838191 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
May 14 18:13:08.838199 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
May 14 18:13:08.838209 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
May 14 18:13:08.838216 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
May 14 18:13:08.838224 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
May 14 18:13:08.838231 kernel: ACPI: PM-Timer IO Port: 0x608
May 14 18:13:08.838239 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 14 18:13:08.838248 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 14 18:13:08.838256 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 14 18:13:08.838264 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 14 18:13:08.838281 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 14 18:13:08.838289 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 14 18:13:08.838296 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 14 18:13:08.838304 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 14 18:13:08.838311 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 14 18:13:08.838319 kernel: TSC deadline timer available
May 14 18:13:08.838328 kernel: CPU topo: Max. logical packages: 1
May 14 18:13:08.838336 kernel: CPU topo: Max. logical dies: 1
May 14 18:13:08.838343 kernel: CPU topo: Max. dies per package: 1
May 14 18:13:08.838351 kernel: CPU topo: Max. threads per core: 1
May 14 18:13:08.838358 kernel: CPU topo: Num. cores per package: 4
May 14 18:13:08.838366 kernel: CPU topo: Num. threads per package: 4
May 14 18:13:08.838373 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
May 14 18:13:08.838381 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 14 18:13:08.838389 kernel: kvm-guest: KVM setup pv remote TLB flush
May 14 18:13:08.838396 kernel: kvm-guest: setup PV sched yield
May 14 18:13:08.838406 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
May 14 18:13:08.838413 kernel: Booting paravirtualized kernel on KVM
May 14 18:13:08.838421 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 14 18:13:08.838429 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
May 14 18:13:08.838437 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
May 14 18:13:08.838444 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
May 14 18:13:08.838452 kernel: pcpu-alloc: [0] 0 1 2 3
May 14 18:13:08.838459 kernel: kvm-guest: PV spinlocks enabled
May 14 18:13:08.838467 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 14 18:13:08.838478 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=adf4ab3cd3fc72d424aa1ba920dfa0e67212fa35eadab2c698966b09b9e294b0
May 14 18:13:08.838486 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 14 18:13:08.838493 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 14 18:13:08.838501 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 14 18:13:08.838509 kernel: Fallback order for Node 0: 0
May 14 18:13:08.838516 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
May 14 18:13:08.838524 kernel: Policy zone: DMA32
May 14 18:13:08.838532 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 14 18:13:08.838541 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 14 18:13:08.838549 kernel: ftrace: allocating 40065 entries in 157 pages
May 14 18:13:08.838649 kernel: ftrace: allocated 157 pages with 5 groups
May 14 18:13:08.838656 kernel: Dynamic Preempt: voluntary
May 14 18:13:08.838664 kernel: rcu: Preemptible hierarchical RCU implementation.
May 14 18:13:08.838672 kernel: rcu: RCU event tracing is enabled.
May 14 18:13:08.838680 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 14 18:13:08.838688 kernel: Trampoline variant of Tasks RCU enabled.
May 14 18:13:08.838696 kernel: Rude variant of Tasks RCU enabled.
May 14 18:13:08.838706 kernel: Tracing variant of Tasks RCU enabled.
May 14 18:13:08.838714 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 14 18:13:08.838721 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 14 18:13:08.838729 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 14 18:13:08.838737 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 14 18:13:08.838745 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 14 18:13:08.838752 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 14 18:13:08.838760 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 14 18:13:08.838768 kernel: Console: colour dummy device 80x25
May 14 18:13:08.838777 kernel: printk: legacy console [ttyS0] enabled
May 14 18:13:08.838785 kernel: ACPI: Core revision 20240827
May 14 18:13:08.838793 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 14 18:13:08.838800 kernel: APIC: Switch to symmetric I/O mode setup
May 14 18:13:08.838808 kernel: x2apic enabled
May 14 18:13:08.838815 kernel: APIC: Switched APIC routing to: physical x2apic
May 14 18:13:08.838823 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 14 18:13:08.838831 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 14 18:13:08.838839 kernel: kvm-guest: setup PV IPIs
May 14 18:13:08.838848 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 14 18:13:08.838856 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848ddd4e75, max_idle_ns: 440795346320 ns
May 14 18:13:08.838864 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794746)
May 14 18:13:08.838872 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 14 18:13:08.838879 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 14 18:13:08.838887 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 14 18:13:08.838895 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 14 18:13:08.838902 kernel: Spectre V2 : Mitigation: Retpolines
May 14 18:13:08.838910 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
May 14 18:13:08.838920 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
May 14 18:13:08.838927 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 14 18:13:08.838935 kernel: RETBleed: Mitigation: untrained return thunk
May 14 18:13:08.838943 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 14 18:13:08.838951 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 14 18:13:08.838958 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 14 18:13:08.838967 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 14 18:13:08.838974 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 14 18:13:08.838984 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 14 18:13:08.838992 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 14 18:13:08.838999 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 14 18:13:08.839007 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 14 18:13:08.839014 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 14 18:13:08.839022 kernel: Freeing SMP alternatives memory: 32K
May 14 18:13:08.839030 kernel: pid_max: default: 32768 minimum: 301
May 14 18:13:08.839037 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 14 18:13:08.839045 kernel: landlock: Up and running.
May 14 18:13:08.839054 kernel: SELinux: Initializing.
May 14 18:13:08.839062 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 14 18:13:08.839070 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 14 18:13:08.839077 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 14 18:13:08.839085 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 14 18:13:08.839093 kernel: ... version: 0
May 14 18:13:08.839100 kernel: ... bit width: 48
May 14 18:13:08.839108 kernel: ... generic registers: 6
May 14 18:13:08.839115 kernel: ... value mask: 0000ffffffffffff
May 14 18:13:08.839125 kernel: ... max period: 00007fffffffffff
May 14 18:13:08.839132 kernel: ... fixed-purpose events: 0
May 14 18:13:08.839140 kernel: ... event mask: 000000000000003f
May 14 18:13:08.839148 kernel: signal: max sigframe size: 1776
May 14 18:13:08.839155 kernel: rcu: Hierarchical SRCU implementation.
May 14 18:13:08.839163 kernel: rcu: Max phase no-delay instances is 400.
May 14 18:13:08.839171 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 14 18:13:08.839178 kernel: smp: Bringing up secondary CPUs ...
May 14 18:13:08.839186 kernel: smpboot: x86: Booting SMP configuration:
May 14 18:13:08.839194 kernel: .... node #0, CPUs: #1 #2 #3
May 14 18:13:08.839204 kernel: smp: Brought up 1 node, 4 CPUs
May 14 18:13:08.839211 kernel: smpboot: Total of 4 processors activated (22357.96 BogoMIPS)
May 14 18:13:08.839219 kernel: Memory: 2422668K/2565800K available (14336K kernel code, 2438K rwdata, 9944K rodata, 54424K init, 2536K bss, 137196K reserved, 0K cma-reserved)
May 14 18:13:08.839227 kernel: devtmpfs: initialized
May 14 18:13:08.839234 kernel: x86/mm: Memory block size: 128MB
May 14 18:13:08.839242 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
May 14 18:13:08.839250 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
May 14 18:13:08.839258 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
May 14 18:13:08.839276 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
May 14 18:13:08.839284 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
May 14 18:13:08.839292 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
May 14 18:13:08.839300 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 14 18:13:08.839307 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 14 18:13:08.839315 kernel: pinctrl core: initialized pinctrl subsystem
May 14 18:13:08.839322 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 14 18:13:08.839330 kernel: audit: initializing netlink subsys (disabled)
May 14 18:13:08.839338 kernel: audit: type=2000 audit(1747246387.009:1): state=initialized audit_enabled=0 res=1
May 14 18:13:08.839348 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 14 18:13:08.839355 kernel: thermal_sys: Registered thermal governor 'user_space'
May 14 18:13:08.839363 kernel: cpuidle: using governor menu
May 14 18:13:08.839370 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 14 18:13:08.839378 kernel: dca service started, version 1.12.1
May 14 18:13:08.839386 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
May 14 18:13:08.839393 kernel: PCI: Using configuration type 1 for base access
May 14 18:13:08.839401 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 14 18:13:08.839409 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 14 18:13:08.839419 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 14 18:13:08.839426 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 14 18:13:08.839434 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 14 18:13:08.839442 kernel: ACPI: Added _OSI(Module Device)
May 14 18:13:08.839449 kernel: ACPI: Added _OSI(Processor Device)
May 14 18:13:08.839457 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 14 18:13:08.839464 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 14 18:13:08.839472 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 14 18:13:08.839480 kernel: ACPI: Interpreter enabled
May 14 18:13:08.839489 kernel: ACPI: PM: (supports S0 S3 S5)
May 14 18:13:08.839497 kernel: ACPI: Using IOAPIC for interrupt routing
May 14 18:13:08.839505 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 14 18:13:08.839513 kernel: PCI: Using E820 reservations for host bridge windows
May 14 18:13:08.839520 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 14 18:13:08.839528 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 14 18:13:08.839713 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 14 18:13:08.839832 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 14 18:13:08.839949 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 14 18:13:08.839959 kernel: PCI host bridge to bus 0000:00
May 14 18:13:08.840077 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 14 18:13:08.840183 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 14 18:13:08.840306 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 14 18:13:08.840409 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
May 14 18:13:08.840511 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
May 14 18:13:08.840644 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
May 14 18:13:08.840749 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 14 18:13:08.840880 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
May 14 18:13:08.841004 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
May 14 18:13:08.841117 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
May 14 18:13:08.841229 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
May 14 18:13:08.841360 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
May 14 18:13:08.841473 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 14 18:13:08.841615 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
May 14 18:13:08.841764 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
May 14 18:13:08.841908 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
May 14 18:13:08.842023 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
May 14 18:13:08.842153 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
May 14 18:13:08.842283 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
May 14 18:13:08.842398 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
May 14 18:13:08.842512 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
May 14 18:13:08.842649 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
May 14 18:13:08.842764 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
May 14 18:13:08.842877 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
May 14 18:13:08.842990 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
May 14 18:13:08.843108 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
May 14 18:13:08.843237 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
May 14 18:13:08.843380 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 14 18:13:08.843515 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
May 14 18:13:08.843667 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
May 14 18:13:08.843782 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
May 14 18:13:08.843905 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
May 14 18:13:08.844018 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
May 14 18:13:08.844029 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 14 18:13:08.844037 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 14 18:13:08.844045 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 14 18:13:08.844052 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 14 18:13:08.844060 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 14 18:13:08.844067 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 14 18:13:08.844078 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 14 18:13:08.844086 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 14 18:13:08.844094 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 14 18:13:08.844101 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 14 18:13:08.844109 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 14 18:13:08.844117 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 14 18:13:08.844124 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 14 18:13:08.844132 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 14 18:13:08.844140 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 14 18:13:08.844150 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 14 18:13:08.844158 kernel: iommu: Default domain type: Translated
May 14 18:13:08.844165 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 14 18:13:08.844173 kernel: efivars: Registered efivars operations
May 14 18:13:08.844181 kernel: PCI: Using ACPI for IRQ routing
May 14 18:13:08.844188 kernel: PCI: pci_cache_line_size set to 64 bytes
May 14 18:13:08.844196 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
May 14 18:13:08.844204 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
May 14 18:13:08.844211 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
May 14 18:13:08.844219 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
May 14 18:13:08.844229 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
May 14 18:13:08.844237 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
May 14 18:13:08.844244 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
May 14 18:13:08.844252 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
May 14 18:13:08.844375 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 14 18:13:08.844497 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 14 18:13:08.844640 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 14 18:13:08.844656 kernel: vgaarb: loaded
May 14 18:13:08.844664 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 14 18:13:08.844672 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 14 18:13:08.844680 kernel: clocksource: Switched to clocksource kvm-clock
May 14 18:13:08.844688 kernel: VFS: Disk quotas dquot_6.6.0
May 14 18:13:08.844696 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 14 18:13:08.844704 kernel: pnp: PnP ACPI init
May 14 18:13:08.844844 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
May 14 18:13:08.844859 kernel: pnp: PnP ACPI: found 6 devices
May 14 18:13:08.844869 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 14 18:13:08.844877 kernel: NET: Registered PF_INET protocol family
May 14 18:13:08.844885 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 14 18:13:08.844894 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 14 18:13:08.844902 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 14 18:13:08.844910 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 14 18:13:08.844919 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 14 18:13:08.844927 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 14 18:13:08.844937 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 14 18:13:08.844946 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 14 18:13:08.844954 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 14 18:13:08.844963 kernel: NET: Registered PF_XDP protocol family
May 14 18:13:08.845090 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
May 14 18:13:08.845264 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
May 14 18:13:08.845393 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 14 18:13:08.845499 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 14 18:13:08.845656 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 14 18:13:08.845787 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
May 14 18:13:08.845894 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
May 14 18:13:08.845998 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
May 14 18:13:08.846008 kernel: PCI: CLS 0 bytes, default 64
May 14 18:13:08.846017 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848ddd4e75, max_idle_ns: 440795346320 ns
May 14 18:13:08.846025 kernel: Initialise system trusted keyrings
May 14 18:13:08.846037 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 14 18:13:08.846046 kernel: Key type asymmetric registered
May 14 18:13:08.846054 kernel: Asymmetric key parser 'x509' registered
May 14 18:13:08.846062 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 14 18:13:08.846070 kernel: io scheduler mq-deadline registered
May 14 18:13:08.846078 kernel: io scheduler kyber registered
May 14 18:13:08.846086 kernel: io scheduler bfq registered
May 14 18:13:08.846094 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 14 18:13:08.846105 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 14 18:13:08.846113 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 14 18:13:08.846122 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 14 18:13:08.846130 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 14 18:13:08.846138 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 14 18:13:08.846146 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 14 18:13:08.846154 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 14 18:13:08.846162 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 14 18:13:08.846306 kernel: rtc_cmos 00:04: RTC can wake from S4
May 14 18:13:08.846322 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 14 18:13:08.846445 kernel: rtc_cmos 00:04: registered as rtc0
May 14 18:13:08.846568 kernel: rtc_cmos 00:04: setting system clock to 2025-05-14T18:13:08 UTC (1747246388)
May 14 18:13:08.846678 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
May 14 18:13:08.846689 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 14 18:13:08.846697 kernel: efifb: probing for efifb
May 14 18:13:08.846705 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
May 14 18:13:08.846717 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
May 14 18:13:08.846725 kernel: efifb: scrolling: redraw
May 14 18:13:08.846733 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
May 14 18:13:08.846741 kernel: Console: switching to colour frame buffer device 160x50
May 14 18:13:08.846749 kernel: fb0: EFI VGA frame buffer device
May 14 18:13:08.846757 kernel: pstore: Using crash dump compression: deflate
May 14 18:13:08.846765 kernel: pstore: Registered efi_pstore as persistent store backend
May 14 18:13:08.846773 kernel: NET: Registered PF_INET6 protocol family
May 14 18:13:08.846781 kernel: Segment Routing with IPv6
May 14 18:13:08.846789 kernel: In-situ OAM (IOAM) with IPv6
May 14 18:13:08.846799 kernel: NET: Registered PF_PACKET protocol family
May 14 18:13:08.846807 kernel: Key type dns_resolver registered
May 14 18:13:08.846815 kernel: IPI shorthand broadcast: enabled
May 14 18:13:08.846823 kernel: sched_clock: Marking stable (2852003263, 160228707)->(3029269387, -17037417)
May 14 18:13:08.846831 kernel: registered taskstats version 1
May 14 18:13:08.846839 kernel: Loading compiled-in X.509 certificates
May 14 18:13:08.846847 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.20-flatcar: 41e2a150aa08ec2528be2394819b3db677e5f4ef'
May 14 18:13:08.846855 kernel: Demotion targets for Node 0: null
May 14 18:13:08.846863 kernel: Key
type .fscrypt registered May 14 18:13:08.846873 kernel: Key type fscrypt-provisioning registered May 14 18:13:08.846882 kernel: ima: No TPM chip found, activating TPM-bypass! May 14 18:13:08.846890 kernel: ima: Allocated hash algorithm: sha1 May 14 18:13:08.846898 kernel: ima: No architecture policies found May 14 18:13:08.846906 kernel: clk: Disabling unused clocks May 14 18:13:08.846913 kernel: Warning: unable to open an initial console. May 14 18:13:08.846922 kernel: Freeing unused kernel image (initmem) memory: 54424K May 14 18:13:08.846930 kernel: Write protecting the kernel read-only data: 24576k May 14 18:13:08.846941 kernel: Freeing unused kernel image (rodata/data gap) memory: 296K May 14 18:13:08.846949 kernel: Run /init as init process May 14 18:13:08.846957 kernel: with arguments: May 14 18:13:08.846964 kernel: /init May 14 18:13:08.846972 kernel: with environment: May 14 18:13:08.846980 kernel: HOME=/ May 14 18:13:08.847060 kernel: TERM=linux May 14 18:13:08.847068 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 14 18:13:08.847078 systemd[1]: Successfully made /usr/ read-only. May 14 18:13:08.847092 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 14 18:13:08.847102 systemd[1]: Detected virtualization kvm. May 14 18:13:08.847111 systemd[1]: Detected architecture x86-64. May 14 18:13:08.847119 systemd[1]: Running in initrd. May 14 18:13:08.847127 systemd[1]: No hostname configured, using default hostname. May 14 18:13:08.847136 systemd[1]: Hostname set to . May 14 18:13:08.847145 systemd[1]: Initializing machine ID from VM UUID. May 14 18:13:08.847156 systemd[1]: Queued start job for default target initrd.target. 
May 14 18:13:08.847164 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 18:13:08.847173 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 14 18:13:08.847183 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 14 18:13:08.847192 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 14 18:13:08.847201 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 14 18:13:08.847210 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 14 18:13:08.847222 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 14 18:13:08.847231 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 14 18:13:08.847239 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 18:13:08.847248 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 14 18:13:08.847256 systemd[1]: Reached target paths.target - Path Units.
May 14 18:13:08.847274 systemd[1]: Reached target slices.target - Slice Units.
May 14 18:13:08.847284 systemd[1]: Reached target swap.target - Swaps.
May 14 18:13:08.847292 systemd[1]: Reached target timers.target - Timer Units.
May 14 18:13:08.847301 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 14 18:13:08.847312 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 14 18:13:08.847323 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 14 18:13:08.847332 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 14 18:13:08.847341 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 14 18:13:08.847351 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 14 18:13:08.847361 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 18:13:08.847371 systemd[1]: Reached target sockets.target - Socket Units.
May 14 18:13:08.847380 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 14 18:13:08.847392 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 14 18:13:08.847401 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 14 18:13:08.847411 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 14 18:13:08.847420 systemd[1]: Starting systemd-fsck-usr.service...
May 14 18:13:08.847430 systemd[1]: Starting systemd-journald.service - Journal Service...
May 14 18:13:08.847439 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 14 18:13:08.847448 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 18:13:08.847457 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 14 18:13:08.847468 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 18:13:08.847477 systemd[1]: Finished systemd-fsck-usr.service.
May 14 18:13:08.847508 systemd-journald[218]: Collecting audit messages is disabled.
May 14 18:13:08.847532 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 14 18:13:08.847542 systemd-journald[218]: Journal started
May 14 18:13:08.847590 systemd-journald[218]: Runtime Journal (/run/log/journal/84c3c5932842412dbcba688e1d54d2f2) is 6M, max 48.5M, 42.4M free.
May 14 18:13:08.843657 systemd-modules-load[221]: Inserted module 'overlay'
May 14 18:13:08.852976 systemd[1]: Started systemd-journald.service - Journal Service.
May 14 18:13:08.852836 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 14 18:13:08.856972 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 18:13:08.862724 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 14 18:13:08.865895 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 14 18:13:08.866478 systemd-tmpfiles[234]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
May 14 18:13:08.875583 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 14 18:13:08.877360 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 14 18:13:08.880437 kernel: Bridge firewalling registered
May 14 18:13:08.877952 systemd-modules-load[221]: Inserted module 'br_netfilter'
May 14 18:13:08.881707 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 14 18:13:08.882371 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 18:13:08.886760 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 14 18:13:08.898709 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 18:13:08.899519 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 18:13:08.903284 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 14 18:13:08.916700 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 14 18:13:08.918616 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 14 18:13:08.933439 dracut-cmdline[260]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=adf4ab3cd3fc72d424aa1ba920dfa0e67212fa35eadab2c698966b09b9e294b0
May 14 18:13:08.981599 systemd-resolved[263]: Positive Trust Anchors:
May 14 18:13:08.981617 systemd-resolved[263]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 14 18:13:08.981647 systemd-resolved[263]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 14 18:13:08.984666 systemd-resolved[263]: Defaulting to hostname 'linux'.
May 14 18:13:08.985876 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 14 18:13:08.991227 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 14 18:13:09.047594 kernel: SCSI subsystem initialized
May 14 18:13:09.057586 kernel: Loading iSCSI transport class v2.0-870.
May 14 18:13:09.067583 kernel: iscsi: registered transport (tcp)
May 14 18:13:09.089863 kernel: iscsi: registered transport (qla4xxx)
May 14 18:13:09.089921 kernel: QLogic iSCSI HBA Driver
May 14 18:13:09.110108 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 14 18:13:09.143169 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 14 18:13:09.144661 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 14 18:13:09.197665 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 14 18:13:09.199390 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 14 18:13:09.253606 kernel: raid6: avx2x4 gen() 29269 MB/s
May 14 18:13:09.270606 kernel: raid6: avx2x2 gen() 30541 MB/s
May 14 18:13:09.287727 kernel: raid6: avx2x1 gen() 25417 MB/s
May 14 18:13:09.287791 kernel: raid6: using algorithm avx2x2 gen() 30541 MB/s
May 14 18:13:09.305738 kernel: raid6: .... xor() 19510 MB/s, rmw enabled
May 14 18:13:09.305796 kernel: raid6: using avx2x2 recovery algorithm
May 14 18:13:09.326596 kernel: xor: automatically using best checksumming function avx
May 14 18:13:09.493606 kernel: Btrfs loaded, zoned=no, fsverity=no
May 14 18:13:09.501524 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 14 18:13:09.504415 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 18:13:09.535722 systemd-udevd[472]: Using default interface naming scheme 'v255'.
May 14 18:13:09.541062 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 18:13:09.542840 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 14 18:13:09.566533 dracut-pre-trigger[478]: rd.md=0: removing MD RAID activation
May 14 18:13:09.597867 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 14 18:13:09.601651 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 14 18:13:09.673403 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 18:13:09.678676 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 14 18:13:09.712582 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
May 14 18:13:09.734653 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 14 18:13:09.734833 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 14 18:13:09.734849 kernel: GPT:9289727 != 19775487
May 14 18:13:09.734861 kernel: GPT:Alternate GPT header not at the end of the disk.
May 14 18:13:09.734874 kernel: GPT:9289727 != 19775487
May 14 18:13:09.734887 kernel: GPT: Use GNU Parted to correct GPT errors.
May 14 18:13:09.734909 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 14 18:13:09.734922 kernel: cryptd: max_cpu_qlen set to 1000
May 14 18:13:09.739336 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
May 14 18:13:09.744579 kernel: libata version 3.00 loaded.
May 14 18:13:09.746598 kernel: AES CTR mode by8 optimization enabled
May 14 18:13:09.763040 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 14 18:13:09.763221 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 14 18:13:09.768434 kernel: ahci 0000:00:1f.2: version 3.0
May 14 18:13:09.796096 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 14 18:13:09.796114 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
May 14 18:13:09.796275 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
May 14 18:13:09.796412 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 14 18:13:09.796548 kernel: scsi host0: ahci
May 14 18:13:09.796722 kernel: scsi host1: ahci
May 14 18:13:09.796905 kernel: scsi host2: ahci
May 14 18:13:09.797069 kernel: scsi host3: ahci
May 14 18:13:09.797207 kernel: scsi host4: ahci
May 14 18:13:09.797355 kernel: scsi host5: ahci
May 14 18:13:09.797490 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 0
May 14 18:13:09.797502 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 0
May 14 18:13:09.797516 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 0
May 14 18:13:09.797526 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 0
May 14 18:13:09.797536 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 0
May 14 18:13:09.797546 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 0
May 14 18:13:09.767208 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 14 18:13:09.774083 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 18:13:09.789255 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 14 18:13:09.811521 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 18:13:09.825011 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 14 18:13:09.843181 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 14 18:13:09.844622 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 14 18:13:09.857087 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 14 18:13:09.860213 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 14 18:13:09.892748 disk-uuid[633]: Primary Header is updated.
May 14 18:13:09.892748 disk-uuid[633]: Secondary Entries is updated.
May 14 18:13:09.892748 disk-uuid[633]: Secondary Header is updated.
May 14 18:13:09.896574 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 14 18:13:09.901587 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 14 18:13:10.104135 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 14 18:13:10.104227 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 14 18:13:10.104251 kernel: ata2: SATA link down (SStatus 0 SControl 300)
May 14 18:13:10.105600 kernel: ata1: SATA link down (SStatus 0 SControl 300)
May 14 18:13:10.106599 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
May 14 18:13:10.106620 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
May 14 18:13:10.107197 kernel: ata3.00: applying bridge limits
May 14 18:13:10.108592 kernel: ata3.00: configured for UDMA/100
May 14 18:13:10.108616 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
May 14 18:13:10.112613 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 14 18:13:10.168138 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
May 14 18:13:10.188351 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 14 18:13:10.188375 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
May 14 18:13:10.595194 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 14 18:13:10.596308 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 14 18:13:10.597855 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 14 18:13:10.598212 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 14 18:13:10.603444 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 14 18:13:10.631781 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 14 18:13:10.911607 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 14 18:13:10.911668 disk-uuid[634]: The operation has completed successfully.
May 14 18:13:10.936813 systemd[1]: disk-uuid.service: Deactivated successfully.
May 14 18:13:10.936944 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 14 18:13:10.976033 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 14 18:13:11.000614 sh[663]: Success
May 14 18:13:11.017796 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 14 18:13:11.017833 kernel: device-mapper: uevent: version 1.0.3
May 14 18:13:11.018938 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
May 14 18:13:11.027604 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
May 14 18:13:11.057942 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 14 18:13:11.060492 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 14 18:13:11.076743 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 14 18:13:11.085116 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
May 14 18:13:11.085162 kernel: BTRFS: device fsid dedcf745-d4ff-44ac-b61c-5ec1bad114c7 devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (675)
May 14 18:13:11.086462 kernel: BTRFS info (device dm-0): first mount of filesystem dedcf745-d4ff-44ac-b61c-5ec1bad114c7
May 14 18:13:11.086483 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 14 18:13:11.087960 kernel: BTRFS info (device dm-0): using free-space-tree
May 14 18:13:11.092146 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 14 18:13:11.093355 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
May 14 18:13:11.094465 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 14 18:13:11.096501 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 14 18:13:11.100744 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 14 18:13:11.131595 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (708)
May 14 18:13:11.133856 kernel: BTRFS info (device vda6): first mount of filesystem 9b1e3c61-417b-43c0-b064-c7db19a42998
May 14 18:13:11.133885 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 14 18:13:11.133896 kernel: BTRFS info (device vda6): using free-space-tree
May 14 18:13:11.141584 kernel: BTRFS info (device vda6): last unmount of filesystem 9b1e3c61-417b-43c0-b064-c7db19a42998
May 14 18:13:11.142899 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 14 18:13:11.147716 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 14 18:13:11.233102 ignition[755]: Ignition 2.21.0
May 14 18:13:11.233116 ignition[755]: Stage: fetch-offline
May 14 18:13:11.233154 ignition[755]: no configs at "/usr/lib/ignition/base.d"
May 14 18:13:11.233162 ignition[755]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 18:13:11.233269 ignition[755]: parsed url from cmdline: ""
May 14 18:13:11.233274 ignition[755]: no config URL provided
May 14 18:13:11.233281 ignition[755]: reading system config file "/usr/lib/ignition/user.ign"
May 14 18:13:11.233291 ignition[755]: no config at "/usr/lib/ignition/user.ign"
May 14 18:13:11.239604 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 14 18:13:11.233317 ignition[755]: op(1): [started] loading QEMU firmware config module
May 14 18:13:11.244921 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 14 18:13:11.233324 ignition[755]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 14 18:13:11.243203 ignition[755]: op(1): [finished] loading QEMU firmware config module
May 14 18:13:11.243240 ignition[755]: QEMU firmware config was not found. Ignoring...
May 14 18:13:11.290913 systemd-networkd[853]: lo: Link UP
May 14 18:13:11.290926 systemd-networkd[853]: lo: Gained carrier
May 14 18:13:11.292423 systemd-networkd[853]: Enumeration completed
May 14 18:13:11.292646 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 14 18:13:11.293811 systemd-networkd[853]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 18:13:11.293816 systemd-networkd[853]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 14 18:13:11.298918 ignition[755]: parsing config with SHA512: 863d32a7c463bcce094c6f769be85748c9797e50d067ff28ff1b765709a803a4a79936d11116f28d9ed505d1a5538b4bcdca0407c4d3e4a8991debf6ab511fff
May 14 18:13:11.295046 systemd-networkd[853]: eth0: Link UP
May 14 18:13:11.295050 systemd-networkd[853]: eth0: Gained carrier
May 14 18:13:11.295058 systemd-networkd[853]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 18:13:11.295422 systemd[1]: Reached target network.target - Network.
May 14 18:13:11.306923 ignition[755]: fetch-offline: fetch-offline passed
May 14 18:13:11.306552 unknown[755]: fetched base config from "system"
May 14 18:13:11.306974 ignition[755]: Ignition finished successfully
May 14 18:13:11.306575 unknown[755]: fetched user config from "qemu"
May 14 18:13:11.310117 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 14 18:13:11.312536 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 14 18:13:11.313611 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 14 18:13:11.315623 systemd-networkd[853]: eth0: DHCPv4 address 10.0.0.145/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 14 18:13:11.341381 ignition[857]: Ignition 2.21.0
May 14 18:13:11.341395 ignition[857]: Stage: kargs
May 14 18:13:11.341522 ignition[857]: no configs at "/usr/lib/ignition/base.d"
May 14 18:13:11.341531 ignition[857]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 18:13:11.342738 ignition[857]: kargs: kargs passed
May 14 18:13:11.342830 ignition[857]: Ignition finished successfully
May 14 18:13:11.347788 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 14 18:13:11.350302 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 14 18:13:11.391095 ignition[866]: Ignition 2.21.0
May 14 18:13:11.391109 ignition[866]: Stage: disks
May 14 18:13:11.391252 ignition[866]: no configs at "/usr/lib/ignition/base.d"
May 14 18:13:11.391263 ignition[866]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 18:13:11.394135 ignition[866]: disks: disks passed
May 14 18:13:11.394648 ignition[866]: Ignition finished successfully
May 14 18:13:11.398506 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 14 18:13:11.399920 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 14 18:13:11.401134 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 14 18:13:11.401482 systemd[1]: Reached target local-fs.target - Local File Systems.
May 14 18:13:11.406092 systemd[1]: Reached target sysinit.target - System Initialization.
May 14 18:13:11.406498 systemd[1]: Reached target basic.target - Basic System.
May 14 18:13:11.410977 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 14 18:13:11.439718 systemd-fsck[877]: ROOT: clean, 15/553520 files, 52789/553472 blocks
May 14 18:13:11.450165 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 14 18:13:11.451344 systemd[1]: Mounting sysroot.mount - /sysroot...
May 14 18:13:11.555597 kernel: EXT4-fs (vda9): mounted filesystem d6072e19-4548-4806-a012-87bb17c59f4c r/w with ordered data mode. Quota mode: none.
May 14 18:13:11.556139 systemd[1]: Mounted sysroot.mount - /sysroot.
May 14 18:13:11.557202 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 14 18:13:11.559295 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 14 18:13:11.561529 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 14 18:13:11.563290 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 14 18:13:11.563345 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 14 18:13:11.563373 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 14 18:13:11.569762 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 14 18:13:11.578985 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (885)
May 14 18:13:11.574474 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 14 18:13:11.583680 kernel: BTRFS info (device vda6): first mount of filesystem 9b1e3c61-417b-43c0-b064-c7db19a42998
May 14 18:13:11.583700 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 14 18:13:11.583714 kernel: BTRFS info (device vda6): using free-space-tree
May 14 18:13:11.587807 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 14 18:13:11.617397 initrd-setup-root[909]: cut: /sysroot/etc/passwd: No such file or directory
May 14 18:13:11.621931 initrd-setup-root[916]: cut: /sysroot/etc/group: No such file or directory
May 14 18:13:11.627208 initrd-setup-root[923]: cut: /sysroot/etc/shadow: No such file or directory
May 14 18:13:11.632166 initrd-setup-root[930]: cut: /sysroot/etc/gshadow: No such file or directory
May 14 18:13:11.721198 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 14 18:13:11.724522 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 14 18:13:11.727662 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 14 18:13:11.750602 kernel: BTRFS info (device vda6): last unmount of filesystem 9b1e3c61-417b-43c0-b064-c7db19a42998
May 14 18:13:11.762228 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 14 18:13:11.777295 ignition[999]: INFO : Ignition 2.21.0
May 14 18:13:11.777295 ignition[999]: INFO : Stage: mount
May 14 18:13:11.780381 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 18:13:11.780381 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 18:13:11.780381 ignition[999]: INFO : mount: mount passed
May 14 18:13:11.780381 ignition[999]: INFO : Ignition finished successfully
May 14 18:13:11.782869 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 14 18:13:11.785721 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 14 18:13:12.084509 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 14 18:13:12.086675 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 14 18:13:12.115593 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (1012)
May 14 18:13:12.117669 kernel: BTRFS info (device vda6): first mount of filesystem 9b1e3c61-417b-43c0-b064-c7db19a42998
May 14 18:13:12.117699 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 14 18:13:12.117713 kernel: BTRFS info (device vda6): using free-space-tree
May 14 18:13:12.122001 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 14 18:13:12.154723 ignition[1029]: INFO : Ignition 2.21.0 May 14 18:13:12.155956 ignition[1029]: INFO : Stage: files May 14 18:13:12.155956 ignition[1029]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 18:13:12.155956 ignition[1029]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 18:13:12.158946 ignition[1029]: DEBUG : files: compiled without relabeling support, skipping May 14 18:13:12.160893 ignition[1029]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 14 18:13:12.160893 ignition[1029]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 14 18:13:12.163960 ignition[1029]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 14 18:13:12.163960 ignition[1029]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 14 18:13:12.163960 ignition[1029]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 14 18:13:12.163861 unknown[1029]: wrote ssh authorized keys file for user: core May 14 18:13:12.169774 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 14 18:13:12.169774 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 14 18:13:12.212645 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 14 18:13:12.478461 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 14 18:13:12.480873 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 14 18:13:12.480873 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
May 14 18:13:12.480873 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 14 18:13:12.480873 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 14 18:13:12.480873 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 14 18:13:12.480873 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 14 18:13:12.480873 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 14 18:13:12.480873 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 14 18:13:12.497786 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 14 18:13:12.497786 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 14 18:13:12.497786 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 14 18:13:12.497786 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 14 18:13:12.497786 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 14 18:13:12.497786 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 May 14 18:13:12.896688 systemd-networkd[853]: eth0: Gained IPv6LL May 14 18:13:12.984629 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 14 18:13:13.347641 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 14 18:13:13.347641 ignition[1029]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 14 18:13:13.352228 ignition[1029]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 14 18:13:13.356784 ignition[1029]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 14 18:13:13.356784 ignition[1029]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 14 18:13:13.356784 ignition[1029]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" May 14 18:13:13.362773 ignition[1029]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 14 18:13:13.362773 ignition[1029]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 14 18:13:13.362773 ignition[1029]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" May 14 18:13:13.362773 ignition[1029]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" May 14 18:13:13.378152 ignition[1029]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" May 14 18:13:13.382350 ignition[1029]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for 
"coreos-metadata.service" May 14 18:13:13.384257 ignition[1029]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" May 14 18:13:13.384257 ignition[1029]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" May 14 18:13:13.384257 ignition[1029]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" May 14 18:13:13.384257 ignition[1029]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" May 14 18:13:13.384257 ignition[1029]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" May 14 18:13:13.384257 ignition[1029]: INFO : files: files passed May 14 18:13:13.384257 ignition[1029]: INFO : Ignition finished successfully May 14 18:13:13.385635 systemd[1]: Finished ignition-files.service - Ignition (files). May 14 18:13:13.387905 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 14 18:13:13.392714 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 14 18:13:13.409740 systemd[1]: ignition-quench.service: Deactivated successfully. May 14 18:13:13.409888 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 14 18:13:13.413247 initrd-setup-root-after-ignition[1059]: grep: /sysroot/oem/oem-release: No such file or directory May 14 18:13:13.416235 initrd-setup-root-after-ignition[1061]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 14 18:13:13.417878 initrd-setup-root-after-ignition[1061]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 14 18:13:13.420420 initrd-setup-root-after-ignition[1065]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 14 18:13:13.423858 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. 
May 14 18:13:13.424506 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 14 18:13:13.425711 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 14 18:13:13.480185 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 14 18:13:13.480328 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 14 18:13:13.481304 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 14 18:13:13.484229 systemd[1]: Reached target initrd.target - Initrd Default Target. May 14 18:13:13.484824 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 14 18:13:13.488232 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 14 18:13:13.526424 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 14 18:13:13.530164 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 14 18:13:13.559713 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 14 18:13:13.560461 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 14 18:13:13.561032 systemd[1]: Stopped target timers.target - Timer Units. May 14 18:13:13.561405 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 14 18:13:13.561597 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 14 18:13:13.569609 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 14 18:13:13.570027 systemd[1]: Stopped target basic.target - Basic System. May 14 18:13:13.572146 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 14 18:13:13.574281 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 14 18:13:13.574629 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
May 14 18:13:13.575217 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. May 14 18:13:13.575614 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 14 18:13:13.576133 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 14 18:13:13.576549 systemd[1]: Stopped target sysinit.target - System Initialization. May 14 18:13:13.577117 systemd[1]: Stopped target local-fs.target - Local File Systems. May 14 18:13:13.577494 systemd[1]: Stopped target swap.target - Swaps. May 14 18:13:13.578016 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 14 18:13:13.578199 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 14 18:13:13.594659 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 14 18:13:13.595019 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 14 18:13:13.597382 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 14 18:13:13.597499 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 14 18:13:13.599878 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 14 18:13:13.600019 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 14 18:13:13.602609 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 14 18:13:13.602719 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 14 18:13:13.603229 systemd[1]: Stopped target paths.target - Path Units. May 14 18:13:13.603517 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 14 18:13:13.607672 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 14 18:13:13.609633 systemd[1]: Stopped target slices.target - Slice Units. May 14 18:13:13.612132 systemd[1]: Stopped target sockets.target - Socket Units. 
May 14 18:13:13.613988 systemd[1]: iscsid.socket: Deactivated successfully. May 14 18:13:13.614090 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 14 18:13:13.616062 systemd[1]: iscsiuio.socket: Deactivated successfully. May 14 18:13:13.616141 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 14 18:13:13.618677 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 14 18:13:13.618806 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 14 18:13:13.619206 systemd[1]: ignition-files.service: Deactivated successfully. May 14 18:13:13.619316 systemd[1]: Stopped ignition-files.service - Ignition (files). May 14 18:13:13.624777 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 14 18:13:13.625923 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 14 18:13:13.627550 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 14 18:13:13.627719 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 14 18:13:13.630304 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 14 18:13:13.630408 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 14 18:13:13.639475 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 14 18:13:13.639830 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
May 14 18:13:13.654575 ignition[1085]: INFO : Ignition 2.21.0 May 14 18:13:13.654575 ignition[1085]: INFO : Stage: umount May 14 18:13:13.654575 ignition[1085]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 18:13:13.654575 ignition[1085]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 18:13:13.658917 ignition[1085]: INFO : umount: umount passed May 14 18:13:13.658917 ignition[1085]: INFO : Ignition finished successfully May 14 18:13:13.660997 systemd[1]: ignition-mount.service: Deactivated successfully. May 14 18:13:13.661148 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 14 18:13:13.663621 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 14 18:13:13.664628 systemd[1]: Stopped target network.target - Network. May 14 18:13:13.665023 systemd[1]: ignition-disks.service: Deactivated successfully. May 14 18:13:13.665089 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 14 18:13:13.665469 systemd[1]: ignition-kargs.service: Deactivated successfully. May 14 18:13:13.665514 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 14 18:13:13.669052 systemd[1]: ignition-setup.service: Deactivated successfully. May 14 18:13:13.669105 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 14 18:13:13.671910 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 14 18:13:13.671956 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 14 18:13:13.672678 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 14 18:13:13.675255 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 14 18:13:13.683717 systemd[1]: systemd-resolved.service: Deactivated successfully. May 14 18:13:13.683874 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 14 18:13:13.688004 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. 
May 14 18:13:13.688258 systemd[1]: systemd-networkd.service: Deactivated successfully. May 14 18:13:13.688389 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 14 18:13:13.692026 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 14 18:13:13.692731 systemd[1]: Stopped target network-pre.target - Preparation for Network. May 14 18:13:13.695853 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 14 18:13:13.695902 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 14 18:13:13.698229 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 14 18:13:13.701474 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 14 18:13:13.701576 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 14 18:13:13.702172 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 14 18:13:13.702244 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 14 18:13:13.707745 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 14 18:13:13.707818 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 14 18:13:13.708380 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 14 18:13:13.708435 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 14 18:13:13.713573 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 14 18:13:13.715217 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 14 18:13:13.715304 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 14 18:13:13.738590 systemd[1]: systemd-udevd.service: Deactivated successfully. May 14 18:13:13.738781 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. 
May 14 18:13:13.744450 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 14 18:13:13.744495 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 14 18:13:13.746875 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 14 18:13:13.746914 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 14 18:13:13.747181 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 14 18:13:13.747228 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 14 18:13:13.748031 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 14 18:13:13.748082 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 14 18:13:13.748854 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 14 18:13:13.748899 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 14 18:13:13.750391 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 14 18:13:13.758795 systemd[1]: systemd-network-generator.service: Deactivated successfully. May 14 18:13:13.758861 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. May 14 18:13:13.762952 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 14 18:13:13.763022 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 14 18:13:13.766261 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 14 18:13:13.766321 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 14 18:13:13.769733 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 14 18:13:13.769796 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 14 18:13:13.770343 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
May 14 18:13:13.770397 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 14 18:13:13.777228 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. May 14 18:13:13.777289 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. May 14 18:13:13.777346 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 14 18:13:13.777408 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 14 18:13:13.777897 systemd[1]: network-cleanup.service: Deactivated successfully. May 14 18:13:13.778040 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 14 18:13:13.783647 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 14 18:13:13.783779 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 14 18:13:13.811696 systemd[1]: sysroot-boot.service: Deactivated successfully. May 14 18:13:13.811833 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 14 18:13:13.812743 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 14 18:13:13.812999 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 14 18:13:13.813050 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 14 18:13:13.818411 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 14 18:13:13.842958 systemd[1]: Switching root. May 14 18:13:13.885109 systemd-journald[218]: Journal stopped May 14 18:13:14.989335 systemd-journald[218]: Received SIGTERM from PID 1 (systemd). 
May 14 18:13:14.989393 kernel: SELinux: policy capability network_peer_controls=1 May 14 18:13:14.989407 kernel: SELinux: policy capability open_perms=1 May 14 18:13:14.989418 kernel: SELinux: policy capability extended_socket_class=1 May 14 18:13:14.989429 kernel: SELinux: policy capability always_check_network=0 May 14 18:13:14.989440 kernel: SELinux: policy capability cgroup_seclabel=1 May 14 18:13:14.989451 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 14 18:13:14.989465 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 14 18:13:14.989476 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 14 18:13:14.989487 kernel: SELinux: policy capability userspace_initial_context=0 May 14 18:13:14.989504 kernel: audit: type=1403 audit(1747246394.213:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 14 18:13:14.989521 systemd[1]: Successfully loaded SELinux policy in 53.365ms. May 14 18:13:14.989543 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.554ms. May 14 18:13:14.989582 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 14 18:13:14.989596 systemd[1]: Detected virtualization kvm. May 14 18:13:14.989610 systemd[1]: Detected architecture x86-64. May 14 18:13:14.989622 systemd[1]: Detected first boot. May 14 18:13:14.989633 systemd[1]: Initializing machine ID from VM UUID. May 14 18:13:14.989645 zram_generator::config[1130]: No configuration found. 
May 14 18:13:14.989658 kernel: Guest personality initialized and is inactive May 14 18:13:14.989674 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 14 18:13:14.989685 kernel: Initialized host personality May 14 18:13:14.989696 kernel: NET: Registered PF_VSOCK protocol family May 14 18:13:14.989707 systemd[1]: Populated /etc with preset unit settings. May 14 18:13:14.989726 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 14 18:13:14.989738 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 14 18:13:14.989749 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 14 18:13:14.989761 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 14 18:13:14.989777 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 14 18:13:14.989789 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 14 18:13:14.989801 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 14 18:13:14.989814 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 14 18:13:14.989829 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 14 18:13:14.989841 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 14 18:13:14.989854 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 14 18:13:14.989866 systemd[1]: Created slice user.slice - User and Session Slice. May 14 18:13:14.989878 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 14 18:13:14.989889 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 14 18:13:14.989901 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
May 14 18:13:14.989914 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 14 18:13:14.989926 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 14 18:13:14.989940 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 14 18:13:14.989952 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 14 18:13:14.989963 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 14 18:13:14.989975 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 14 18:13:14.989987 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 14 18:13:14.989999 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 14 18:13:14.990011 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 14 18:13:14.990023 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 14 18:13:14.990037 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 14 18:13:14.990049 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 14 18:13:14.990061 systemd[1]: Reached target slices.target - Slice Units. May 14 18:13:14.990074 systemd[1]: Reached target swap.target - Swaps. May 14 18:13:14.990086 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 14 18:13:14.990098 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 14 18:13:14.990110 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 14 18:13:14.990122 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 14 18:13:14.990134 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 14 18:13:14.990147 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
May 14 18:13:14.990167 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 14 18:13:14.990180 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 14 18:13:14.990191 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 14 18:13:14.990204 systemd[1]: Mounting media.mount - External Media Directory... May 14 18:13:14.990216 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 18:13:14.990228 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 14 18:13:14.990240 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 14 18:13:14.990252 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 14 18:13:14.990266 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 14 18:13:14.990278 systemd[1]: Reached target machines.target - Containers. May 14 18:13:14.990290 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 14 18:13:14.990302 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 14 18:13:14.990314 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 14 18:13:14.990326 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 14 18:13:14.990339 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 14 18:13:14.990351 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 14 18:13:14.990365 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 14 18:13:14.990377 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
May 14 18:13:14.990389 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 14 18:13:14.990401 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 14 18:13:14.990413 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 14 18:13:14.990425 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 14 18:13:14.990437 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 14 18:13:14.990449 systemd[1]: Stopped systemd-fsck-usr.service. May 14 18:13:14.990461 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 18:13:14.990475 systemd[1]: Starting systemd-journald.service - Journal Service... May 14 18:13:14.990487 kernel: loop: module loaded May 14 18:13:14.990498 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 14 18:13:14.990510 kernel: ACPI: bus type drm_connector registered May 14 18:13:14.990521 kernel: fuse: init (API version 7.41) May 14 18:13:14.990533 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 14 18:13:14.990545 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 14 18:13:14.990569 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 14 18:13:14.990585 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 14 18:13:14.990597 systemd[1]: verity-setup.service: Deactivated successfully. May 14 18:13:14.990609 systemd[1]: Stopped verity-setup.service. 
May 14 18:13:14.990621 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 18:13:14.990634 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 14 18:13:14.990648 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 14 18:13:14.990660 systemd[1]: Mounted media.mount - External Media Directory. May 14 18:13:14.990672 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 14 18:13:14.990702 systemd-journald[1201]: Collecting audit messages is disabled. May 14 18:13:14.990726 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 14 18:13:14.990738 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 14 18:13:14.990750 systemd-journald[1201]: Journal started May 14 18:13:14.990772 systemd-journald[1201]: Runtime Journal (/run/log/journal/84c3c5932842412dbcba688e1d54d2f2) is 6M, max 48.5M, 42.4M free. May 14 18:13:14.729653 systemd[1]: Queued start job for default target multi-user.target. May 14 18:13:14.756504 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 14 18:13:14.756954 systemd[1]: systemd-journald.service: Deactivated successfully. May 14 18:13:14.992803 systemd[1]: Started systemd-journald.service - Journal Service. May 14 18:13:14.993775 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 14 18:13:14.995238 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 14 18:13:14.996783 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 14 18:13:14.996983 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 14 18:13:14.998418 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 18:13:14.998639 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
May 14 18:13:15.000036 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 14 18:13:15.000241 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 14 18:13:15.001568 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 18:13:15.001768 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 18:13:15.003245 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 14 18:13:15.003441 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 14 18:13:15.004920 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 18:13:15.005124 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 18:13:15.006492 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 14 18:13:15.007883 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 14 18:13:15.009424 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 14 18:13:15.010939 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 14 18:13:15.023399 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 14 18:13:15.025945 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 14 18:13:15.028325 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 14 18:13:15.029579 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 14 18:13:15.029670 systemd[1]: Reached target local-fs.target - Local File Systems.
May 14 18:13:15.031790 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 14 18:13:15.033737 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 14 18:13:15.035727 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 18:13:15.036858 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 14 18:13:15.039547 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 14 18:13:15.041113 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 14 18:13:15.042430 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 14 18:13:15.044463 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 14 18:13:15.046694 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 14 18:13:15.050319 systemd-journald[1201]: Time spent on flushing to /var/log/journal/84c3c5932842412dbcba688e1d54d2f2 is 18.286ms for 1064 entries.
May 14 18:13:15.050319 systemd-journald[1201]: System Journal (/var/log/journal/84c3c5932842412dbcba688e1d54d2f2) is 8M, max 195.6M, 187.6M free.
May 14 18:13:15.101939 systemd-journald[1201]: Received client request to flush runtime journal.
May 14 18:13:15.102007 kernel: loop0: detected capacity change from 0 to 113872
May 14 18:13:15.051789 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 14 18:13:15.054045 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 14 18:13:15.056790 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 14 18:13:15.058893 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 14 18:13:15.068738 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 14 18:13:15.070372 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 18:13:15.072499 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 14 18:13:15.077219 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 14 18:13:15.088862 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 14 18:13:15.102632 systemd-tmpfiles[1250]: ACLs are not supported, ignoring.
May 14 18:13:15.102645 systemd-tmpfiles[1250]: ACLs are not supported, ignoring.
May 14 18:13:15.104985 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 14 18:13:15.106587 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 14 18:13:15.111719 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 14 18:13:15.115755 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 14 18:13:15.124927 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 14 18:13:15.134584 kernel: loop1: detected capacity change from 0 to 205544
May 14 18:13:15.156100 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 14 18:13:15.160230 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 14 18:13:15.164215 kernel: loop2: detected capacity change from 0 to 146240
May 14 18:13:15.190515 systemd-tmpfiles[1270]: ACLs are not supported, ignoring.
May 14 18:13:15.190538 systemd-tmpfiles[1270]: ACLs are not supported, ignoring.
May 14 18:13:15.193577 kernel: loop3: detected capacity change from 0 to 113872
May 14 18:13:15.197406 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 18:13:15.206574 kernel: loop4: detected capacity change from 0 to 205544
May 14 18:13:15.215601 kernel: loop5: detected capacity change from 0 to 146240
May 14 18:13:15.225625 (sd-merge)[1273]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 14 18:13:15.226250 (sd-merge)[1273]: Merged extensions into '/usr'.
May 14 18:13:15.232129 systemd[1]: Reload requested from client PID 1249 ('systemd-sysext') (unit systemd-sysext.service)...
May 14 18:13:15.232155 systemd[1]: Reloading...
May 14 18:13:15.287581 zram_generator::config[1301]: No configuration found.
May 14 18:13:15.383008 ldconfig[1244]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 14 18:13:15.386627 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 18:13:15.467047 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 14 18:13:15.467429 systemd[1]: Reloading finished in 234 ms.
May 14 18:13:15.489923 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 14 18:13:15.491448 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 14 18:13:15.508131 systemd[1]: Starting ensure-sysext.service...
May 14 18:13:15.510388 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 14 18:13:15.519887 systemd[1]: Reload requested from client PID 1337 ('systemctl') (unit ensure-sysext.service)...
May 14 18:13:15.519908 systemd[1]: Reloading...
May 14 18:13:15.531380 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
May 14 18:13:15.531414 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
May 14 18:13:15.531817 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 14 18:13:15.532062 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 14 18:13:15.532950 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 14 18:13:15.533210 systemd-tmpfiles[1338]: ACLs are not supported, ignoring.
May 14 18:13:15.533283 systemd-tmpfiles[1338]: ACLs are not supported, ignoring.
May 14 18:13:15.537368 systemd-tmpfiles[1338]: Detected autofs mount point /boot during canonicalization of boot.
May 14 18:13:15.537380 systemd-tmpfiles[1338]: Skipping /boot
May 14 18:13:15.550673 systemd-tmpfiles[1338]: Detected autofs mount point /boot during canonicalization of boot.
May 14 18:13:15.550689 systemd-tmpfiles[1338]: Skipping /boot
May 14 18:13:15.571583 zram_generator::config[1365]: No configuration found.
May 14 18:13:15.666923 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 18:13:15.746687 systemd[1]: Reloading finished in 226 ms.
May 14 18:13:15.773822 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 14 18:13:15.798455 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 18:13:15.807113 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 14 18:13:15.809630 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 14 18:13:15.817507 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 14 18:13:15.821456 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 14 18:13:15.825740 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 18:13:15.829172 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 14 18:13:15.833110 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 18:13:15.833288 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 18:13:15.839435 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 18:13:15.843741 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 18:13:15.846667 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 18:13:15.848022 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 18:13:15.848174 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 18:13:15.848301 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 18:13:15.849916 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 14 18:13:15.854423 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 18:13:15.854688 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 18:13:15.856592 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 18:13:15.856855 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 18:13:15.859730 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 18:13:15.860002 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 18:13:15.869084 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 18:13:15.869301 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 18:13:15.870717 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 18:13:15.873036 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 18:13:15.875826 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 18:13:15.876504 augenrules[1437]: No rules
May 14 18:13:15.877172 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 18:13:15.877665 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 18:13:15.880287 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 14 18:13:15.880951 systemd-udevd[1409]: Using default interface naming scheme 'v255'.
May 14 18:13:15.886691 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 14 18:13:15.887935 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 18:13:15.889769 systemd[1]: audit-rules.service: Deactivated successfully.
May 14 18:13:15.890035 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 14 18:13:15.892077 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 14 18:13:15.894591 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 18:13:15.894807 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 18:13:15.896891 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 14 18:13:15.898608 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 18:13:15.898903 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 18:13:15.900692 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 18:13:15.900888 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 18:13:15.902478 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 14 18:13:15.913327 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 18:13:15.915364 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 14 18:13:15.916507 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 18:13:15.918720 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 18:13:15.928198 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 14 18:13:15.933813 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 18:13:15.936609 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 18:13:15.937848 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 18:13:15.937956 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 18:13:15.938082 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 14 18:13:15.938170 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 18:13:15.939218 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 18:13:15.942148 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 14 18:13:15.945021 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 18:13:15.947782 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 18:13:15.959251 systemd[1]: Finished ensure-sysext.service.
May 14 18:13:15.959728 augenrules[1455]: /sbin/augenrules: No change
May 14 18:13:15.960907 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 18:13:15.961116 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 18:13:15.962893 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 14 18:13:15.964033 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 14 18:13:15.965410 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 18:13:15.965645 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 18:13:15.970802 augenrules[1511]: No rules
May 14 18:13:15.972185 systemd[1]: audit-rules.service: Deactivated successfully.
May 14 18:13:15.972487 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 14 18:13:15.981531 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 14 18:13:15.982622 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 14 18:13:15.982685 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 14 18:13:15.985138 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 14 18:13:16.008679 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 14 18:13:16.057672 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 14 18:13:16.060178 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 14 18:13:16.070599 kernel: mousedev: PS/2 mouse device common for all mice
May 14 18:13:16.085624 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 14 18:13:16.106491 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
May 14 18:13:16.106532 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
May 14 18:13:16.106916 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
May 14 18:13:16.107080 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 14 18:13:16.114066 kernel: ACPI: button: Power Button [PWRF]
May 14 18:13:16.137327 systemd-networkd[1518]: lo: Link UP
May 14 18:13:16.137340 systemd-networkd[1518]: lo: Gained carrier
May 14 18:13:16.140025 systemd-networkd[1518]: Enumeration completed
May 14 18:13:16.140121 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 14 18:13:16.143669 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 14 18:13:16.145184 systemd-networkd[1518]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 18:13:16.145196 systemd-networkd[1518]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 14 18:13:16.147740 systemd-networkd[1518]: eth0: Link UP
May 14 18:13:16.147916 systemd-networkd[1518]: eth0: Gained carrier
May 14 18:13:16.147929 systemd-networkd[1518]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 18:13:16.148903 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 14 18:13:16.154570 systemd-resolved[1407]: Positive Trust Anchors:
May 14 18:13:16.154587 systemd-resolved[1407]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 14 18:13:16.154618 systemd-resolved[1407]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 14 18:13:16.161536 systemd-resolved[1407]: Defaulting to hostname 'linux'.
May 14 18:13:16.163641 systemd-networkd[1518]: eth0: DHCPv4 address 10.0.0.145/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 14 18:13:16.169005 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 14 18:13:16.170327 systemd[1]: Reached target network.target - Network.
May 14 18:13:16.171269 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 14 18:13:16.186615 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 14 18:13:16.192733 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 14 18:13:16.194040 systemd[1]: Reached target sysinit.target - System Initialization.
May 14 18:13:16.195759 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 14 18:13:17.595674 systemd-timesyncd[1519]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 14 18:13:17.595717 systemd-timesyncd[1519]: Initial clock synchronization to Wed 2025-05-14 18:13:17.595574 UTC.
May 14 18:13:17.595733 systemd-resolved[1407]: Clock change detected. Flushing caches.
May 14 18:13:17.596569 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 14 18:13:17.597828 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
May 14 18:13:17.600170 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 14 18:13:17.601422 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 14 18:13:17.601454 systemd[1]: Reached target paths.target - Path Units.
May 14 18:13:17.602765 systemd[1]: Reached target time-set.target - System Time Set.
May 14 18:13:17.605298 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 14 18:13:17.606470 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 14 18:13:17.608155 systemd[1]: Reached target timers.target - Timer Units.
May 14 18:13:17.609796 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 14 18:13:17.613420 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 14 18:13:17.619909 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 14 18:13:17.621537 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 14 18:13:17.622933 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 14 18:13:17.630010 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 14 18:13:17.633269 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 14 18:13:17.636537 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 14 18:13:17.650468 systemd[1]: Reached target sockets.target - Socket Units.
May 14 18:13:17.651646 systemd[1]: Reached target basic.target - Basic System.
May 14 18:13:17.652909 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 14 18:13:17.653051 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 14 18:13:17.656357 systemd[1]: Starting containerd.service - containerd container runtime...
May 14 18:13:17.659419 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 14 18:13:17.663329 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 14 18:13:17.669411 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 14 18:13:17.674604 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 14 18:13:17.677375 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 14 18:13:17.685462 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
May 14 18:13:17.691589 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 14 18:13:17.693892 kernel: kvm_amd: TSC scaling supported
May 14 18:13:17.693920 kernel: kvm_amd: Nested Virtualization enabled
May 14 18:13:17.693932 kernel: kvm_amd: Nested Paging enabled
May 14 18:13:17.693945 kernel: kvm_amd: LBR virtualization supported
May 14 18:13:17.702138 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
May 14 18:13:17.702196 kernel: kvm_amd: Virtual GIF supported
May 14 18:13:17.703738 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 14 18:13:17.712528 jq[1558]: false
May 14 18:13:17.712793 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Refreshing passwd entry cache
May 14 18:13:17.712698 oslogin_cache_refresh[1562]: Refreshing passwd entry cache
May 14 18:13:17.713816 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 14 18:13:17.719747 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 14 18:13:17.721421 oslogin_cache_refresh[1562]: Failure getting users, quitting
May 14 18:13:17.724346 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Failure getting users, quitting
May 14 18:13:17.724346 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 14 18:13:17.724346 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Refreshing group entry cache
May 14 18:13:17.721436 oslogin_cache_refresh[1562]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 14 18:13:17.721479 oslogin_cache_refresh[1562]: Refreshing group entry cache
May 14 18:13:17.727611 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Failure getting groups, quitting
May 14 18:13:17.727611 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 14 18:13:17.727601 oslogin_cache_refresh[1562]: Failure getting groups, quitting
May 14 18:13:17.727611 oslogin_cache_refresh[1562]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 14 18:13:17.733115 extend-filesystems[1560]: Found loop3
May 14 18:13:17.733115 extend-filesystems[1560]: Found loop4
May 14 18:13:17.733115 extend-filesystems[1560]: Found loop5
May 14 18:13:17.733115 extend-filesystems[1560]: Found sr0
May 14 18:13:17.733115 extend-filesystems[1560]: Found vda
May 14 18:13:17.733115 extend-filesystems[1560]: Found vda1
May 14 18:13:17.733115 extend-filesystems[1560]: Found vda2
May 14 18:13:17.733115 extend-filesystems[1560]: Found vda3
May 14 18:13:17.733115 extend-filesystems[1560]: Found usr
May 14 18:13:17.733115 extend-filesystems[1560]: Found vda4
May 14 18:13:17.733115 extend-filesystems[1560]: Found vda6
May 14 18:13:17.733115 extend-filesystems[1560]: Found vda7
May 14 18:13:17.733115 extend-filesystems[1560]: Found vda9
May 14 18:13:17.733115 extend-filesystems[1560]: Checking size of /dev/vda9
May 14 18:13:17.735311 systemd[1]: Starting systemd-logind.service - User Login Management...
May 14 18:13:17.764238 extend-filesystems[1560]: Resized partition /dev/vda9
May 14 18:13:17.737115 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 14 18:13:17.739835 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 14 18:13:17.742319 systemd[1]: Starting update-engine.service - Update Engine...
May 14 18:13:17.749913 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 14 18:13:17.766633 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 14 18:13:17.768904 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 14 18:13:17.769539 update_engine[1576]: I20250514 18:13:17.768944 1576 main.cc:92] Flatcar Update Engine starting
May 14 18:13:17.769797 jq[1579]: true
May 14 18:13:17.770052 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 14 18:13:17.773474 extend-filesystems[1584]: resize2fs 1.47.2 (1-Jan-2025)
May 14 18:13:17.770645 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
May 14 18:13:17.770915 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
May 14 18:13:17.773000 systemd[1]: motdgen.service: Deactivated successfully.
May 14 18:13:17.777401 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 14 18:13:17.781134 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 14 18:13:17.781508 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 14 18:13:17.781760 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 14 18:13:17.790109 kernel: EDAC MC: Ver: 3.0.0
May 14 18:13:17.797008 (ntainerd)[1587]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 14 18:13:17.807105 jq[1586]: true
May 14 18:13:17.805796 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 18:13:17.816209 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 14 18:13:17.836601 extend-filesystems[1584]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 14 18:13:17.836601 extend-filesystems[1584]: old_desc_blocks = 1, new_desc_blocks = 1
May 14 18:13:17.836601 extend-filesystems[1584]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 14 18:13:17.850960 tar[1585]: linux-amd64/helm
May 14 18:13:17.851482 extend-filesystems[1560]: Resized filesystem in /dev/vda9
May 14 18:13:17.839710 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 14 18:13:17.842181 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 14 18:13:17.870921 dbus-daemon[1555]: [system] SELinux support is enabled
May 14 18:13:17.873229 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 14 18:13:17.874300 systemd-logind[1574]: Watching system buttons on /dev/input/event2 (Power Button)
May 14 18:13:17.874319 systemd-logind[1574]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 14 18:13:17.877693 systemd-logind[1574]: New seat seat0.
May 14 18:13:17.884925 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 14 18:13:17.886138 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 14 18:13:17.886799 update_engine[1576]: I20250514 18:13:17.886662 1576 update_check_scheduler.cc:74] Next update check in 5m34s
May 14 18:13:17.887355 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 14 18:13:17.887374 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 14 18:13:17.887721 systemd[1]: Started update-engine.service - Update Engine.
May 14 18:13:17.893318 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 14 18:13:17.893811 systemd[1]: Started systemd-logind.service - User Login Management.
May 14 18:13:17.937606 bash[1628]: Updated "/home/core/.ssh/authorized_keys"
May 14 18:13:17.938281 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 14 18:13:17.940463 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 14 18:13:17.946267 locksmithd[1620]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 14 18:13:17.967056 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 18:13:18.000765 containerd[1587]: time="2025-05-14T18:13:18Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
May 14 18:13:18.002467 containerd[1587]: time="2025-05-14T18:13:18.002412622Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
May 14 18:13:18.010029 containerd[1587]: time="2025-05-14T18:13:18.009993216Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.696µs"
May 14 18:13:18.010029 containerd[1587]: time="2025-05-14T18:13:18.010017682Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
May 14 18:13:18.010079 containerd[1587]: time="2025-05-14T18:13:18.010034824Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
May 14 18:13:18.010238 containerd[1587]: time="2025-05-14T18:13:18.010211716Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
May 14 18:13:18.010384 containerd[1587]: time="2025-05-14T18:13:18.010353773Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
May 14 18:13:18.010431 containerd[1587]: time="2025-05-14T18:13:18.010385853Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 14 18:13:18.010478 containerd[1587]: time="2025-05-14T18:13:18.010453640Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 14 18:13:18.010478 containerd[1587]: time="2025-05-14T18:13:18.010472235Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 14 18:13:18.010742 containerd[1587]: time="2025-05-14T18:13:18.010719239Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 14 18:13:18.010813 containerd[1587]: time="2025-05-14T18:13:18.010790913Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 14 18:13:18.010849 containerd[1587]: time="2025-05-14T18:13:18.010813505Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 14 18:13:18.010849 containerd[1587]: time="2025-05-14T18:13:18.010823294Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
May 14 18:13:18.010913 sshd_keygen[1583]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 14 18:13:18.011120 containerd[1587]: time="2025-05-14T18:13:18.010914415Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
May 14 18:13:18.011176 containerd[1587]: time="2025-05-14T18:13:18.011154515Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 14 18:13:18.011256 containerd[1587]: time="2025-05-14T18:13:18.011242290Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 14 18:13:18.011300 containerd[1587]: time="2025-05-14T18:13:18.011289268Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
May 14 18:13:18.011364 containerd[1587]: time="2025-05-14T18:13:18.011350172Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
May 14 18:13:18.013018 containerd[1587]: time="2025-05-14T18:13:18.012884431Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
May 14 18:13:18.013317 containerd[1587]: time="2025-05-14T18:13:18.013286796Z" level=info msg="metadata content store policy set" policy=shared
May 14 18:13:18.018667 containerd[1587]: time="2025-05-14T18:13:18.018573577Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
May 14 18:13:18.018667 containerd[1587]: time="2025-05-14T18:13:18.018614383Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
May 14 18:13:18.018667 containerd[1587]: time="2025-05-14T18:13:18.018627558Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
May 14 18:13:18.018667 containerd[1587]: time="2025-05-14T18:13:18.018639220Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
May 14 18:13:18.018771 containerd[1587]: time="2025-05-14T18:13:18.018672502Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
May 14 18:13:18.018771 containerd[1587]: time="2025-05-14T18:13:18.018683603Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
May 14 18:13:18.018771 containerd[1587]: time="2025-05-14T18:13:18.018695445Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
May 14 18:13:18.018771 containerd[1587]: time="2025-05-14T18:13:18.018707548Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
May 14 18:13:18.018771 containerd[1587]: time="2025-05-14T18:13:18.018757001Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
May 14 18:13:18.018771 containerd[1587]: time="2025-05-14T18:13:18.018768082Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
May 14 18:13:18.018881 containerd[1587]: time="2025-05-14T18:13:18.018777028Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
May 14 18:13:18.018881 containerd[1587]: time="2025-05-14T18:13:18.018789241Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
May 14 18:13:18.018922 containerd[1587]: time="2025-05-14T18:13:18.018896493Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
May 14 18:13:18.018941 containerd[1587]: time="2025-05-14T18:13:18.018919917Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
May 14 18:13:18.018941 containerd[1587]: time="2025-05-14T18:13:18.018933402Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
May 14 18:13:18.018974 containerd[1587]: time="2025-05-14T18:13:18.018943140Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
May 14 18:13:18.018974 containerd[1587]: time="2025-05-14T18:13:18.018955313Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
May 14 18:13:18.018974 containerd[1587]: time="2025-05-14T18:13:18.018965292Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
May 14 18:13:18.019029 containerd[1587]: time="2025-05-14T18:13:18.018976162Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
May 14 18:13:18.019029 containerd[1587]: time="2025-05-14T18:13:18.018986622Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
May 14 18:13:18.019029 containerd[1587]: time="2025-05-14T18:13:18.018997342Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
May 14 18:13:18.019029 containerd[1587]: time="2025-05-14T18:13:18.019009124Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
May 14 18:13:18.019029 containerd[1587]: time="2025-05-14T18:13:18.019023140Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
May 14 18:13:18.019153 containerd[1587]: time="2025-05-14T18:13:18.019126644Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
May 14 18:13:18.019153 containerd[1587]: time="2025-05-14T18:13:18.019144177Z" level=info msg="Start snapshots syncer"
May 14 18:13:18.019191 containerd[1587]: time="2025-05-14T18:13:18.019178291Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
May 14 18:13:18.019458 containerd[1587]: time="2025-05-14T18:13:18.019425165Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
May 14 18:13:18.019562 containerd[1587]: time="2025-05-14T18:13:18.019474086Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
May 14 18:13:18.019588 containerd[1587]: time="2025-05-14T18:13:18.019565117Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
May 14 18:13:18.019878 containerd[1587]: time="2025-05-14T18:13:18.019860802Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
May 14 18:13:18.019907 containerd[1587]: time="2025-05-14T18:13:18.019891740Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
May 14 18:13:18.019907 containerd[1587]: time="2025-05-14T18:13:18.019903382Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
May 14 18:13:18.019951 containerd[1587]: time="2025-05-14T18:13:18.019914843Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
May 14 18:13:18.019951 containerd[1587]: time="2025-05-14T18:13:18.019927477Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
May 14 18:13:18.019951 containerd[1587]: time="2025-05-14T18:13:18.019938297Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
May 14 18:13:18.019951 containerd[1587]: time="2025-05-14T18:13:18.019949088Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
May 14 18:13:18.020019 containerd[1587]: time="2025-05-14T18:13:18.019970157Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
May 14 18:13:18.020019 containerd[1587]: time="2025-05-14T18:13:18.019982631Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
May 14 18:13:18.020019 containerd[1587]: time="2025-05-14T18:13:18.019993761Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
May 14 18:13:18.021343 containerd[1587]: time="2025-05-14T18:13:18.020745442Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 14 18:13:18.021343 containerd[1587]: time="2025-05-14T18:13:18.020769026Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 14 18:13:18.021343 containerd[1587]: time="2025-05-14T18:13:18.020778394Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 14 18:13:18.021343 containerd[1587]: time="2025-05-14T18:13:18.020787180Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 14 18:13:18.021343 containerd[1587]: time="2025-05-14T18:13:18.020795235Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
May 14 18:13:18.021343 containerd[1587]: time="2025-05-14T18:13:18.020804673Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
May 14 18:13:18.021343 containerd[1587]: time="2025-05-14T18:13:18.020814922Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
May 14 18:13:18.021343 containerd[1587]: time="2025-05-14T18:13:18.020833076Z" level=info msg="runtime interface created"
May 14 18:13:18.021343 containerd[1587]: time="2025-05-14T18:13:18.020837895Z" level=info msg="created NRI interface"
May 14 18:13:18.021343 containerd[1587]: time="2025-05-14T18:13:18.020846371Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
May 14 18:13:18.021343 containerd[1587]: time="2025-05-14T18:13:18.020857101Z" level=info msg="Connect containerd service"
May 14 18:13:18.021343 containerd[1587]: time="2025-05-14T18:13:18.020880175Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 14 18:13:18.021628 containerd[1587]: time="2025-05-14T18:13:18.021601749Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 14 18:13:18.036271 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 14 18:13:18.039395 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 14 18:13:18.054731 systemd[1]: issuegen.service: Deactivated successfully.
May 14 18:13:18.055011 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 14 18:13:18.059670 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 14 18:13:18.082563 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 14 18:13:18.086724 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 14 18:13:18.090996 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
May 14 18:13:18.092913 systemd[1]: Reached target getty.target - Login Prompts.
May 14 18:13:18.115818 containerd[1587]: time="2025-05-14T18:13:18.115771251Z" level=info msg="Start subscribing containerd event"
May 14 18:13:18.115872 containerd[1587]: time="2025-05-14T18:13:18.115822808Z" level=info msg="Start recovering state"
May 14 18:13:18.115935 containerd[1587]: time="2025-05-14T18:13:18.115913839Z" level=info msg="Start event monitor"
May 14 18:13:18.115935 containerd[1587]: time="2025-05-14T18:13:18.115932804Z" level=info msg="Start cni network conf syncer for default"
May 14 18:13:18.115996 containerd[1587]: time="2025-05-14T18:13:18.115940348Z" level=info msg="Start streaming server"
May 14 18:13:18.115996 containerd[1587]: time="2025-05-14T18:13:18.115949496Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
May 14 18:13:18.115996 containerd[1587]: time="2025-05-14T18:13:18.115955908Z" level=info msg="runtime interface starting up..."
May 14 18:13:18.116311 containerd[1587]: time="2025-05-14T18:13:18.116291417Z" level=info msg="starting plugins..."
May 14 18:13:18.116355 containerd[1587]: time="2025-05-14T18:13:18.116324058Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
May 14 18:13:18.116499 containerd[1587]: time="2025-05-14T18:13:18.116262282Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 14 18:13:18.116499 containerd[1587]: time="2025-05-14T18:13:18.116458380Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 14 18:13:18.117162 containerd[1587]: time="2025-05-14T18:13:18.117143075Z" level=info msg="containerd successfully booted in 0.116959s"
May 14 18:13:18.117204 systemd[1]: Started containerd.service - containerd container runtime.
May 14 18:13:18.257463 tar[1585]: linux-amd64/LICENSE
May 14 18:13:18.257463 tar[1585]: linux-amd64/README.md
May 14 18:13:18.286871 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 14 18:13:18.968338 systemd-networkd[1518]: eth0: Gained IPv6LL
May 14 18:13:18.971689 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 14 18:13:18.973601 systemd[1]: Reached target network-online.target - Network is Online.
May 14 18:13:18.976738 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
May 14 18:13:18.979518 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 18:13:18.993440 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 14 18:13:19.021376 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 14 18:13:19.023411 systemd[1]: coreos-metadata.service: Deactivated successfully.
May 14 18:13:19.023682 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
May 14 18:13:19.026741 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 14 18:13:19.642517 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 18:13:19.644414 systemd[1]: Reached target multi-user.target - Multi-User System.
May 14 18:13:19.645896 systemd[1]: Startup finished in 2.926s (kernel) + 5.567s (initrd) + 4.084s (userspace) = 12.578s.
May 14 18:13:19.649783 (kubelet)[1695]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 14 18:13:20.087397 kubelet[1695]: E0514 18:13:20.087266 1695 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 18:13:20.091170 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 18:13:20.091373 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 18:13:20.091728 systemd[1]: kubelet.service: Consumed 913ms CPU time, 235.8M memory peak.
May 14 18:13:23.481213 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 14 18:13:23.482470 systemd[1]: Started sshd@0-10.0.0.145:22-10.0.0.1:52868.service - OpenSSH per-connection server daemon (10.0.0.1:52868).
May 14 18:13:23.682482 sshd[1708]: Accepted publickey for core from 10.0.0.1 port 52868 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM
May 14 18:13:23.684850 sshd-session[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:13:23.692786 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 14 18:13:23.694184 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 14 18:13:23.700519 systemd-logind[1574]: New session 1 of user core.
May 14 18:13:23.719456 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 14 18:13:23.723082 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 14 18:13:23.739636 (systemd)[1712]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 14 18:13:23.742537 systemd-logind[1574]: New session c1 of user core.
May 14 18:13:23.889189 systemd[1712]: Queued start job for default target default.target.
May 14 18:13:23.912408 systemd[1712]: Created slice app.slice - User Application Slice.
May 14 18:13:23.912435 systemd[1712]: Reached target paths.target - Paths.
May 14 18:13:23.912475 systemd[1712]: Reached target timers.target - Timers.
May 14 18:13:23.914071 systemd[1712]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 14 18:13:23.925179 systemd[1712]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 14 18:13:23.925329 systemd[1712]: Reached target sockets.target - Sockets.
May 14 18:13:23.925380 systemd[1712]: Reached target basic.target - Basic System.
May 14 18:13:23.925431 systemd[1712]: Reached target default.target - Main User Target.
May 14 18:13:23.925470 systemd[1712]: Startup finished in 176ms.
May 14 18:13:23.925984 systemd[1]: Started user@500.service - User Manager for UID 500.
May 14 18:13:23.927759 systemd[1]: Started session-1.scope - Session 1 of User core.
May 14 18:13:23.995016 systemd[1]: Started sshd@1-10.0.0.145:22-10.0.0.1:52884.service - OpenSSH per-connection server daemon (10.0.0.1:52884).
May 14 18:13:24.054463 sshd[1723]: Accepted publickey for core from 10.0.0.1 port 52884 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM
May 14 18:13:24.055961 sshd-session[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:13:24.060727 systemd-logind[1574]: New session 2 of user core.
May 14 18:13:24.071294 systemd[1]: Started session-2.scope - Session 2 of User core.
May 14 18:13:24.125705 sshd[1725]: Connection closed by 10.0.0.1 port 52884
May 14 18:13:24.126077 sshd-session[1723]: pam_unix(sshd:session): session closed for user core
May 14 18:13:24.138868 systemd[1]: sshd@1-10.0.0.145:22-10.0.0.1:52884.service: Deactivated successfully.
May 14 18:13:24.140811 systemd[1]: session-2.scope: Deactivated successfully.
May 14 18:13:24.141630 systemd-logind[1574]: Session 2 logged out. Waiting for processes to exit.
May 14 18:13:24.144906 systemd[1]: Started sshd@2-10.0.0.145:22-10.0.0.1:52896.service - OpenSSH per-connection server daemon (10.0.0.1:52896).
May 14 18:13:24.145614 systemd-logind[1574]: Removed session 2.
May 14 18:13:24.193769 sshd[1731]: Accepted publickey for core from 10.0.0.1 port 52896 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM
May 14 18:13:24.195447 sshd-session[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:13:24.200748 systemd-logind[1574]: New session 3 of user core.
May 14 18:13:24.210252 systemd[1]: Started session-3.scope - Session 3 of User core.
May 14 18:13:24.260368 sshd[1733]: Connection closed by 10.0.0.1 port 52896
May 14 18:13:24.260682 sshd-session[1731]: pam_unix(sshd:session): session closed for user core
May 14 18:13:24.277972 systemd[1]: sshd@2-10.0.0.145:22-10.0.0.1:52896.service: Deactivated successfully.
May 14 18:13:24.279786 systemd[1]: session-3.scope: Deactivated successfully.
May 14 18:13:24.280568 systemd-logind[1574]: Session 3 logged out. Waiting for processes to exit.
May 14 18:13:24.283221 systemd[1]: Started sshd@3-10.0.0.145:22-10.0.0.1:52910.service - OpenSSH per-connection server daemon (10.0.0.1:52910).
May 14 18:13:24.283819 systemd-logind[1574]: Removed session 3.
May 14 18:13:24.334381 sshd[1739]: Accepted publickey for core from 10.0.0.1 port 52910 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM
May 14 18:13:24.336350 sshd-session[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:13:24.341587 systemd-logind[1574]: New session 4 of user core.
May 14 18:13:24.351318 systemd[1]: Started session-4.scope - Session 4 of User core.
May 14 18:13:24.406207 sshd[1741]: Connection closed by 10.0.0.1 port 52910
May 14 18:13:24.406562 sshd-session[1739]: pam_unix(sshd:session): session closed for user core
May 14 18:13:24.425727 systemd[1]: sshd@3-10.0.0.145:22-10.0.0.1:52910.service: Deactivated successfully.
May 14 18:13:24.427809 systemd[1]: session-4.scope: Deactivated successfully.
May 14 18:13:24.428641 systemd-logind[1574]: Session 4 logged out. Waiting for processes to exit.
May 14 18:13:24.432398 systemd[1]: Started sshd@4-10.0.0.145:22-10.0.0.1:52920.service - OpenSSH per-connection server daemon (10.0.0.1:52920).
May 14 18:13:24.433123 systemd-logind[1574]: Removed session 4.
May 14 18:13:24.482724 sshd[1747]: Accepted publickey for core from 10.0.0.1 port 52920 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM
May 14 18:13:24.484298 sshd-session[1747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:13:24.488803 systemd-logind[1574]: New session 5 of user core.
May 14 18:13:24.502351 systemd[1]: Started session-5.scope - Session 5 of User core.
May 14 18:13:24.560477 sudo[1750]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 14 18:13:24.560823 sudo[1750]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 14 18:13:24.585345 sudo[1750]: pam_unix(sudo:session): session closed for user root
May 14 18:13:24.587075 sshd[1749]: Connection closed by 10.0.0.1 port 52920
May 14 18:13:24.587382 sshd-session[1747]: pam_unix(sshd:session): session closed for user core
May 14 18:13:24.597656 systemd[1]: sshd@4-10.0.0.145:22-10.0.0.1:52920.service: Deactivated successfully.
May 14 18:13:24.599406 systemd[1]: session-5.scope: Deactivated successfully.
May 14 18:13:24.600104 systemd-logind[1574]: Session 5 logged out. Waiting for processes to exit.
May 14 18:13:24.602780 systemd[1]: Started sshd@5-10.0.0.145:22-10.0.0.1:52928.service - OpenSSH per-connection server daemon (10.0.0.1:52928).
May 14 18:13:24.603312 systemd-logind[1574]: Removed session 5.
May 14 18:13:24.652671 sshd[1756]: Accepted publickey for core from 10.0.0.1 port 52928 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM
May 14 18:13:24.654116 sshd-session[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:13:24.659299 systemd-logind[1574]: New session 6 of user core.
May 14 18:13:24.669302 systemd[1]: Started session-6.scope - Session 6 of User core.
May 14 18:13:24.724741 sudo[1761]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 14 18:13:24.725153 sudo[1761]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 14 18:13:25.081820 sudo[1761]: pam_unix(sudo:session): session closed for user root
May 14 18:13:25.088649 sudo[1760]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 14 18:13:25.088960 sudo[1760]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 14 18:13:25.098963 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 14 18:13:25.144118 augenrules[1783]: No rules
May 14 18:13:25.146131 systemd[1]: audit-rules.service: Deactivated successfully.
May 14 18:13:25.146433 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 14 18:13:25.147596 sudo[1760]: pam_unix(sudo:session): session closed for user root
May 14 18:13:25.149136 sshd[1759]: Connection closed by 10.0.0.1 port 52928
May 14 18:13:25.149764 sshd-session[1756]: pam_unix(sshd:session): session closed for user core
May 14 18:13:25.158684 systemd[1]: sshd@5-10.0.0.145:22-10.0.0.1:52928.service: Deactivated successfully.
May 14 18:13:25.160350 systemd[1]: session-6.scope: Deactivated successfully.
May 14 18:13:25.161194 systemd-logind[1574]: Session 6 logged out. Waiting for processes to exit.
May 14 18:13:25.163905 systemd[1]: Started sshd@6-10.0.0.145:22-10.0.0.1:52938.service - OpenSSH per-connection server daemon (10.0.0.1:52938).
May 14 18:13:25.164701 systemd-logind[1574]: Removed session 6.
May 14 18:13:25.211335 sshd[1792]: Accepted publickey for core from 10.0.0.1 port 52938 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM
May 14 18:13:25.212963 sshd-session[1792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 18:13:25.217549 systemd-logind[1574]: New session 7 of user core.
May 14 18:13:25.227256 systemd[1]: Started session-7.scope - Session 7 of User core.
May 14 18:13:25.281584 sudo[1795]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 14 18:13:25.281921 sudo[1795]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 14 18:13:25.608256 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 14 18:13:25.629663 (dockerd)[1815]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 14 18:13:25.874236 dockerd[1815]: time="2025-05-14T18:13:25.874054663Z" level=info msg="Starting up"
May 14 18:13:25.875750 dockerd[1815]: time="2025-05-14T18:13:25.875693849Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
May 14 18:13:25.946018 dockerd[1815]: time="2025-05-14T18:13:25.945949238Z" level=info msg="Loading containers: start."
May 14 18:13:25.959116 kernel: Initializing XFRM netlink socket
May 14 18:13:26.205171 systemd-networkd[1518]: docker0: Link UP
May 14 18:13:26.212408 dockerd[1815]: time="2025-05-14T18:13:26.212374907Z" level=info msg="Loading containers: done."
May 14 18:13:26.225506 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck439577339-merged.mount: Deactivated successfully.
May 14 18:13:26.227224 dockerd[1815]: time="2025-05-14T18:13:26.227183745Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 14 18:13:26.227296 dockerd[1815]: time="2025-05-14T18:13:26.227277060Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
May 14 18:13:26.227417 dockerd[1815]: time="2025-05-14T18:13:26.227391094Z" level=info msg="Initializing buildkit"
May 14 18:13:26.257029 dockerd[1815]: time="2025-05-14T18:13:26.256980887Z" level=info msg="Completed buildkit initialization"
May 14 18:13:26.260869 dockerd[1815]: time="2025-05-14T18:13:26.260843956Z" level=info msg="Daemon has completed initialization"
May 14 18:13:26.260947 dockerd[1815]: time="2025-05-14T18:13:26.260900943Z" level=info msg="API listen on /run/docker.sock"
May 14 18:13:26.261041 systemd[1]: Started docker.service - Docker Application Container Engine.
May 14 18:13:26.922804 containerd[1587]: time="2025-05-14T18:13:26.922763801Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\""
May 14 18:13:27.551924 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1109030895.mount: Deactivated successfully.
May 14 18:13:28.506067 containerd[1587]: time="2025-05-14T18:13:28.505991564Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:13:28.519681 containerd[1587]: time="2025-05-14T18:13:28.519625778Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=27960987"
May 14 18:13:28.531441 containerd[1587]: time="2025-05-14T18:13:28.531403047Z" level=info msg="ImageCreate event name:\"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:13:28.565179 containerd[1587]: time="2025-05-14T18:13:28.565070452Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:13:28.566255 containerd[1587]: time="2025-05-14T18:13:28.566217604Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"27957787\" in 1.643413909s"
May 14 18:13:28.566309 containerd[1587]: time="2025-05-14T18:13:28.566265724Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\""
May 14 18:13:28.567905 containerd[1587]: time="2025-05-14T18:13:28.567871347Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\""
May 14 18:13:29.714806 containerd[1587]: time="2025-05-14T18:13:29.714734534Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:13:29.715606 containerd[1587]: time="2025-05-14T18:13:29.715576454Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=24713776"
May 14 18:13:29.716628 containerd[1587]: time="2025-05-14T18:13:29.716598682Z" level=info msg="ImageCreate event name:\"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:13:29.719070 containerd[1587]: time="2025-05-14T18:13:29.719034212Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:13:29.719803 containerd[1587]: time="2025-05-14T18:13:29.719762278Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"26202149\" in 1.151854994s"
May 14 18:13:29.719803 containerd[1587]: time="2025-05-14T18:13:29.719789118Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\""
May 14 18:13:29.720221 containerd[1587]: time="2025-05-14T18:13:29.720195240Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\""
May 14 18:13:30.278403 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 14 18:13:30.279964 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 18:13:30.492046 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 18:13:30.503472 (kubelet)[2088]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 14 18:13:30.786655 kubelet[2088]: E0514 18:13:30.786607 2088 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 18:13:30.794107 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 18:13:30.794307 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 18:13:30.794708 systemd[1]: kubelet.service: Consumed 202ms CPU time, 95.9M memory peak.
May 14 18:13:32.424728 containerd[1587]: time="2025-05-14T18:13:32.424651468Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:13:32.425619 containerd[1587]: time="2025-05-14T18:13:32.425555274Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=18780386"
May 14 18:13:32.426912 containerd[1587]: time="2025-05-14T18:13:32.426881703Z" level=info msg="ImageCreate event name:\"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:13:32.429731 containerd[1587]: time="2025-05-14T18:13:32.429701555Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:13:32.430944 containerd[1587]: time="2025-05-14T18:13:32.430898901Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"20268777\" in 2.71067648s"
May 14 18:13:32.430944 containerd[1587]: time="2025-05-14T18:13:32.430930941Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\""
May 14 18:13:32.431610 containerd[1587]: time="2025-05-14T18:13:32.431499809Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\""
May 14 18:13:34.152336 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3847240225.mount: Deactivated successfully.
May 14 18:13:34.436173 containerd[1587]: time="2025-05-14T18:13:34.436030367Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:13:34.436985 containerd[1587]: time="2025-05-14T18:13:34.436953739Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=30354625"
May 14 18:13:34.438134 containerd[1587]: time="2025-05-14T18:13:34.438101223Z" level=info msg="ImageCreate event name:\"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:13:34.439994 containerd[1587]: time="2025-05-14T18:13:34.439930345Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:13:34.440520 containerd[1587]: time="2025-05-14T18:13:34.440453216Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"30353644\" in 2.00891755s"
May 14 18:13:34.440520 containerd[1587]: time="2025-05-14T18:13:34.440484755Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\""
May 14 18:13:34.440962 containerd[1587]: time="2025-05-14T18:13:34.440897580Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
May 14 18:13:35.605242 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1942756118.mount: Deactivated successfully.
May 14 18:13:36.744613 containerd[1587]: time="2025-05-14T18:13:36.744534209Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:13:36.745472 containerd[1587]: time="2025-05-14T18:13:36.745436762Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
May 14 18:13:36.746951 containerd[1587]: time="2025-05-14T18:13:36.746911359Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:13:36.749492 containerd[1587]: time="2025-05-14T18:13:36.749444072Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:13:36.750308 containerd[1587]: time="2025-05-14T18:13:36.750243772Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.309317419s"
May 14 18:13:36.750308 containerd[1587]: time="2025-05-14T18:13:36.750281824Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
May 14 18:13:36.750875 containerd[1587]: time="2025-05-14T18:13:36.750831615Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 14 18:13:37.628744 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2917675301.mount: Deactivated successfully.
May 14 18:13:37.635401 containerd[1587]: time="2025-05-14T18:13:37.635345342Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 14 18:13:37.636196 containerd[1587]: time="2025-05-14T18:13:37.636148248Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
May 14 18:13:37.638489 containerd[1587]: time="2025-05-14T18:13:37.638445459Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 14 18:13:37.640441 containerd[1587]: time="2025-05-14T18:13:37.640389978Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 14 18:13:37.641012 containerd[1587]: time="2025-05-14T18:13:37.640973262Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 890.110399ms"
May 14 18:13:37.641012 containerd[1587]: time="2025-05-14T18:13:37.641008178Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
May 14 18:13:37.641551 containerd[1587]: time="2025-05-14T18:13:37.641485223Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
May 14 18:13:38.141406 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3835599307.mount: Deactivated successfully.
May 14 18:13:39.759475 containerd[1587]: time="2025-05-14T18:13:39.759401621Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:13:39.760264 containerd[1587]: time="2025-05-14T18:13:39.760190301Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013"
May 14 18:13:39.763111 containerd[1587]: time="2025-05-14T18:13:39.761963578Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:13:39.766990 containerd[1587]: time="2025-05-14T18:13:39.766936480Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:13:39.768170 containerd[1587]: time="2025-05-14T18:13:39.768127515Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.126608819s"
May 14 18:13:39.768213 containerd[1587]: time="2025-05-14T18:13:39.768169904Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
May 14 18:13:41.028527 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 14 18:13:41.030311 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 18:13:41.223317 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 18:13:41.241671 (kubelet)[2241]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 14 18:13:41.368356 kubelet[2241]: E0514 18:13:41.368197 2241 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 18:13:41.372736 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 18:13:41.372969 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 18:13:41.373442 systemd[1]: kubelet.service: Consumed 196ms CPU time, 96.3M memory peak.
May 14 18:13:42.557445 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 18:13:42.557646 systemd[1]: kubelet.service: Consumed 196ms CPU time, 96.3M memory peak.
May 14 18:13:42.560513 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 18:13:42.588003 systemd[1]: Reload requested from client PID 2258 ('systemctl') (unit session-7.scope)...
May 14 18:13:42.588024 systemd[1]: Reloading...
May 14 18:13:42.690131 zram_generator::config[2309]: No configuration found.
May 14 18:13:42.920833 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 18:13:43.037436 systemd[1]: Reloading finished in 448 ms.
May 14 18:13:43.100065 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
May 14 18:13:43.100213 systemd[1]: kubelet.service: Failed with result 'signal'.
May 14 18:13:43.100541 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 18:13:43.100593 systemd[1]: kubelet.service: Consumed 144ms CPU time, 83.5M memory peak.
May 14 18:13:43.102402 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 18:13:43.263861 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 18:13:43.274504 (kubelet)[2348]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 14 18:13:43.313539 kubelet[2348]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 14 18:13:43.313539 kubelet[2348]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 14 18:13:43.313539 kubelet[2348]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 14 18:13:43.314052 kubelet[2348]: I0514 18:13:43.313599 2348 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 14 18:13:43.629867 kubelet[2348]: I0514 18:13:43.629814 2348 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
May 14 18:13:43.629867 kubelet[2348]: I0514 18:13:43.629847 2348 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 14 18:13:43.630106 kubelet[2348]: I0514 18:13:43.630077 2348 server.go:929] "Client rotation is on, will bootstrap in background"
May 14 18:13:43.651399 kubelet[2348]: I0514 18:13:43.651279 2348 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 14 18:13:43.651399 kubelet[2348]: E0514 18:13:43.651329 2348 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.145:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.145:6443: connect: connection refused" logger="UnhandledError"
May 14 18:13:43.658346 kubelet[2348]: I0514 18:13:43.658314 2348 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 14 18:13:43.664473 kubelet[2348]: I0514 18:13:43.664438 2348 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 14 18:13:43.664632 kubelet[2348]: I0514 18:13:43.664593 2348 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
May 14 18:13:43.664825 kubelet[2348]: I0514 18:13:43.664787 2348 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 14 18:13:43.664996 kubelet[2348]: I0514 18:13:43.664818 2348 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 14 18:13:43.665154 kubelet[2348]: I0514 18:13:43.665005 2348 topology_manager.go:138] "Creating topology manager with none policy"
May 14 18:13:43.665154 kubelet[2348]: I0514 18:13:43.665013 2348 container_manager_linux.go:300] "Creating device plugin manager"
May 14 18:13:43.665154 kubelet[2348]: I0514 18:13:43.665152 2348 state_mem.go:36] "Initialized new in-memory state store"
May 14 18:13:43.666585 kubelet[2348]: I0514 18:13:43.666560 2348 kubelet.go:408] "Attempting to sync node with API server"
May 14 18:13:43.666585 kubelet[2348]: I0514 18:13:43.666579 2348 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
May 14 18:13:43.666661 kubelet[2348]: I0514 18:13:43.666615 2348 kubelet.go:314] "Adding apiserver pod source"
May 14 18:13:43.666661 kubelet[2348]: I0514 18:13:43.666634 2348 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 14 18:13:43.671186 kubelet[2348]: W0514 18:13:43.671121 2348 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.145:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.145:6443: connect: connection refused
May 14 18:13:43.671253 kubelet[2348]: E0514 18:13:43.671192 2348 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.145:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.145:6443: connect: connection refused" logger="UnhandledError"
May 14 18:13:43.672193 kubelet[2348]: W0514 18:13:43.672069 2348 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.145:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.145:6443: connect: connection refused
May 14 18:13:43.672193 kubelet[2348]: E0514 18:13:43.672134 2348 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.145:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.145:6443: connect: connection refused" logger="UnhandledError"
May 14 18:13:43.673054 kubelet[2348]: I0514 18:13:43.673027 2348 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
May 14 18:13:43.674557 kubelet[2348]: I0514 18:13:43.674533 2348 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 14 18:13:43.675001 kubelet[2348]: W0514 18:13:43.674981 2348 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 14 18:13:43.675736 kubelet[2348]: I0514 18:13:43.675719 2348 server.go:1269] "Started kubelet"
May 14 18:13:43.675844 kubelet[2348]: I0514 18:13:43.675794 2348 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 14 18:13:43.675990 kubelet[2348]: I0514 18:13:43.675943 2348 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 14 18:13:43.676311 kubelet[2348]: I0514 18:13:43.676289 2348 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 14 18:13:43.677110 kubelet[2348]: I0514 18:13:43.676838 2348 server.go:460] "Adding debug handlers to kubelet server"
May 14 18:13:43.677818 kubelet[2348]: I0514 18:13:43.677802 2348 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 14 18:13:43.678586 kubelet[2348]: I0514 18:13:43.678567 2348 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 14 18:13:43.679487 kubelet[2348]: E0514 18:13:43.679462 2348 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 14 18:13:43.679540 kubelet[2348]: I0514 18:13:43.679498 2348 volume_manager.go:289] "Starting Kubelet Volume Manager"
May 14 18:13:43.679642 kubelet[2348]: I0514 18:13:43.679628 2348 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
May 14 18:13:43.679684 kubelet[2348]: I0514 18:13:43.679673 2348 reconciler.go:26] "Reconciler: start to sync state"
May 14 18:13:43.680075 kubelet[2348]: W0514 18:13:43.679965 2348 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.145:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.145:6443: connect: connection refused
May 14 18:13:43.680075 kubelet[2348]: E0514 18:13:43.680008 2348 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.145:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.145:6443: connect: connection refused" logger="UnhandledError"
May 14 18:13:43.680263 kubelet[2348]: E0514 18:13:43.680235 2348 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.145:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.145:6443: connect: connection refused" interval="200ms"
May 14 18:13:43.681228 kubelet[2348]: I0514 18:13:43.680786 2348 factory.go:221] Registration of the systemd container factory successfully
May 14 18:13:43.681228 kubelet[2348]: I0514 18:13:43.680861 2348 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 14 18:13:43.682147 kubelet[2348]: E0514 18:13:43.681792 2348 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 14 18:13:43.682326 kubelet[2348]: E0514 18:13:43.679603 2348 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.145:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.145:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f775d7242ef4d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-14 18:13:43.675694925 +0000 UTC m=+0.397159523,LastTimestamp:2025-05-14 18:13:43.675694925 +0000 UTC m=+0.397159523,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 14 18:13:43.682442 kubelet[2348]: I0514 18:13:43.682340 2348 factory.go:221] Registration of the containerd container factory successfully
May 14 18:13:43.694626 kubelet[2348]: I0514 18:13:43.694568 2348 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 14 18:13:43.695870 kubelet[2348]: I0514 18:13:43.695839 2348 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 14 18:13:43.695915 kubelet[2348]: I0514 18:13:43.695891 2348 status_manager.go:217] "Starting to sync pod status with apiserver"
May 14 18:13:43.695915 kubelet[2348]: I0514 18:13:43.695912 2348 kubelet.go:2321] "Starting kubelet main sync loop"
May 14 18:13:43.695972 kubelet[2348]: E0514 18:13:43.695949 2348 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 14 18:13:43.697638 kubelet[2348]: W0514 18:13:43.697498 2348 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.145:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.145:6443: connect: connection refused
May 14 18:13:43.697638 kubelet[2348]: E0514 18:13:43.697571 2348 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.145:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.145:6443: connect: connection refused" logger="UnhandledError"
May 14 18:13:43.699143 kubelet[2348]: I0514 18:13:43.698931 2348 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 14 18:13:43.699143 kubelet[2348]: I0514 18:13:43.698950 2348 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 14 18:13:43.699143 kubelet[2348]: I0514 18:13:43.698970 2348 state_mem.go:36] "Initialized new in-memory state store"
May 14 18:13:43.779754 kubelet[2348]: E0514 18:13:43.779682 2348 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 14 18:13:43.796414 kubelet[2348]: E0514 18:13:43.796345 2348 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 14 18:13:43.880909 kubelet[2348]: E0514 18:13:43.880757 2348 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 14 18:13:43.881280 kubelet[2348]: E0514 18:13:43.881052 2348 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.145:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.145:6443: connect: connection refused" interval="400ms"
May 14 18:13:43.943119 kubelet[2348]: I0514 18:13:43.943018 2348 policy_none.go:49] "None policy: Start"
May 14 18:13:43.943925 kubelet[2348]: I0514 18:13:43.943899 2348 memory_manager.go:170] "Starting memorymanager" policy="None"
May 14 18:13:43.944001 kubelet[2348]: I0514 18:13:43.943928 2348 state_mem.go:35] "Initializing new in-memory state store"
May 14 18:13:43.955618 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
May 14 18:13:43.978676 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
May 14 18:13:43.981688 kubelet[2348]: E0514 18:13:43.981648 2348 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 14 18:13:43.982665 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
May 14 18:13:43.996511 kubelet[2348]: E0514 18:13:43.996450 2348 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 14 18:13:44.001421 kubelet[2348]: I0514 18:13:44.001384 2348 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 14 18:13:44.001657 kubelet[2348]: I0514 18:13:44.001646 2348 eviction_manager.go:189] "Eviction manager: starting control loop"
May 14 18:13:44.001703 kubelet[2348]: I0514 18:13:44.001656 2348 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 14 18:13:44.001986 kubelet[2348]: I0514 18:13:44.001922 2348 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 14 18:13:44.003354 kubelet[2348]: E0514 18:13:44.003328 2348 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
May 14 18:13:44.103786 kubelet[2348]: I0514 18:13:44.103746 2348 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 14 18:13:44.104283 kubelet[2348]: E0514 18:13:44.104226 2348 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.145:6443/api/v1/nodes\": dial tcp 10.0.0.145:6443: connect: connection refused" node="localhost"
May 14 18:13:44.282048 kubelet[2348]: E0514 18:13:44.281984 2348 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.145:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.145:6443: connect: connection refused" interval="800ms"
May 14 18:13:44.306510 kubelet[2348]: I0514 18:13:44.306484 2348 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 14 18:13:44.306926 kubelet[2348]: E0514 18:13:44.306874 2348 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.145:6443/api/v1/nodes\": dial tcp 10.0.0.145:6443: connect: connection refused" node="localhost"
May 14 18:13:44.405761 systemd[1]: Created slice kubepods-burstable-pod3a673eff765630e384100cb47c695ff5.slice - libcontainer container kubepods-burstable-pod3a673eff765630e384100cb47c695ff5.slice.
May 14 18:13:44.441619 systemd[1]: Created slice kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice - libcontainer container kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice.
May 14 18:13:44.454127 systemd[1]: Created slice kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice - libcontainer container kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice.
May 14 18:13:44.486410 kubelet[2348]: I0514 18:13:44.486336 2348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3a673eff765630e384100cb47c695ff5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3a673eff765630e384100cb47c695ff5\") " pod="kube-system/kube-apiserver-localhost"
May 14 18:13:44.486410 kubelet[2348]: I0514 18:13:44.486393 2348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 14 18:13:44.486410 kubelet[2348]: I0514 18:13:44.486420 2348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 14 18:13:44.486958 kubelet[2348]: I0514 18:13:44.486445 2348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 14 18:13:44.486958 kubelet[2348]: I0514 18:13:44.486466 2348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3a673eff765630e384100cb47c695ff5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3a673eff765630e384100cb47c695ff5\") " pod="kube-system/kube-apiserver-localhost"
May 14 18:13:44.486958 kubelet[2348]: I0514 18:13:44.486485 2348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3a673eff765630e384100cb47c695ff5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3a673eff765630e384100cb47c695ff5\") " pod="kube-system/kube-apiserver-localhost"
May 14 18:13:44.486958 kubelet[2348]: I0514 18:13:44.486505 2348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 14 18:13:44.486958 kubelet[2348]: I0514 18:13:44.486523 2348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 14 18:13:44.487131 kubelet[2348]: I0514 18:13:44.486543 2348
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 14 18:13:44.708726 kubelet[2348]: I0514 18:13:44.708605 2348 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 14 18:13:44.709225 kubelet[2348]: E0514 18:13:44.709154 2348 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.145:6443/api/v1/nodes\": dial tcp 10.0.0.145:6443: connect: connection refused" node="localhost" May 14 18:13:44.740111 containerd[1587]: time="2025-05-14T18:13:44.740038178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3a673eff765630e384100cb47c695ff5,Namespace:kube-system,Attempt:0,}" May 14 18:13:44.752961 containerd[1587]: time="2025-05-14T18:13:44.752902517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,}" May 14 18:13:44.757665 containerd[1587]: time="2025-05-14T18:13:44.757622163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,}" May 14 18:13:45.003123 kubelet[2348]: W0514 18:13:45.002932 2348 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.145:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.145:6443: connect: connection refused May 14 18:13:45.003123 kubelet[2348]: E0514 18:13:45.003012 2348 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://10.0.0.145:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.145:6443: connect: connection refused" logger="UnhandledError" May 14 18:13:45.036984 kubelet[2348]: W0514 18:13:45.036924 2348 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.145:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.145:6443: connect: connection refused May 14 18:13:45.036984 kubelet[2348]: E0514 18:13:45.036978 2348 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.145:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.145:6443: connect: connection refused" logger="UnhandledError" May 14 18:13:45.039581 kubelet[2348]: W0514 18:13:45.039523 2348 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.145:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.145:6443: connect: connection refused May 14 18:13:45.039581 kubelet[2348]: E0514 18:13:45.039573 2348 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.145:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.145:6443: connect: connection refused" logger="UnhandledError" May 14 18:13:45.083585 kubelet[2348]: E0514 18:13:45.083533 2348 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.145:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.145:6443: connect: connection refused" interval="1.6s" May 14 18:13:45.119580 containerd[1587]: time="2025-05-14T18:13:45.119518826Z" 
level=info msg="connecting to shim bfd836886d420c11586bd0d4066d1e8889845143df00f296e27beb7fb3b682e5" address="unix:///run/containerd/s/1caa8fd0a89a5927934aa6f64634ce4b5afb1aeafd740f86813ed7f503193bcf" namespace=k8s.io protocol=ttrpc version=3 May 14 18:13:45.123627 containerd[1587]: time="2025-05-14T18:13:45.123559148Z" level=info msg="connecting to shim c4afdccfbe97d2368941a36113a2eca885dcf3d685e6bc407b00c6eb03f50d01" address="unix:///run/containerd/s/84ac9023b2df1d902ece379906e55ab1391b01a6a66c4305b4f161596b2f3e88" namespace=k8s.io protocol=ttrpc version=3 May 14 18:13:45.128192 containerd[1587]: time="2025-05-14T18:13:45.128138401Z" level=info msg="connecting to shim 1c59cbdc2d325330750e673ec4b229dcbdef9dbfed0e61334e752c6a86c1ba10" address="unix:///run/containerd/s/944fc1df5cc83a8452bf8b966319b0d3448f108381d52f08bdb9a8f48170fd27" namespace=k8s.io protocol=ttrpc version=3 May 14 18:13:45.162364 systemd[1]: Started cri-containerd-c4afdccfbe97d2368941a36113a2eca885dcf3d685e6bc407b00c6eb03f50d01.scope - libcontainer container c4afdccfbe97d2368941a36113a2eca885dcf3d685e6bc407b00c6eb03f50d01. May 14 18:13:45.167777 systemd[1]: Started cri-containerd-1c59cbdc2d325330750e673ec4b229dcbdef9dbfed0e61334e752c6a86c1ba10.scope - libcontainer container 1c59cbdc2d325330750e673ec4b229dcbdef9dbfed0e61334e752c6a86c1ba10. May 14 18:13:45.170258 systemd[1]: Started cri-containerd-bfd836886d420c11586bd0d4066d1e8889845143df00f296e27beb7fb3b682e5.scope - libcontainer container bfd836886d420c11586bd0d4066d1e8889845143df00f296e27beb7fb3b682e5. 
May 14 18:13:45.176250 kubelet[2348]: W0514 18:13:45.176179 2348 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.145:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.145:6443: connect: connection refused May 14 18:13:45.176250 kubelet[2348]: E0514 18:13:45.176249 2348 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.145:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.145:6443: connect: connection refused" logger="UnhandledError" May 14 18:13:45.214762 containerd[1587]: time="2025-05-14T18:13:45.214704596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,} returns sandbox id \"c4afdccfbe97d2368941a36113a2eca885dcf3d685e6bc407b00c6eb03f50d01\"" May 14 18:13:45.216944 containerd[1587]: time="2025-05-14T18:13:45.216906578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"bfd836886d420c11586bd0d4066d1e8889845143df00f296e27beb7fb3b682e5\"" May 14 18:13:45.218328 containerd[1587]: time="2025-05-14T18:13:45.218302677Z" level=info msg="CreateContainer within sandbox \"c4afdccfbe97d2368941a36113a2eca885dcf3d685e6bc407b00c6eb03f50d01\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 14 18:13:45.221078 containerd[1587]: time="2025-05-14T18:13:45.221043029Z" level=info msg="CreateContainer within sandbox \"bfd836886d420c11586bd0d4066d1e8889845143df00f296e27beb7fb3b682e5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 14 18:13:45.227853 containerd[1587]: time="2025-05-14T18:13:45.227815788Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3a673eff765630e384100cb47c695ff5,Namespace:kube-system,Attempt:0,} returns sandbox id \"1c59cbdc2d325330750e673ec4b229dcbdef9dbfed0e61334e752c6a86c1ba10\"" May 14 18:13:45.230126 containerd[1587]: time="2025-05-14T18:13:45.229758433Z" level=info msg="CreateContainer within sandbox \"1c59cbdc2d325330750e673ec4b229dcbdef9dbfed0e61334e752c6a86c1ba10\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 14 18:13:45.238929 containerd[1587]: time="2025-05-14T18:13:45.238886662Z" level=info msg="Container 8c412b052dab8ac4c8944dafafe2312128216d32a20719f021ba000c5d58bb11: CDI devices from CRI Config.CDIDevices: []" May 14 18:13:45.242750 containerd[1587]: time="2025-05-14T18:13:45.242712871Z" level=info msg="Container 0951e1334cf586bc89688813a97ab6ae02772dd4eb1e1b02381c61f826a9d1e0: CDI devices from CRI Config.CDIDevices: []" May 14 18:13:45.249841 containerd[1587]: time="2025-05-14T18:13:45.249774913Z" level=info msg="Container 49c1924040cc981ce3034f03f970ef3058e0e4cd551c83f8f8064f7dd7285c5c: CDI devices from CRI Config.CDIDevices: []" May 14 18:13:45.250429 containerd[1587]: time="2025-05-14T18:13:45.250398072Z" level=info msg="CreateContainer within sandbox \"c4afdccfbe97d2368941a36113a2eca885dcf3d685e6bc407b00c6eb03f50d01\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8c412b052dab8ac4c8944dafafe2312128216d32a20719f021ba000c5d58bb11\"" May 14 18:13:45.251346 containerd[1587]: time="2025-05-14T18:13:45.251311706Z" level=info msg="StartContainer for \"8c412b052dab8ac4c8944dafafe2312128216d32a20719f021ba000c5d58bb11\"" May 14 18:13:45.252498 containerd[1587]: time="2025-05-14T18:13:45.252459510Z" level=info msg="connecting to shim 8c412b052dab8ac4c8944dafafe2312128216d32a20719f021ba000c5d58bb11" address="unix:///run/containerd/s/84ac9023b2df1d902ece379906e55ab1391b01a6a66c4305b4f161596b2f3e88" protocol=ttrpc version=3 May 14 18:13:45.259876 
containerd[1587]: time="2025-05-14T18:13:45.259786038Z" level=info msg="CreateContainer within sandbox \"1c59cbdc2d325330750e673ec4b229dcbdef9dbfed0e61334e752c6a86c1ba10\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"49c1924040cc981ce3034f03f970ef3058e0e4cd551c83f8f8064f7dd7285c5c\"" May 14 18:13:45.260783 containerd[1587]: time="2025-05-14T18:13:45.260691116Z" level=info msg="StartContainer for \"49c1924040cc981ce3034f03f970ef3058e0e4cd551c83f8f8064f7dd7285c5c\"" May 14 18:13:45.261962 containerd[1587]: time="2025-05-14T18:13:45.261936914Z" level=info msg="CreateContainer within sandbox \"bfd836886d420c11586bd0d4066d1e8889845143df00f296e27beb7fb3b682e5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0951e1334cf586bc89688813a97ab6ae02772dd4eb1e1b02381c61f826a9d1e0\"" May 14 18:13:45.262449 containerd[1587]: time="2025-05-14T18:13:45.262404832Z" level=info msg="connecting to shim 49c1924040cc981ce3034f03f970ef3058e0e4cd551c83f8f8064f7dd7285c5c" address="unix:///run/containerd/s/944fc1df5cc83a8452bf8b966319b0d3448f108381d52f08bdb9a8f48170fd27" protocol=ttrpc version=3 May 14 18:13:45.262574 containerd[1587]: time="2025-05-14T18:13:45.262448915Z" level=info msg="StartContainer for \"0951e1334cf586bc89688813a97ab6ae02772dd4eb1e1b02381c61f826a9d1e0\"" May 14 18:13:45.267972 containerd[1587]: time="2025-05-14T18:13:45.267328431Z" level=info msg="connecting to shim 0951e1334cf586bc89688813a97ab6ae02772dd4eb1e1b02381c61f826a9d1e0" address="unix:///run/containerd/s/1caa8fd0a89a5927934aa6f64634ce4b5afb1aeafd740f86813ed7f503193bcf" protocol=ttrpc version=3 May 14 18:13:45.275334 systemd[1]: Started cri-containerd-8c412b052dab8ac4c8944dafafe2312128216d32a20719f021ba000c5d58bb11.scope - libcontainer container 8c412b052dab8ac4c8944dafafe2312128216d32a20719f021ba000c5d58bb11. 
May 14 18:13:45.299363 systemd[1]: Started cri-containerd-0951e1334cf586bc89688813a97ab6ae02772dd4eb1e1b02381c61f826a9d1e0.scope - libcontainer container 0951e1334cf586bc89688813a97ab6ae02772dd4eb1e1b02381c61f826a9d1e0. May 14 18:13:45.300958 systemd[1]: Started cri-containerd-49c1924040cc981ce3034f03f970ef3058e0e4cd551c83f8f8064f7dd7285c5c.scope - libcontainer container 49c1924040cc981ce3034f03f970ef3058e0e4cd551c83f8f8064f7dd7285c5c. May 14 18:13:45.348183 containerd[1587]: time="2025-05-14T18:13:45.348120261Z" level=info msg="StartContainer for \"8c412b052dab8ac4c8944dafafe2312128216d32a20719f021ba000c5d58bb11\" returns successfully" May 14 18:13:45.363361 containerd[1587]: time="2025-05-14T18:13:45.363296287Z" level=info msg="StartContainer for \"49c1924040cc981ce3034f03f970ef3058e0e4cd551c83f8f8064f7dd7285c5c\" returns successfully" May 14 18:13:45.373864 containerd[1587]: time="2025-05-14T18:13:45.373821046Z" level=info msg="StartContainer for \"0951e1334cf586bc89688813a97ab6ae02772dd4eb1e1b02381c61f826a9d1e0\" returns successfully" May 14 18:13:45.512209 kubelet[2348]: I0514 18:13:45.512063 2348 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 14 18:13:46.470962 kubelet[2348]: I0514 18:13:46.470905 2348 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 14 18:13:46.470962 kubelet[2348]: E0514 18:13:46.470962 2348 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 14 18:13:46.479894 kubelet[2348]: E0514 18:13:46.479822 2348 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 18:13:46.580487 kubelet[2348]: E0514 18:13:46.580407 2348 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 18:13:46.680658 kubelet[2348]: E0514 18:13:46.680584 2348 kubelet_node_status.go:453] "Error getting the 
current node from lister" err="node \"localhost\" not found" May 14 18:13:46.780718 kubelet[2348]: E0514 18:13:46.780676 2348 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 18:13:46.881520 kubelet[2348]: E0514 18:13:46.881468 2348 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 18:13:46.982241 kubelet[2348]: E0514 18:13:46.982179 2348 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 18:13:47.083301 kubelet[2348]: E0514 18:13:47.083135 2348 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 18:13:47.183463 kubelet[2348]: E0514 18:13:47.183413 2348 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 18:13:47.283732 kubelet[2348]: E0514 18:13:47.283689 2348 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 18:13:47.384397 kubelet[2348]: E0514 18:13:47.384290 2348 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 18:13:47.485178 kubelet[2348]: E0514 18:13:47.485130 2348 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 18:13:47.585849 kubelet[2348]: E0514 18:13:47.585783 2348 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 18:13:47.686767 kubelet[2348]: E0514 18:13:47.686623 2348 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 18:13:47.787066 kubelet[2348]: E0514 18:13:47.786938 2348 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 18:13:47.887540 
kubelet[2348]: E0514 18:13:47.887498 2348 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 18:13:47.988333 kubelet[2348]: E0514 18:13:47.988158 2348 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 18:13:48.088937 kubelet[2348]: E0514 18:13:48.088896 2348 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 18:13:48.189384 kubelet[2348]: E0514 18:13:48.189304 2348 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 18:13:48.674543 kubelet[2348]: I0514 18:13:48.674497 2348 apiserver.go:52] "Watching apiserver" May 14 18:13:48.680174 kubelet[2348]: I0514 18:13:48.680145 2348 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 14 18:13:48.708541 systemd[1]: Reload requested from client PID 2622 ('systemctl') (unit session-7.scope)... May 14 18:13:48.708563 systemd[1]: Reloading... May 14 18:13:48.792123 zram_generator::config[2668]: No configuration found. May 14 18:13:48.883144 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 18:13:49.013047 systemd[1]: Reloading finished in 304 ms. May 14 18:13:49.038388 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 14 18:13:49.056693 systemd[1]: kubelet.service: Deactivated successfully. May 14 18:13:49.056993 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 18:13:49.057067 systemd[1]: kubelet.service: Consumed 811ms CPU time, 116.8M memory peak. May 14 18:13:49.059083 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
May 14 18:13:49.281483 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 18:13:49.292546 (kubelet)[2710]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 18:13:49.335499 kubelet[2710]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 18:13:49.335499 kubelet[2710]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 14 18:13:49.335499 kubelet[2710]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 18:13:49.335945 kubelet[2710]: I0514 18:13:49.335544 2710 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 18:13:49.341727 kubelet[2710]: I0514 18:13:49.341689 2710 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 14 18:13:49.341727 kubelet[2710]: I0514 18:13:49.341716 2710 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 18:13:49.341938 kubelet[2710]: I0514 18:13:49.341914 2710 server.go:929] "Client rotation is on, will bootstrap in background" May 14 18:13:49.343019 kubelet[2710]: I0514 18:13:49.342996 2710 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
May 14 18:13:49.346162 kubelet[2710]: I0514 18:13:49.346132 2710 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 18:13:49.350074 kubelet[2710]: I0514 18:13:49.350040 2710 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 14 18:13:49.354611 kubelet[2710]: I0514 18:13:49.354570 2710 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 14 18:13:49.354772 kubelet[2710]: I0514 18:13:49.354749 2710 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 14 18:13:49.354925 kubelet[2710]: I0514 18:13:49.354886 2710 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 18:13:49.355122 kubelet[2710]: I0514 18:13:49.354917 2710 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"image
fs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 14 18:13:49.355122 kubelet[2710]: I0514 18:13:49.355125 2710 topology_manager.go:138] "Creating topology manager with none policy" May 14 18:13:49.355241 kubelet[2710]: I0514 18:13:49.355134 2710 container_manager_linux.go:300] "Creating device plugin manager" May 14 18:13:49.355241 kubelet[2710]: I0514 18:13:49.355168 2710 state_mem.go:36] "Initialized new in-memory state store" May 14 18:13:49.355301 kubelet[2710]: I0514 18:13:49.355288 2710 kubelet.go:408] "Attempting to sync node with API server" May 14 18:13:49.355326 kubelet[2710]: I0514 18:13:49.355303 2710 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 18:13:49.355347 kubelet[2710]: I0514 18:13:49.355336 2710 kubelet.go:314] "Adding apiserver pod source" May 14 18:13:49.355369 kubelet[2710]: I0514 18:13:49.355355 2710 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 18:13:49.356473 kubelet[2710]: I0514 18:13:49.356450 2710 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 14 18:13:49.356862 kubelet[2710]: I0514 18:13:49.356844 2710 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 18:13:49.357407 kubelet[2710]: I0514 18:13:49.357322 2710 server.go:1269] "Started kubelet" May 14 18:13:49.357596 kubelet[2710]: I0514 18:13:49.357543 2710 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" 
qps=100 burstTokens=10 May 14 18:13:49.357931 kubelet[2710]: I0514 18:13:49.357915 2710 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 18:13:49.359203 kubelet[2710]: I0514 18:13:49.358823 2710 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 18:13:49.359840 kubelet[2710]: I0514 18:13:49.359810 2710 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 14 18:13:49.365650 kubelet[2710]: I0514 18:13:49.365620 2710 volume_manager.go:289] "Starting Kubelet Volume Manager" May 14 18:13:49.365747 kubelet[2710]: I0514 18:13:49.365726 2710 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 14 18:13:49.365891 kubelet[2710]: I0514 18:13:49.365870 2710 reconciler.go:26] "Reconciler: start to sync state" May 14 18:13:49.367034 kubelet[2710]: E0514 18:13:49.367001 2710 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 18:13:49.367716 kubelet[2710]: I0514 18:13:49.367292 2710 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 14 18:13:49.371490 kubelet[2710]: I0514 18:13:49.369830 2710 server.go:460] "Adding debug handlers to kubelet server" May 14 18:13:49.371490 kubelet[2710]: I0514 18:13:49.370186 2710 factory.go:221] Registration of the systemd container factory successfully May 14 18:13:49.371490 kubelet[2710]: I0514 18:13:49.370306 2710 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 18:13:49.374643 kubelet[2710]: I0514 18:13:49.374603 2710 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" May 14 18:13:49.375171 kubelet[2710]: E0514 18:13:49.375136 2710 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 18:13:49.376081 kubelet[2710]: I0514 18:13:49.376028 2710 factory.go:221] Registration of the containerd container factory successfully May 14 18:13:49.377740 kubelet[2710]: I0514 18:13:49.377246 2710 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 14 18:13:49.377740 kubelet[2710]: I0514 18:13:49.377282 2710 status_manager.go:217] "Starting to sync pod status with apiserver" May 14 18:13:49.377740 kubelet[2710]: I0514 18:13:49.377311 2710 kubelet.go:2321] "Starting kubelet main sync loop" May 14 18:13:49.377740 kubelet[2710]: E0514 18:13:49.377368 2710 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 18:13:49.409479 kubelet[2710]: I0514 18:13:49.409445 2710 cpu_manager.go:214] "Starting CPU manager" policy="none" May 14 18:13:49.409479 kubelet[2710]: I0514 18:13:49.409466 2710 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 14 18:13:49.409479 kubelet[2710]: I0514 18:13:49.409487 2710 state_mem.go:36] "Initialized new in-memory state store" May 14 18:13:49.409692 kubelet[2710]: I0514 18:13:49.409641 2710 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 14 18:13:49.409692 kubelet[2710]: I0514 18:13:49.409652 2710 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 14 18:13:49.409692 kubelet[2710]: I0514 18:13:49.409671 2710 policy_none.go:49] "None policy: Start" May 14 18:13:49.410334 kubelet[2710]: I0514 18:13:49.410317 2710 memory_manager.go:170] "Starting memorymanager" policy="None" May 14 18:13:49.410382 kubelet[2710]: I0514 18:13:49.410338 2710 state_mem.go:35] "Initializing new in-memory state store" May 14 18:13:49.410546 kubelet[2710]: 
I0514 18:13:49.410519 2710 state_mem.go:75] "Updated machine memory state" May 14 18:13:49.415615 kubelet[2710]: I0514 18:13:49.415496 2710 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 18:13:49.415715 kubelet[2710]: I0514 18:13:49.415692 2710 eviction_manager.go:189] "Eviction manager: starting control loop" May 14 18:13:49.415741 kubelet[2710]: I0514 18:13:49.415710 2710 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 18:13:49.415999 kubelet[2710]: I0514 18:13:49.415913 2710 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 18:13:49.518012 kubelet[2710]: I0514 18:13:49.517976 2710 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 14 18:13:49.539178 kubelet[2710]: I0514 18:13:49.537406 2710 kubelet_node_status.go:111] "Node was previously registered" node="localhost" May 14 18:13:49.539178 kubelet[2710]: I0514 18:13:49.537504 2710 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 14 18:13:49.567656 kubelet[2710]: I0514 18:13:49.567584 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 18:13:49.567656 kubelet[2710]: I0514 18:13:49.567633 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 18:13:49.567656 kubelet[2710]: I0514 18:13:49.567663 2710 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 18:13:49.567875 kubelet[2710]: I0514 18:13:49.567686 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3a673eff765630e384100cb47c695ff5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3a673eff765630e384100cb47c695ff5\") " pod="kube-system/kube-apiserver-localhost" May 14 18:13:49.567875 kubelet[2710]: I0514 18:13:49.567707 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 18:13:49.567875 kubelet[2710]: I0514 18:13:49.567726 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 18:13:49.567875 kubelet[2710]: I0514 18:13:49.567774 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 14 18:13:49.567875 kubelet[2710]: I0514 18:13:49.567794 
2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3a673eff765630e384100cb47c695ff5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3a673eff765630e384100cb47c695ff5\") " pod="kube-system/kube-apiserver-localhost" May 14 18:13:49.568035 kubelet[2710]: I0514 18:13:49.567813 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3a673eff765630e384100cb47c695ff5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3a673eff765630e384100cb47c695ff5\") " pod="kube-system/kube-apiserver-localhost" May 14 18:13:50.356794 kubelet[2710]: I0514 18:13:50.356743 2710 apiserver.go:52] "Watching apiserver" May 14 18:13:50.366808 kubelet[2710]: I0514 18:13:50.366762 2710 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 14 18:13:50.537382 kubelet[2710]: E0514 18:13:50.537344 2710 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 14 18:13:50.553882 kubelet[2710]: I0514 18:13:50.553782 2710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.5537629979999998 podStartE2EDuration="1.553762998s" podCreationTimestamp="2025-05-14 18:13:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:13:50.435171897 +0000 UTC m=+1.138265183" watchObservedRunningTime="2025-05-14 18:13:50.553762998 +0000 UTC m=+1.256856284" May 14 18:13:50.599651 kubelet[2710]: I0514 18:13:50.599547 2710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.599526289 
podStartE2EDuration="1.599526289s" podCreationTimestamp="2025-05-14 18:13:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:13:50.553923606 +0000 UTC m=+1.257016892" watchObservedRunningTime="2025-05-14 18:13:50.599526289 +0000 UTC m=+1.302619585" May 14 18:13:50.611621 kubelet[2710]: I0514 18:13:50.611450 2710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.611429102 podStartE2EDuration="1.611429102s" podCreationTimestamp="2025-05-14 18:13:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:13:50.599482215 +0000 UTC m=+1.302575502" watchObservedRunningTime="2025-05-14 18:13:50.611429102 +0000 UTC m=+1.314522388" May 14 18:13:54.010964 sudo[1795]: pam_unix(sudo:session): session closed for user root May 14 18:13:54.012652 sshd[1794]: Connection closed by 10.0.0.1 port 52938 May 14 18:13:54.012997 sshd-session[1792]: pam_unix(sshd:session): session closed for user core May 14 18:13:54.017181 systemd[1]: sshd@6-10.0.0.145:22-10.0.0.1:52938.service: Deactivated successfully. May 14 18:13:54.019426 systemd[1]: session-7.scope: Deactivated successfully. May 14 18:13:54.019645 systemd[1]: session-7.scope: Consumed 4.751s CPU time, 224.8M memory peak. May 14 18:13:54.021061 systemd-logind[1574]: Session 7 logged out. Waiting for processes to exit. May 14 18:13:54.022443 systemd-logind[1574]: Removed session 7. May 14 18:13:54.295884 kubelet[2710]: I0514 18:13:54.295834 2710 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 14 18:13:54.296393 containerd[1587]: time="2025-05-14T18:13:54.296116615Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 14 18:13:54.296660 kubelet[2710]: I0514 18:13:54.296448 2710 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 14 18:13:55.145734 systemd[1]: Created slice kubepods-besteffort-podff4e80e1_c0a0_4f35_9f3c_a90d7841bd4a.slice - libcontainer container kubepods-besteffort-podff4e80e1_c0a0_4f35_9f3c_a90d7841bd4a.slice. May 14 18:13:55.199885 kubelet[2710]: I0514 18:13:55.199842 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ff4e80e1-c0a0-4f35-9f3c-a90d7841bd4a-kube-proxy\") pod \"kube-proxy-dplpv\" (UID: \"ff4e80e1-c0a0-4f35-9f3c-a90d7841bd4a\") " pod="kube-system/kube-proxy-dplpv" May 14 18:13:55.199885 kubelet[2710]: I0514 18:13:55.199887 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ff4e80e1-c0a0-4f35-9f3c-a90d7841bd4a-xtables-lock\") pod \"kube-proxy-dplpv\" (UID: \"ff4e80e1-c0a0-4f35-9f3c-a90d7841bd4a\") " pod="kube-system/kube-proxy-dplpv" May 14 18:13:55.200058 kubelet[2710]: I0514 18:13:55.199908 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ff4e80e1-c0a0-4f35-9f3c-a90d7841bd4a-lib-modules\") pod \"kube-proxy-dplpv\" (UID: \"ff4e80e1-c0a0-4f35-9f3c-a90d7841bd4a\") " pod="kube-system/kube-proxy-dplpv" May 14 18:13:55.200058 kubelet[2710]: I0514 18:13:55.199929 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9824\" (UniqueName: \"kubernetes.io/projected/ff4e80e1-c0a0-4f35-9f3c-a90d7841bd4a-kube-api-access-b9824\") pod \"kube-proxy-dplpv\" (UID: \"ff4e80e1-c0a0-4f35-9f3c-a90d7841bd4a\") " pod="kube-system/kube-proxy-dplpv" May 14 18:13:55.458888 containerd[1587]: time="2025-05-14T18:13:55.458733884Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:kube-proxy-dplpv,Uid:ff4e80e1-c0a0-4f35-9f3c-a90d7841bd4a,Namespace:kube-system,Attempt:0,}" May 14 18:13:55.464065 systemd[1]: Created slice kubepods-besteffort-podeb7f99a7_169f_4c90_a835_f5ee81660dc9.slice - libcontainer container kubepods-besteffort-podeb7f99a7_169f_4c90_a835_f5ee81660dc9.slice. May 14 18:13:55.490915 containerd[1587]: time="2025-05-14T18:13:55.490846786Z" level=info msg="connecting to shim 04a522088a851be3f56af683cfaa3b2ad3e7d7bb2f93644ec1072e742961ab4e" address="unix:///run/containerd/s/3f470f3b783b69c9c824aa5df281472ea3872201b19072a5c8cd0e9cdab386f7" namespace=k8s.io protocol=ttrpc version=3 May 14 18:13:55.502320 kubelet[2710]: I0514 18:13:55.502262 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvxth\" (UniqueName: \"kubernetes.io/projected/eb7f99a7-169f-4c90-a835-f5ee81660dc9-kube-api-access-cvxth\") pod \"tigera-operator-6f6897fdc5-v8s47\" (UID: \"eb7f99a7-169f-4c90-a835-f5ee81660dc9\") " pod="tigera-operator/tigera-operator-6f6897fdc5-v8s47" May 14 18:13:55.502320 kubelet[2710]: I0514 18:13:55.502305 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/eb7f99a7-169f-4c90-a835-f5ee81660dc9-var-lib-calico\") pod \"tigera-operator-6f6897fdc5-v8s47\" (UID: \"eb7f99a7-169f-4c90-a835-f5ee81660dc9\") " pod="tigera-operator/tigera-operator-6f6897fdc5-v8s47" May 14 18:13:55.525232 systemd[1]: Started cri-containerd-04a522088a851be3f56af683cfaa3b2ad3e7d7bb2f93644ec1072e742961ab4e.scope - libcontainer container 04a522088a851be3f56af683cfaa3b2ad3e7d7bb2f93644ec1072e742961ab4e. 
May 14 18:13:55.603518 containerd[1587]: time="2025-05-14T18:13:55.603449985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dplpv,Uid:ff4e80e1-c0a0-4f35-9f3c-a90d7841bd4a,Namespace:kube-system,Attempt:0,} returns sandbox id \"04a522088a851be3f56af683cfaa3b2ad3e7d7bb2f93644ec1072e742961ab4e\"" May 14 18:13:55.606048 containerd[1587]: time="2025-05-14T18:13:55.606010021Z" level=info msg="CreateContainer within sandbox \"04a522088a851be3f56af683cfaa3b2ad3e7d7bb2f93644ec1072e742961ab4e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 14 18:13:55.631746 containerd[1587]: time="2025-05-14T18:13:55.631672907Z" level=info msg="Container ab62528358a5613f88c6d6d54d941db1057807f905af3dd80f4b67adae1bb93f: CDI devices from CRI Config.CDIDevices: []" May 14 18:13:55.642081 containerd[1587]: time="2025-05-14T18:13:55.642032511Z" level=info msg="CreateContainer within sandbox \"04a522088a851be3f56af683cfaa3b2ad3e7d7bb2f93644ec1072e742961ab4e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ab62528358a5613f88c6d6d54d941db1057807f905af3dd80f4b67adae1bb93f\"" May 14 18:13:55.642679 containerd[1587]: time="2025-05-14T18:13:55.642635199Z" level=info msg="StartContainer for \"ab62528358a5613f88c6d6d54d941db1057807f905af3dd80f4b67adae1bb93f\"" May 14 18:13:55.644316 containerd[1587]: time="2025-05-14T18:13:55.644291544Z" level=info msg="connecting to shim ab62528358a5613f88c6d6d54d941db1057807f905af3dd80f4b67adae1bb93f" address="unix:///run/containerd/s/3f470f3b783b69c9c824aa5df281472ea3872201b19072a5c8cd0e9cdab386f7" protocol=ttrpc version=3 May 14 18:13:55.663227 systemd[1]: Started cri-containerd-ab62528358a5613f88c6d6d54d941db1057807f905af3dd80f4b67adae1bb93f.scope - libcontainer container ab62528358a5613f88c6d6d54d941db1057807f905af3dd80f4b67adae1bb93f. 
May 14 18:13:55.709485 containerd[1587]: time="2025-05-14T18:13:55.709254012Z" level=info msg="StartContainer for \"ab62528358a5613f88c6d6d54d941db1057807f905af3dd80f4b67adae1bb93f\" returns successfully" May 14 18:13:55.767505 containerd[1587]: time="2025-05-14T18:13:55.767454609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-v8s47,Uid:eb7f99a7-169f-4c90-a835-f5ee81660dc9,Namespace:tigera-operator,Attempt:0,}" May 14 18:13:55.788800 containerd[1587]: time="2025-05-14T18:13:55.788744063Z" level=info msg="connecting to shim caed3b5ea70abfa40e7faea0e8925e3f534f3606372b770938954f915f986fa9" address="unix:///run/containerd/s/b0a747a4efa9f88b0e96fd99f1eaf48f2f8ce18ba81afbe8403c59738cb8e72d" namespace=k8s.io protocol=ttrpc version=3 May 14 18:13:55.819456 systemd[1]: Started cri-containerd-caed3b5ea70abfa40e7faea0e8925e3f534f3606372b770938954f915f986fa9.scope - libcontainer container caed3b5ea70abfa40e7faea0e8925e3f534f3606372b770938954f915f986fa9. May 14 18:13:55.870956 containerd[1587]: time="2025-05-14T18:13:55.870909480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-v8s47,Uid:eb7f99a7-169f-4c90-a835-f5ee81660dc9,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"caed3b5ea70abfa40e7faea0e8925e3f534f3606372b770938954f915f986fa9\"" May 14 18:13:55.873224 containerd[1587]: time="2025-05-14T18:13:55.872978551Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 14 18:13:56.333376 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1380208424.mount: Deactivated successfully. 
May 14 18:13:56.413587 kubelet[2710]: I0514 18:13:56.413514 2710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dplpv" podStartSLOduration=1.413493699 podStartE2EDuration="1.413493699s" podCreationTimestamp="2025-05-14 18:13:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:13:56.413313806 +0000 UTC m=+7.116407102" watchObservedRunningTime="2025-05-14 18:13:56.413493699 +0000 UTC m=+7.116586985" May 14 18:13:57.388899 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3738121464.mount: Deactivated successfully. May 14 18:13:57.764868 containerd[1587]: time="2025-05-14T18:13:57.764746710Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:13:57.765532 containerd[1587]: time="2025-05-14T18:13:57.765481216Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662" May 14 18:13:57.766685 containerd[1587]: time="2025-05-14T18:13:57.766648025Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:13:57.768338 containerd[1587]: time="2025-05-14T18:13:57.768305176Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:13:57.768875 containerd[1587]: time="2025-05-14T18:13:57.768838681Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest 
\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 1.895826905s" May 14 18:13:57.768875 containerd[1587]: time="2025-05-14T18:13:57.768866253Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\"" May 14 18:13:57.770374 containerd[1587]: time="2025-05-14T18:13:57.770352760Z" level=info msg="CreateContainer within sandbox \"caed3b5ea70abfa40e7faea0e8925e3f534f3606372b770938954f915f986fa9\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 14 18:13:57.778516 containerd[1587]: time="2025-05-14T18:13:57.778476716Z" level=info msg="Container be6cd28beaa48ebc59bc783a1cda2a2e6453bc9725644d22e4a7413204bed82e: CDI devices from CRI Config.CDIDevices: []" May 14 18:13:57.784412 containerd[1587]: time="2025-05-14T18:13:57.784377276Z" level=info msg="CreateContainer within sandbox \"caed3b5ea70abfa40e7faea0e8925e3f534f3606372b770938954f915f986fa9\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"be6cd28beaa48ebc59bc783a1cda2a2e6453bc9725644d22e4a7413204bed82e\"" May 14 18:13:57.784775 containerd[1587]: time="2025-05-14T18:13:57.784742831Z" level=info msg="StartContainer for \"be6cd28beaa48ebc59bc783a1cda2a2e6453bc9725644d22e4a7413204bed82e\"" May 14 18:13:57.785462 containerd[1587]: time="2025-05-14T18:13:57.785422674Z" level=info msg="connecting to shim be6cd28beaa48ebc59bc783a1cda2a2e6453bc9725644d22e4a7413204bed82e" address="unix:///run/containerd/s/b0a747a4efa9f88b0e96fd99f1eaf48f2f8ce18ba81afbe8403c59738cb8e72d" protocol=ttrpc version=3 May 14 18:13:57.838271 systemd[1]: Started cri-containerd-be6cd28beaa48ebc59bc783a1cda2a2e6453bc9725644d22e4a7413204bed82e.scope - libcontainer container be6cd28beaa48ebc59bc783a1cda2a2e6453bc9725644d22e4a7413204bed82e. 
May 14 18:13:57.867564 containerd[1587]: time="2025-05-14T18:13:57.867530779Z" level=info msg="StartContainer for \"be6cd28beaa48ebc59bc783a1cda2a2e6453bc9725644d22e4a7413204bed82e\" returns successfully" May 14 18:13:58.416254 kubelet[2710]: I0514 18:13:58.416184 2710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6f6897fdc5-v8s47" podStartSLOduration=1.5190986 podStartE2EDuration="3.416160526s" podCreationTimestamp="2025-05-14 18:13:55 +0000 UTC" firstStartedPulling="2025-05-14 18:13:55.87245259 +0000 UTC m=+6.575545876" lastFinishedPulling="2025-05-14 18:13:57.769514516 +0000 UTC m=+8.472607802" observedRunningTime="2025-05-14 18:13:58.415648342 +0000 UTC m=+9.118741619" watchObservedRunningTime="2025-05-14 18:13:58.416160526 +0000 UTC m=+9.119253822" May 14 18:14:00.791249 systemd[1]: Created slice kubepods-besteffort-pod299337de_a7c9_4826_a8cb_0ff4ecc852c2.slice - libcontainer container kubepods-besteffort-pod299337de_a7c9_4826_a8cb_0ff4ecc852c2.slice. May 14 18:14:00.824590 systemd[1]: Created slice kubepods-besteffort-pod24f00d4a_28ce_4a29_b47b_b0fedd40d4a0.slice - libcontainer container kubepods-besteffort-pod24f00d4a_28ce_4a29_b47b_b0fedd40d4a0.slice. 
May 14 18:14:00.836692 kubelet[2710]: I0514 18:14:00.836656 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24f00d4a-28ce-4a29-b47b-b0fedd40d4a0-tigera-ca-bundle\") pod \"calico-node-xljdr\" (UID: \"24f00d4a-28ce-4a29-b47b-b0fedd40d4a0\") " pod="calico-system/calico-node-xljdr" May 14 18:14:00.837205 kubelet[2710]: I0514 18:14:00.837179 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/24f00d4a-28ce-4a29-b47b-b0fedd40d4a0-cni-net-dir\") pod \"calico-node-xljdr\" (UID: \"24f00d4a-28ce-4a29-b47b-b0fedd40d4a0\") " pod="calico-system/calico-node-xljdr" May 14 18:14:00.837205 kubelet[2710]: I0514 18:14:00.837204 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/24f00d4a-28ce-4a29-b47b-b0fedd40d4a0-lib-modules\") pod \"calico-node-xljdr\" (UID: \"24f00d4a-28ce-4a29-b47b-b0fedd40d4a0\") " pod="calico-system/calico-node-xljdr" May 14 18:14:00.837308 kubelet[2710]: I0514 18:14:00.837224 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/24f00d4a-28ce-4a29-b47b-b0fedd40d4a0-var-run-calico\") pod \"calico-node-xljdr\" (UID: \"24f00d4a-28ce-4a29-b47b-b0fedd40d4a0\") " pod="calico-system/calico-node-xljdr" May 14 18:14:00.837308 kubelet[2710]: I0514 18:14:00.837238 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/299337de-a7c9-4826-a8cb-0ff4ecc852c2-typha-certs\") pod \"calico-typha-86f787b574-knxvz\" (UID: \"299337de-a7c9-4826-a8cb-0ff4ecc852c2\") " pod="calico-system/calico-typha-86f787b574-knxvz" May 14 18:14:00.837308 kubelet[2710]: I0514 
18:14:00.837252 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/299337de-a7c9-4826-a8cb-0ff4ecc852c2-tigera-ca-bundle\") pod \"calico-typha-86f787b574-knxvz\" (UID: \"299337de-a7c9-4826-a8cb-0ff4ecc852c2\") " pod="calico-system/calico-typha-86f787b574-knxvz" May 14 18:14:00.837308 kubelet[2710]: I0514 18:14:00.837269 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/24f00d4a-28ce-4a29-b47b-b0fedd40d4a0-xtables-lock\") pod \"calico-node-xljdr\" (UID: \"24f00d4a-28ce-4a29-b47b-b0fedd40d4a0\") " pod="calico-system/calico-node-xljdr" May 14 18:14:00.837308 kubelet[2710]: I0514 18:14:00.837283 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/24f00d4a-28ce-4a29-b47b-b0fedd40d4a0-cni-bin-dir\") pod \"calico-node-xljdr\" (UID: \"24f00d4a-28ce-4a29-b47b-b0fedd40d4a0\") " pod="calico-system/calico-node-xljdr" May 14 18:14:00.837460 kubelet[2710]: I0514 18:14:00.837296 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/24f00d4a-28ce-4a29-b47b-b0fedd40d4a0-cni-log-dir\") pod \"calico-node-xljdr\" (UID: \"24f00d4a-28ce-4a29-b47b-b0fedd40d4a0\") " pod="calico-system/calico-node-xljdr" May 14 18:14:00.837460 kubelet[2710]: I0514 18:14:00.837309 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwhrd\" (UniqueName: \"kubernetes.io/projected/299337de-a7c9-4826-a8cb-0ff4ecc852c2-kube-api-access-rwhrd\") pod \"calico-typha-86f787b574-knxvz\" (UID: \"299337de-a7c9-4826-a8cb-0ff4ecc852c2\") " pod="calico-system/calico-typha-86f787b574-knxvz" May 14 18:14:00.837460 kubelet[2710]: I0514 
18:14:00.837323 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/24f00d4a-28ce-4a29-b47b-b0fedd40d4a0-policysync\") pod \"calico-node-xljdr\" (UID: \"24f00d4a-28ce-4a29-b47b-b0fedd40d4a0\") " pod="calico-system/calico-node-xljdr" May 14 18:14:00.837460 kubelet[2710]: I0514 18:14:00.837336 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/24f00d4a-28ce-4a29-b47b-b0fedd40d4a0-node-certs\") pod \"calico-node-xljdr\" (UID: \"24f00d4a-28ce-4a29-b47b-b0fedd40d4a0\") " pod="calico-system/calico-node-xljdr" May 14 18:14:00.837460 kubelet[2710]: I0514 18:14:00.837351 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/24f00d4a-28ce-4a29-b47b-b0fedd40d4a0-var-lib-calico\") pod \"calico-node-xljdr\" (UID: \"24f00d4a-28ce-4a29-b47b-b0fedd40d4a0\") " pod="calico-system/calico-node-xljdr" May 14 18:14:00.837615 kubelet[2710]: I0514 18:14:00.837363 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/24f00d4a-28ce-4a29-b47b-b0fedd40d4a0-flexvol-driver-host\") pod \"calico-node-xljdr\" (UID: \"24f00d4a-28ce-4a29-b47b-b0fedd40d4a0\") " pod="calico-system/calico-node-xljdr" May 14 18:14:00.837615 kubelet[2710]: I0514 18:14:00.837378 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d45d5\" (UniqueName: \"kubernetes.io/projected/24f00d4a-28ce-4a29-b47b-b0fedd40d4a0-kube-api-access-d45d5\") pod \"calico-node-xljdr\" (UID: \"24f00d4a-28ce-4a29-b47b-b0fedd40d4a0\") " pod="calico-system/calico-node-xljdr" May 14 18:14:00.928715 kubelet[2710]: E0514 18:14:00.928529 2710 pod_workers.go:1301] "Error syncing 
pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l7b4g" podUID="d1e02241-9d67-43e1-bd15-f6a1549c9972" May 14 18:14:00.947567 kubelet[2710]: E0514 18:14:00.947429 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:00.947567 kubelet[2710]: W0514 18:14:00.947505 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:00.950980 kubelet[2710]: E0514 18:14:00.949911 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:14:00.954413 kubelet[2710]: E0514 18:14:00.954259 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:00.954413 kubelet[2710]: W0514 18:14:00.954282 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:00.954413 kubelet[2710]: E0514 18:14:00.954300 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:14:00.956340 kubelet[2710]: E0514 18:14:00.956313 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:00.956432 kubelet[2710]: W0514 18:14:00.956359 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:00.956432 kubelet[2710]: E0514 18:14:00.956379 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:14:00.956854 kubelet[2710]: E0514 18:14:00.956827 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:00.956854 kubelet[2710]: W0514 18:14:00.956848 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:00.956917 kubelet[2710]: E0514 18:14:00.956869 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:14:00.958378 kubelet[2710]: E0514 18:14:00.958351 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:00.958378 kubelet[2710]: W0514 18:14:00.958369 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:00.958530 kubelet[2710]: E0514 18:14:00.958380 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:14:01.024954 kubelet[2710]: E0514 18:14:01.024874 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:01.024954 kubelet[2710]: W0514 18:14:01.024916 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:01.024954 kubelet[2710]: E0514 18:14:01.024937 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:14:01.025317 kubelet[2710]: E0514 18:14:01.025071 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:01.025317 kubelet[2710]: W0514 18:14:01.025077 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:01.025317 kubelet[2710]: E0514 18:14:01.025103 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:14:01.025317 kubelet[2710]: E0514 18:14:01.025317 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:01.025432 kubelet[2710]: W0514 18:14:01.025324 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:01.025432 kubelet[2710]: E0514 18:14:01.025332 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:14:01.025505 kubelet[2710]: E0514 18:14:01.025482 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:01.025505 kubelet[2710]: W0514 18:14:01.025494 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:01.025505 kubelet[2710]: E0514 18:14:01.025502 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:14:01.025853 kubelet[2710]: E0514 18:14:01.025815 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:01.025853 kubelet[2710]: W0514 18:14:01.025841 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:01.026026 kubelet[2710]: E0514 18:14:01.025867 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:14:01.026111 kubelet[2710]: E0514 18:14:01.026096 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:01.026111 kubelet[2710]: W0514 18:14:01.026107 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:01.026194 kubelet[2710]: E0514 18:14:01.026115 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:14:01.026316 kubelet[2710]: E0514 18:14:01.026289 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:01.026316 kubelet[2710]: W0514 18:14:01.026304 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:01.026316 kubelet[2710]: E0514 18:14:01.026311 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:14:01.026720 kubelet[2710]: E0514 18:14:01.026686 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:01.026720 kubelet[2710]: W0514 18:14:01.026700 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:01.026720 kubelet[2710]: E0514 18:14:01.026712 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:14:01.026922 kubelet[2710]: E0514 18:14:01.026906 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:01.026922 kubelet[2710]: W0514 18:14:01.026919 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:01.027009 kubelet[2710]: E0514 18:14:01.026928 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:14:01.027100 kubelet[2710]: E0514 18:14:01.027070 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:01.027100 kubelet[2710]: W0514 18:14:01.027097 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:01.027170 kubelet[2710]: E0514 18:14:01.027107 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:14:01.027277 kubelet[2710]: E0514 18:14:01.027262 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:01.027277 kubelet[2710]: W0514 18:14:01.027272 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:01.027328 kubelet[2710]: E0514 18:14:01.027280 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:14:01.027471 kubelet[2710]: E0514 18:14:01.027454 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:01.027471 kubelet[2710]: W0514 18:14:01.027466 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:01.027471 kubelet[2710]: E0514 18:14:01.027474 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:14:01.027644 kubelet[2710]: E0514 18:14:01.027633 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:01.027644 kubelet[2710]: W0514 18:14:01.027642 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:01.027752 kubelet[2710]: E0514 18:14:01.027651 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:14:01.027837 kubelet[2710]: E0514 18:14:01.027825 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:01.027837 kubelet[2710]: W0514 18:14:01.027835 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:01.027903 kubelet[2710]: E0514 18:14:01.027843 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:14:01.028097 kubelet[2710]: E0514 18:14:01.028066 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:01.028174 kubelet[2710]: W0514 18:14:01.028081 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:01.028174 kubelet[2710]: E0514 18:14:01.028122 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:14:01.028368 kubelet[2710]: E0514 18:14:01.028352 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:01.028368 kubelet[2710]: W0514 18:14:01.028364 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:01.028418 kubelet[2710]: E0514 18:14:01.028383 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:14:01.028569 kubelet[2710]: E0514 18:14:01.028552 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:01.028569 kubelet[2710]: W0514 18:14:01.028564 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:01.028648 kubelet[2710]: E0514 18:14:01.028576 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:14:01.028766 kubelet[2710]: E0514 18:14:01.028751 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:01.028766 kubelet[2710]: W0514 18:14:01.028762 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:01.028813 kubelet[2710]: E0514 18:14:01.028773 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:14:01.028962 kubelet[2710]: E0514 18:14:01.028945 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:01.028962 kubelet[2710]: W0514 18:14:01.028958 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:01.029118 kubelet[2710]: E0514 18:14:01.028966 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:14:01.029179 kubelet[2710]: E0514 18:14:01.029160 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:01.029179 kubelet[2710]: W0514 18:14:01.029175 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:01.029230 kubelet[2710]: E0514 18:14:01.029184 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:14:01.038552 kubelet[2710]: E0514 18:14:01.038524 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:01.038552 kubelet[2710]: W0514 18:14:01.038549 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:01.038665 kubelet[2710]: E0514 18:14:01.038571 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:14:01.038665 kubelet[2710]: I0514 18:14:01.038615 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d1e02241-9d67-43e1-bd15-f6a1549c9972-varrun\") pod \"csi-node-driver-l7b4g\" (UID: \"d1e02241-9d67-43e1-bd15-f6a1549c9972\") " pod="calico-system/csi-node-driver-l7b4g" May 14 18:14:01.038809 kubelet[2710]: E0514 18:14:01.038792 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:01.038809 kubelet[2710]: W0514 18:14:01.038803 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:01.038866 kubelet[2710]: E0514 18:14:01.038817 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:14:01.038866 kubelet[2710]: I0514 18:14:01.038836 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d1e02241-9d67-43e1-bd15-f6a1549c9972-kubelet-dir\") pod \"csi-node-driver-l7b4g\" (UID: \"d1e02241-9d67-43e1-bd15-f6a1549c9972\") " pod="calico-system/csi-node-driver-l7b4g" May 14 18:14:01.039099 kubelet[2710]: E0514 18:14:01.039056 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:01.039099 kubelet[2710]: W0514 18:14:01.039078 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:01.039210 kubelet[2710]: E0514 18:14:01.039117 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:14:01.039210 kubelet[2710]: I0514 18:14:01.039162 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d1e02241-9d67-43e1-bd15-f6a1549c9972-socket-dir\") pod \"csi-node-driver-l7b4g\" (UID: \"d1e02241-9d67-43e1-bd15-f6a1549c9972\") " pod="calico-system/csi-node-driver-l7b4g" May 14 18:14:01.039396 kubelet[2710]: E0514 18:14:01.039379 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:01.039396 kubelet[2710]: W0514 18:14:01.039392 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:01.039472 kubelet[2710]: E0514 18:14:01.039407 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:14:01.039472 kubelet[2710]: I0514 18:14:01.039422 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d1e02241-9d67-43e1-bd15-f6a1549c9972-registration-dir\") pod \"csi-node-driver-l7b4g\" (UID: \"d1e02241-9d67-43e1-bd15-f6a1549c9972\") " pod="calico-system/csi-node-driver-l7b4g" May 14 18:14:01.039649 kubelet[2710]: E0514 18:14:01.039631 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:01.039649 kubelet[2710]: W0514 18:14:01.039645 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:01.039719 kubelet[2710]: E0514 18:14:01.039660 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:14:01.039860 kubelet[2710]: E0514 18:14:01.039844 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:01.039860 kubelet[2710]: W0514 18:14:01.039855 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:01.039921 kubelet[2710]: E0514 18:14:01.039871 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:14:01.040066 kubelet[2710]: E0514 18:14:01.040050 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:01.040066 kubelet[2710]: W0514 18:14:01.040062 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:01.040134 kubelet[2710]: E0514 18:14:01.040113 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:14:01.040337 kubelet[2710]: E0514 18:14:01.040319 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:01.040391 kubelet[2710]: W0514 18:14:01.040331 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:01.040391 kubelet[2710]: E0514 18:14:01.040370 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:14:01.040613 kubelet[2710]: E0514 18:14:01.040586 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:01.040613 kubelet[2710]: W0514 18:14:01.040603 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:01.040736 kubelet[2710]: E0514 18:14:01.040620 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:14:01.040814 kubelet[2710]: E0514 18:14:01.040768 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:01.040814 kubelet[2710]: W0514 18:14:01.040775 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:01.040814 kubelet[2710]: E0514 18:14:01.040803 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:14:01.040978 kubelet[2710]: E0514 18:14:01.040961 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:01.040978 kubelet[2710]: W0514 18:14:01.040973 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:01.041064 kubelet[2710]: E0514 18:14:01.041003 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:14:01.041064 kubelet[2710]: I0514 18:14:01.041027 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvl7r\" (UniqueName: \"kubernetes.io/projected/d1e02241-9d67-43e1-bd15-f6a1549c9972-kube-api-access-kvl7r\") pod \"csi-node-driver-l7b4g\" (UID: \"d1e02241-9d67-43e1-bd15-f6a1549c9972\") " pod="calico-system/csi-node-driver-l7b4g" May 14 18:14:01.041164 kubelet[2710]: E0514 18:14:01.041140 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:01.041164 kubelet[2710]: W0514 18:14:01.041155 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:01.041216 kubelet[2710]: E0514 18:14:01.041168 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:14:01.041386 kubelet[2710]: E0514 18:14:01.041320 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:01.041386 kubelet[2710]: W0514 18:14:01.041330 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:01.041386 kubelet[2710]: E0514 18:14:01.041342 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:14:01.041535 kubelet[2710]: E0514 18:14:01.041517 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:01.041535 kubelet[2710]: W0514 18:14:01.041528 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:01.041612 kubelet[2710]: E0514 18:14:01.041546 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:14:01.041738 kubelet[2710]: E0514 18:14:01.041717 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:01.041738 kubelet[2710]: W0514 18:14:01.041732 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:01.041832 kubelet[2710]: E0514 18:14:01.041746 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:14:01.096015 containerd[1587]: time="2025-05-14T18:14:01.095968717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-86f787b574-knxvz,Uid:299337de-a7c9-4826-a8cb-0ff4ecc852c2,Namespace:calico-system,Attempt:0,}" May 14 18:14:01.127772 containerd[1587]: time="2025-05-14T18:14:01.127722384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xljdr,Uid:24f00d4a-28ce-4a29-b47b-b0fedd40d4a0,Namespace:calico-system,Attempt:0,}" May 14 18:14:01.142433 kubelet[2710]: E0514 18:14:01.142398 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:01.142433 kubelet[2710]: W0514 18:14:01.142420 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:01.142529 kubelet[2710]: E0514 18:14:01.142441 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:14:01.142664 kubelet[2710]: E0514 18:14:01.142639 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:01.142664 kubelet[2710]: W0514 18:14:01.142653 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:01.142723 kubelet[2710]: E0514 18:14:01.142667 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:14:01.142887 kubelet[2710]: E0514 18:14:01.142867 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:01.142887 kubelet[2710]: W0514 18:14:01.142880 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:01.142949 kubelet[2710]: E0514 18:14:01.142897 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:14:01.143234 kubelet[2710]: E0514 18:14:01.143199 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:01.143234 kubelet[2710]: W0514 18:14:01.143220 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:01.143285 kubelet[2710]: E0514 18:14:01.143246 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:14:01.143499 kubelet[2710]: E0514 18:14:01.143466 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:01.143499 kubelet[2710]: W0514 18:14:01.143486 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:01.143675 kubelet[2710]: E0514 18:14:01.143513 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:14:01.143710 kubelet[2710]: E0514 18:14:01.143698 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:01.143710 kubelet[2710]: W0514 18:14:01.143707 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:01.143765 kubelet[2710]: E0514 18:14:01.143722 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:14:01.143911 kubelet[2710]: E0514 18:14:01.143894 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:01.143911 kubelet[2710]: W0514 18:14:01.143904 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:01.143984 kubelet[2710]: E0514 18:14:01.143919 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:14:01.144134 kubelet[2710]: E0514 18:14:01.144115 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:01.144134 kubelet[2710]: W0514 18:14:01.144127 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:01.144239 kubelet[2710]: E0514 18:14:01.144170 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:14:01.144298 kubelet[2710]: E0514 18:14:01.144281 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:01.144298 kubelet[2710]: W0514 18:14:01.144291 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:01.144350 kubelet[2710]: E0514 18:14:01.144319 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:14:01.144479 kubelet[2710]: E0514 18:14:01.144457 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:01.144479 kubelet[2710]: W0514 18:14:01.144468 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:01.144522 kubelet[2710]: E0514 18:14:01.144498 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:14:01.144650 kubelet[2710]: E0514 18:14:01.144635 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:01.144650 kubelet[2710]: W0514 18:14:01.144646 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:01.144695 kubelet[2710]: E0514 18:14:01.144660 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:14:01.144862 kubelet[2710]: E0514 18:14:01.144847 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:01.144862 kubelet[2710]: W0514 18:14:01.144857 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:01.144908 kubelet[2710]: E0514 18:14:01.144870 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:14:01.145245 kubelet[2710]: E0514 18:14:01.145217 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:01.145245 kubelet[2710]: W0514 18:14:01.145235 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:01.145245 kubelet[2710]: E0514 18:14:01.145254 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:14:01.145458 kubelet[2710]: E0514 18:14:01.145437 2710 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:14:01.145458 kubelet[2710]: W0514 18:14:01.145451 2710 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:14:01.145554 kubelet[2710]: E0514 18:14:01.145466 2710 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:14:01.332556 containerd[1587]: time="2025-05-14T18:14:01.331302374Z" level=info msg="connecting to shim 25ace2c583f4a6a874f6705a8fc911f3270b30893a19fdf6490759c7058ae053" address="unix:///run/containerd/s/c91550ece693f4b6a1f12ed667ee2715a7ab4b17e63a41045e45784b3cd21629" namespace=k8s.io protocol=ttrpc version=3 May 14 18:14:01.336467 containerd[1587]: time="2025-05-14T18:14:01.336291772Z" level=info msg="connecting to shim 69f3f6a44b1cbe653bdf680430a8bfd3bcfe26624b89b0346e37f8717f907497" address="unix:///run/containerd/s/71c4fe007ce463cb1aaccc429ea14d06ba545991941df2f34f9fddbf6de84eb0" namespace=k8s.io protocol=ttrpc version=3 May 14 18:14:01.363318 systemd[1]: Started cri-containerd-25ace2c583f4a6a874f6705a8fc911f3270b30893a19fdf6490759c7058ae053.scope - libcontainer container 25ace2c583f4a6a874f6705a8fc911f3270b30893a19fdf6490759c7058ae053. May 14 18:14:01.367917 systemd[1]: Started cri-containerd-69f3f6a44b1cbe653bdf680430a8bfd3bcfe26624b89b0346e37f8717f907497.scope - libcontainer container 69f3f6a44b1cbe653bdf680430a8bfd3bcfe26624b89b0346e37f8717f907497. 
May 14 18:14:01.435529 containerd[1587]: time="2025-05-14T18:14:01.435487935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xljdr,Uid:24f00d4a-28ce-4a29-b47b-b0fedd40d4a0,Namespace:calico-system,Attempt:0,} returns sandbox id \"25ace2c583f4a6a874f6705a8fc911f3270b30893a19fdf6490759c7058ae053\"" May 14 18:14:01.436836 containerd[1587]: time="2025-05-14T18:14:01.436780405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-86f787b574-knxvz,Uid:299337de-a7c9-4826-a8cb-0ff4ecc852c2,Namespace:calico-system,Attempt:0,} returns sandbox id \"69f3f6a44b1cbe653bdf680430a8bfd3bcfe26624b89b0346e37f8717f907497\"" May 14 18:14:01.437306 containerd[1587]: time="2025-05-14T18:14:01.437229557Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 14 18:14:02.920922 containerd[1587]: time="2025-05-14T18:14:02.920846542Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:14:02.921567 containerd[1587]: time="2025-05-14T18:14:02.921514629Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937" May 14 18:14:02.922708 containerd[1587]: time="2025-05-14T18:14:02.922661851Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:14:02.924475 containerd[1587]: time="2025-05-14T18:14:02.924419762Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:14:02.924829 containerd[1587]: time="2025-05-14T18:14:02.924787669Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id 
\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 1.487520992s" May 14 18:14:02.924829 containerd[1587]: time="2025-05-14T18:14:02.924817635Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" May 14 18:14:02.925926 containerd[1587]: time="2025-05-14T18:14:02.925771984Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 14 18:14:02.927287 containerd[1587]: time="2025-05-14T18:14:02.927241318Z" level=info msg="CreateContainer within sandbox \"25ace2c583f4a6a874f6705a8fc911f3270b30893a19fdf6490759c7058ae053\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 14 18:14:02.936736 containerd[1587]: time="2025-05-14T18:14:02.936682485Z" level=info msg="Container c7728896798ca1ec06081bd85cbdc07d3adee00c5dc51cc92ddb3c92fe0ed8ad: CDI devices from CRI Config.CDIDevices: []" May 14 18:14:02.945934 containerd[1587]: time="2025-05-14T18:14:02.945874249Z" level=info msg="CreateContainer within sandbox \"25ace2c583f4a6a874f6705a8fc911f3270b30893a19fdf6490759c7058ae053\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c7728896798ca1ec06081bd85cbdc07d3adee00c5dc51cc92ddb3c92fe0ed8ad\"" May 14 18:14:02.946526 containerd[1587]: time="2025-05-14T18:14:02.946480277Z" level=info msg="StartContainer for \"c7728896798ca1ec06081bd85cbdc07d3adee00c5dc51cc92ddb3c92fe0ed8ad\"" May 14 18:14:02.951859 containerd[1587]: time="2025-05-14T18:14:02.951808642Z" level=info msg="connecting to shim c7728896798ca1ec06081bd85cbdc07d3adee00c5dc51cc92ddb3c92fe0ed8ad" address="unix:///run/containerd/s/c91550ece693f4b6a1f12ed667ee2715a7ab4b17e63a41045e45784b3cd21629" 
protocol=ttrpc version=3 May 14 18:14:02.976237 systemd[1]: Started cri-containerd-c7728896798ca1ec06081bd85cbdc07d3adee00c5dc51cc92ddb3c92fe0ed8ad.scope - libcontainer container c7728896798ca1ec06081bd85cbdc07d3adee00c5dc51cc92ddb3c92fe0ed8ad. May 14 18:14:03.020473 containerd[1587]: time="2025-05-14T18:14:03.020386394Z" level=info msg="StartContainer for \"c7728896798ca1ec06081bd85cbdc07d3adee00c5dc51cc92ddb3c92fe0ed8ad\" returns successfully" May 14 18:14:03.036467 systemd[1]: cri-containerd-c7728896798ca1ec06081bd85cbdc07d3adee00c5dc51cc92ddb3c92fe0ed8ad.scope: Deactivated successfully. May 14 18:14:03.036792 systemd[1]: cri-containerd-c7728896798ca1ec06081bd85cbdc07d3adee00c5dc51cc92ddb3c92fe0ed8ad.scope: Consumed 40ms CPU time, 8.3M memory peak, 4.5M written to disk. May 14 18:14:03.038246 containerd[1587]: time="2025-05-14T18:14:03.038210153Z" level=info msg="received exit event container_id:\"c7728896798ca1ec06081bd85cbdc07d3adee00c5dc51cc92ddb3c92fe0ed8ad\" id:\"c7728896798ca1ec06081bd85cbdc07d3adee00c5dc51cc92ddb3c92fe0ed8ad\" pid:3288 exited_at:{seconds:1747246443 nanos:37801549}" May 14 18:14:03.038347 containerd[1587]: time="2025-05-14T18:14:03.038307176Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c7728896798ca1ec06081bd85cbdc07d3adee00c5dc51cc92ddb3c92fe0ed8ad\" id:\"c7728896798ca1ec06081bd85cbdc07d3adee00c5dc51cc92ddb3c92fe0ed8ad\" pid:3288 exited_at:{seconds:1747246443 nanos:37801549}" May 14 18:14:03.061645 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c7728896798ca1ec06081bd85cbdc07d3adee00c5dc51cc92ddb3c92fe0ed8ad-rootfs.mount: Deactivated successfully. May 14 18:14:03.067196 update_engine[1576]: I20250514 18:14:03.067127 1576 update_attempter.cc:509] Updating boot flags... 
May 14 18:14:03.378204 kubelet[2710]: E0514 18:14:03.378165 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l7b4g" podUID="d1e02241-9d67-43e1-bd15-f6a1549c9972" May 14 18:14:05.378427 kubelet[2710]: E0514 18:14:05.378365 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l7b4g" podUID="d1e02241-9d67-43e1-bd15-f6a1549c9972" May 14 18:14:06.472962 containerd[1587]: time="2025-05-14T18:14:06.472907887Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:14:06.473754 containerd[1587]: time="2025-05-14T18:14:06.473724901Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870" May 14 18:14:06.474911 containerd[1587]: time="2025-05-14T18:14:06.474874064Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:14:06.476726 containerd[1587]: time="2025-05-14T18:14:06.476701799Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:14:06.477259 containerd[1587]: time="2025-05-14T18:14:06.477222643Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag 
\"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 3.551426865s" May 14 18:14:06.477294 containerd[1587]: time="2025-05-14T18:14:06.477256508Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\"" May 14 18:14:06.478357 containerd[1587]: time="2025-05-14T18:14:06.477949247Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 14 18:14:06.484791 containerd[1587]: time="2025-05-14T18:14:06.484747539Z" level=info msg="CreateContainer within sandbox \"69f3f6a44b1cbe653bdf680430a8bfd3bcfe26624b89b0346e37f8717f907497\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 14 18:14:06.494329 containerd[1587]: time="2025-05-14T18:14:06.493459238Z" level=info msg="Container 097eca4916571e57119f388633c113785cfbb353665330987f612799a4cde0a9: CDI devices from CRI Config.CDIDevices: []" May 14 18:14:06.503080 containerd[1587]: time="2025-05-14T18:14:06.503041302Z" level=info msg="CreateContainer within sandbox \"69f3f6a44b1cbe653bdf680430a8bfd3bcfe26624b89b0346e37f8717f907497\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"097eca4916571e57119f388633c113785cfbb353665330987f612799a4cde0a9\"" May 14 18:14:06.503558 containerd[1587]: time="2025-05-14T18:14:06.503539644Z" level=info msg="StartContainer for \"097eca4916571e57119f388633c113785cfbb353665330987f612799a4cde0a9\"" May 14 18:14:06.504506 containerd[1587]: time="2025-05-14T18:14:06.504469502Z" level=info msg="connecting to shim 097eca4916571e57119f388633c113785cfbb353665330987f612799a4cde0a9" address="unix:///run/containerd/s/71c4fe007ce463cb1aaccc429ea14d06ba545991941df2f34f9fddbf6de84eb0" protocol=ttrpc version=3 May 14 18:14:06.526242 systemd[1]: Started 
cri-containerd-097eca4916571e57119f388633c113785cfbb353665330987f612799a4cde0a9.scope - libcontainer container 097eca4916571e57119f388633c113785cfbb353665330987f612799a4cde0a9. May 14 18:14:06.580564 containerd[1587]: time="2025-05-14T18:14:06.580516443Z" level=info msg="StartContainer for \"097eca4916571e57119f388633c113785cfbb353665330987f612799a4cde0a9\" returns successfully" May 14 18:14:07.378462 kubelet[2710]: E0514 18:14:07.378411 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l7b4g" podUID="d1e02241-9d67-43e1-bd15-f6a1549c9972" May 14 18:14:07.468832 kubelet[2710]: I0514 18:14:07.468557 2710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-86f787b574-knxvz" podStartSLOduration=2.428869729 podStartE2EDuration="7.468542606s" podCreationTimestamp="2025-05-14 18:14:00 +0000 UTC" firstStartedPulling="2025-05-14 18:14:01.438130174 +0000 UTC m=+12.141223460" lastFinishedPulling="2025-05-14 18:14:06.477803051 +0000 UTC m=+17.180896337" observedRunningTime="2025-05-14 18:14:07.460235429 +0000 UTC m=+18.163328735" watchObservedRunningTime="2025-05-14 18:14:07.468542606 +0000 UTC m=+18.171635892" May 14 18:14:09.378383 kubelet[2710]: E0514 18:14:09.378343 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l7b4g" podUID="d1e02241-9d67-43e1-bd15-f6a1549c9972" May 14 18:14:11.378341 kubelet[2710]: E0514 18:14:11.378289 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l7b4g" podUID="d1e02241-9d67-43e1-bd15-f6a1549c9972" May 14 18:14:11.566467 containerd[1587]: time="2025-05-14T18:14:11.566415052Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:14:11.567210 containerd[1587]: time="2025-05-14T18:14:11.567180396Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" May 14 18:14:11.568228 containerd[1587]: time="2025-05-14T18:14:11.568193046Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:14:11.570114 containerd[1587]: time="2025-05-14T18:14:11.570057503Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:14:11.570593 containerd[1587]: time="2025-05-14T18:14:11.570562396Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 5.092584605s" May 14 18:14:11.570593 containerd[1587]: time="2025-05-14T18:14:11.570588054Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" May 14 18:14:11.575752 containerd[1587]: time="2025-05-14T18:14:11.575705809Z" level=info msg="CreateContainer within sandbox \"25ace2c583f4a6a874f6705a8fc911f3270b30893a19fdf6490759c7058ae053\" for container 
&ContainerMetadata{Name:install-cni,Attempt:0,}" May 14 18:14:11.585261 containerd[1587]: time="2025-05-14T18:14:11.585224517Z" level=info msg="Container 6bbbbc660669f5e458cfc9b9c10442a9c87c01947545acf95ae107df1f9b74eb: CDI devices from CRI Config.CDIDevices: []" May 14 18:14:11.594673 containerd[1587]: time="2025-05-14T18:14:11.594626814Z" level=info msg="CreateContainer within sandbox \"25ace2c583f4a6a874f6705a8fc911f3270b30893a19fdf6490759c7058ae053\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6bbbbc660669f5e458cfc9b9c10442a9c87c01947545acf95ae107df1f9b74eb\"" May 14 18:14:11.595236 containerd[1587]: time="2025-05-14T18:14:11.595201499Z" level=info msg="StartContainer for \"6bbbbc660669f5e458cfc9b9c10442a9c87c01947545acf95ae107df1f9b74eb\"" May 14 18:14:11.596783 containerd[1587]: time="2025-05-14T18:14:11.596755942Z" level=info msg="connecting to shim 6bbbbc660669f5e458cfc9b9c10442a9c87c01947545acf95ae107df1f9b74eb" address="unix:///run/containerd/s/c91550ece693f4b6a1f12ed667ee2715a7ab4b17e63a41045e45784b3cd21629" protocol=ttrpc version=3 May 14 18:14:11.619223 systemd[1]: Started cri-containerd-6bbbbc660669f5e458cfc9b9c10442a9c87c01947545acf95ae107df1f9b74eb.scope - libcontainer container 6bbbbc660669f5e458cfc9b9c10442a9c87c01947545acf95ae107df1f9b74eb. 
May 14 18:14:11.662390 containerd[1587]: time="2025-05-14T18:14:11.662271441Z" level=info msg="StartContainer for \"6bbbbc660669f5e458cfc9b9c10442a9c87c01947545acf95ae107df1f9b74eb\" returns successfully" May 14 18:14:12.503509 containerd[1587]: time="2025-05-14T18:14:12.503459511Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 18:14:12.506461 systemd[1]: cri-containerd-6bbbbc660669f5e458cfc9b9c10442a9c87c01947545acf95ae107df1f9b74eb.scope: Deactivated successfully. May 14 18:14:12.506858 systemd[1]: cri-containerd-6bbbbc660669f5e458cfc9b9c10442a9c87c01947545acf95ae107df1f9b74eb.scope: Consumed 525ms CPU time, 165.7M memory peak, 8K read from disk, 154M written to disk. May 14 18:14:12.508758 containerd[1587]: time="2025-05-14T18:14:12.508706416Z" level=info msg="received exit event container_id:\"6bbbbc660669f5e458cfc9b9c10442a9c87c01947545acf95ae107df1f9b74eb\" id:\"6bbbbc660669f5e458cfc9b9c10442a9c87c01947545acf95ae107df1f9b74eb\" pid:3411 exited_at:{seconds:1747246452 nanos:508463168}" May 14 18:14:12.508930 containerd[1587]: time="2025-05-14T18:14:12.508855308Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6bbbbc660669f5e458cfc9b9c10442a9c87c01947545acf95ae107df1f9b74eb\" id:\"6bbbbc660669f5e458cfc9b9c10442a9c87c01947545acf95ae107df1f9b74eb\" pid:3411 exited_at:{seconds:1747246452 nanos:508463168}" May 14 18:14:12.532520 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6bbbbc660669f5e458cfc9b9c10442a9c87c01947545acf95ae107df1f9b74eb-rootfs.mount: Deactivated successfully. 
May 14 18:14:12.636795 kubelet[2710]: I0514 18:14:12.636725 2710 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 14 18:14:12.791903 systemd[1]: Created slice kubepods-burstable-pod3271afb2_425a_4096_8efb_26949e13e024.slice - libcontainer container kubepods-burstable-pod3271afb2_425a_4096_8efb_26949e13e024.slice. May 14 18:14:12.801134 systemd[1]: Created slice kubepods-burstable-pod95c6804d_85db_4594_9866_49d32499413f.slice - libcontainer container kubepods-burstable-pod95c6804d_85db_4594_9866_49d32499413f.slice. May 14 18:14:12.805998 systemd[1]: Created slice kubepods-besteffort-pod116be02b_0fa5_4611_968b_497737b4096b.slice - libcontainer container kubepods-besteffort-pod116be02b_0fa5_4611_968b_497737b4096b.slice. May 14 18:14:12.812744 systemd[1]: Created slice kubepods-besteffort-pod33679cdf_7371_433c_839a_afcd75293178.slice - libcontainer container kubepods-besteffort-pod33679cdf_7371_433c_839a_afcd75293178.slice. May 14 18:14:12.820228 systemd[1]: Created slice kubepods-besteffort-pod89850186_46b0_42e5_9218_fe296a67b6e5.slice - libcontainer container kubepods-besteffort-pod89850186_46b0_42e5_9218_fe296a67b6e5.slice. 
May 14 18:14:12.831774 kubelet[2710]: I0514 18:14:12.831730 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33679cdf-7371-433c-839a-afcd75293178-tigera-ca-bundle\") pod \"calico-kube-controllers-6689cfc69-jvqbr\" (UID: \"33679cdf-7371-433c-839a-afcd75293178\") " pod="calico-system/calico-kube-controllers-6689cfc69-jvqbr" May 14 18:14:12.831774 kubelet[2710]: I0514 18:14:12.831765 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3271afb2-425a-4096-8efb-26949e13e024-config-volume\") pod \"coredns-6f6b679f8f-tfcbv\" (UID: \"3271afb2-425a-4096-8efb-26949e13e024\") " pod="kube-system/coredns-6f6b679f8f-tfcbv" May 14 18:14:12.831774 kubelet[2710]: I0514 18:14:12.831783 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xflq4\" (UniqueName: \"kubernetes.io/projected/3271afb2-425a-4096-8efb-26949e13e024-kube-api-access-xflq4\") pod \"coredns-6f6b679f8f-tfcbv\" (UID: \"3271afb2-425a-4096-8efb-26949e13e024\") " pod="kube-system/coredns-6f6b679f8f-tfcbv" May 14 18:14:12.831950 kubelet[2710]: I0514 18:14:12.831800 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhn8d\" (UniqueName: \"kubernetes.io/projected/116be02b-0fa5-4611-968b-497737b4096b-kube-api-access-hhn8d\") pod \"calico-apiserver-59f6479cb6-qfqb6\" (UID: \"116be02b-0fa5-4611-968b-497737b4096b\") " pod="calico-apiserver/calico-apiserver-59f6479cb6-qfqb6" May 14 18:14:12.831950 kubelet[2710]: I0514 18:14:12.831825 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95c6804d-85db-4594-9866-49d32499413f-config-volume\") pod \"coredns-6f6b679f8f-t68rd\" (UID: 
\"95c6804d-85db-4594-9866-49d32499413f\") " pod="kube-system/coredns-6f6b679f8f-t68rd" May 14 18:14:12.831950 kubelet[2710]: I0514 18:14:12.831887 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/89850186-46b0-42e5-9218-fe296a67b6e5-calico-apiserver-certs\") pod \"calico-apiserver-59f6479cb6-z6rzb\" (UID: \"89850186-46b0-42e5-9218-fe296a67b6e5\") " pod="calico-apiserver/calico-apiserver-59f6479cb6-z6rzb" May 14 18:14:12.831950 kubelet[2710]: I0514 18:14:12.831943 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/116be02b-0fa5-4611-968b-497737b4096b-calico-apiserver-certs\") pod \"calico-apiserver-59f6479cb6-qfqb6\" (UID: \"116be02b-0fa5-4611-968b-497737b4096b\") " pod="calico-apiserver/calico-apiserver-59f6479cb6-qfqb6" May 14 18:14:12.832059 kubelet[2710]: I0514 18:14:12.831963 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmqcr\" (UniqueName: \"kubernetes.io/projected/33679cdf-7371-433c-839a-afcd75293178-kube-api-access-bmqcr\") pod \"calico-kube-controllers-6689cfc69-jvqbr\" (UID: \"33679cdf-7371-433c-839a-afcd75293178\") " pod="calico-system/calico-kube-controllers-6689cfc69-jvqbr" May 14 18:14:12.832059 kubelet[2710]: I0514 18:14:12.831991 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zmng\" (UniqueName: \"kubernetes.io/projected/95c6804d-85db-4594-9866-49d32499413f-kube-api-access-4zmng\") pod \"coredns-6f6b679f8f-t68rd\" (UID: \"95c6804d-85db-4594-9866-49d32499413f\") " pod="kube-system/coredns-6f6b679f8f-t68rd" May 14 18:14:12.832059 kubelet[2710]: I0514 18:14:12.832007 2710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-b29rx\" (UniqueName: \"kubernetes.io/projected/89850186-46b0-42e5-9218-fe296a67b6e5-kube-api-access-b29rx\") pod \"calico-apiserver-59f6479cb6-z6rzb\" (UID: \"89850186-46b0-42e5-9218-fe296a67b6e5\") " pod="calico-apiserver/calico-apiserver-59f6479cb6-z6rzb" May 14 18:14:13.098364 containerd[1587]: time="2025-05-14T18:14:13.098227908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-tfcbv,Uid:3271afb2-425a-4096-8efb-26949e13e024,Namespace:kube-system,Attempt:0,}" May 14 18:14:13.104840 containerd[1587]: time="2025-05-14T18:14:13.104784490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-t68rd,Uid:95c6804d-85db-4594-9866-49d32499413f,Namespace:kube-system,Attempt:0,}" May 14 18:14:13.110270 containerd[1587]: time="2025-05-14T18:14:13.110241437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59f6479cb6-qfqb6,Uid:116be02b-0fa5-4611-968b-497737b4096b,Namespace:calico-apiserver,Attempt:0,}" May 14 18:14:13.116920 containerd[1587]: time="2025-05-14T18:14:13.116884162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6689cfc69-jvqbr,Uid:33679cdf-7371-433c-839a-afcd75293178,Namespace:calico-system,Attempt:0,}" May 14 18:14:13.124394 containerd[1587]: time="2025-05-14T18:14:13.124357441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59f6479cb6-z6rzb,Uid:89850186-46b0-42e5-9218-fe296a67b6e5,Namespace:calico-apiserver,Attempt:0,}" May 14 18:14:13.218786 containerd[1587]: time="2025-05-14T18:14:13.218718854Z" level=error msg="Failed to destroy network for sandbox \"e30d0fc838263fad23609b3f50a4dd25caf3eff1cbbc9fef8002234bc48d1fc7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:14:13.221201 containerd[1587]: time="2025-05-14T18:14:13.221142304Z" level=error 
msg="Failed to destroy network for sandbox \"ead4a1c8bac97cc1e9e9c720be8f398dacd69fc403b8914a1670d6b9b68a3568\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:14:13.221352 containerd[1587]: time="2025-05-14T18:14:13.221324126Z" level=error msg="Failed to destroy network for sandbox \"0b324fef6d9b25348bad7cfa2bddb6d4d04058cc2df751a93c50876ad3823985\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:14:13.222259 containerd[1587]: time="2025-05-14T18:14:13.222220596Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-tfcbv,Uid:3271afb2-425a-4096-8efb-26949e13e024,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e30d0fc838263fad23609b3f50a4dd25caf3eff1cbbc9fef8002234bc48d1fc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:14:13.223001 kubelet[2710]: E0514 18:14:13.222941 2710 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e30d0fc838263fad23609b3f50a4dd25caf3eff1cbbc9fef8002234bc48d1fc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:14:13.223056 kubelet[2710]: E0514 18:14:13.223036 2710 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e30d0fc838263fad23609b3f50a4dd25caf3eff1cbbc9fef8002234bc48d1fc7\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-tfcbv" May 14 18:14:13.223108 kubelet[2710]: E0514 18:14:13.223057 2710 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e30d0fc838263fad23609b3f50a4dd25caf3eff1cbbc9fef8002234bc48d1fc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-tfcbv" May 14 18:14:13.223253 kubelet[2710]: E0514 18:14:13.223226 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-tfcbv_kube-system(3271afb2-425a-4096-8efb-26949e13e024)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-tfcbv_kube-system(3271afb2-425a-4096-8efb-26949e13e024)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e30d0fc838263fad23609b3f50a4dd25caf3eff1cbbc9fef8002234bc48d1fc7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-tfcbv" podUID="3271afb2-425a-4096-8efb-26949e13e024" May 14 18:14:13.225574 containerd[1587]: time="2025-05-14T18:14:13.225540946Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-t68rd,Uid:95c6804d-85db-4594-9866-49d32499413f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b324fef6d9b25348bad7cfa2bddb6d4d04058cc2df751a93c50876ad3823985\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:14:13.225979 kubelet[2710]: E0514 18:14:13.225958 2710 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b324fef6d9b25348bad7cfa2bddb6d4d04058cc2df751a93c50876ad3823985\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:14:13.226593 kubelet[2710]: E0514 18:14:13.226519 2710 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b324fef6d9b25348bad7cfa2bddb6d4d04058cc2df751a93c50876ad3823985\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-t68rd" May 14 18:14:13.227346 kubelet[2710]: E0514 18:14:13.226562 2710 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b324fef6d9b25348bad7cfa2bddb6d4d04058cc2df751a93c50876ad3823985\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-t68rd" May 14 18:14:13.227346 kubelet[2710]: E0514 18:14:13.226814 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-t68rd_kube-system(95c6804d-85db-4594-9866-49d32499413f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-t68rd_kube-system(95c6804d-85db-4594-9866-49d32499413f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"0b324fef6d9b25348bad7cfa2bddb6d4d04058cc2df751a93c50876ad3823985\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-t68rd" podUID="95c6804d-85db-4594-9866-49d32499413f" May 14 18:14:13.228703 containerd[1587]: time="2025-05-14T18:14:13.228637365Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59f6479cb6-qfqb6,Uid:116be02b-0fa5-4611-968b-497737b4096b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ead4a1c8bac97cc1e9e9c720be8f398dacd69fc403b8914a1670d6b9b68a3568\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:14:13.228957 kubelet[2710]: E0514 18:14:13.228921 2710 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ead4a1c8bac97cc1e9e9c720be8f398dacd69fc403b8914a1670d6b9b68a3568\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:14:13.229065 kubelet[2710]: E0514 18:14:13.229050 2710 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ead4a1c8bac97cc1e9e9c720be8f398dacd69fc403b8914a1670d6b9b68a3568\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59f6479cb6-qfqb6" May 14 18:14:13.229243 kubelet[2710]: E0514 18:14:13.229121 2710 kuberuntime_manager.go:1168] "CreatePodSandbox for 
pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ead4a1c8bac97cc1e9e9c720be8f398dacd69fc403b8914a1670d6b9b68a3568\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59f6479cb6-qfqb6" May 14 18:14:13.229947 kubelet[2710]: E0514 18:14:13.229910 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-59f6479cb6-qfqb6_calico-apiserver(116be02b-0fa5-4611-968b-497737b4096b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-59f6479cb6-qfqb6_calico-apiserver(116be02b-0fa5-4611-968b-497737b4096b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ead4a1c8bac97cc1e9e9c720be8f398dacd69fc403b8914a1670d6b9b68a3568\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59f6479cb6-qfqb6" podUID="116be02b-0fa5-4611-968b-497737b4096b" May 14 18:14:13.234450 containerd[1587]: time="2025-05-14T18:14:13.234401592Z" level=error msg="Failed to destroy network for sandbox \"80669e944c5b7096ace517c9ce2d74b594adffa6274b4c93065ddb34eb89bfdc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:14:13.236279 containerd[1587]: time="2025-05-14T18:14:13.236147223Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59f6479cb6-z6rzb,Uid:89850186-46b0-42e5-9218-fe296a67b6e5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"80669e944c5b7096ace517c9ce2d74b594adffa6274b4c93065ddb34eb89bfdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:14:13.236502 kubelet[2710]: E0514 18:14:13.236454 2710 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80669e944c5b7096ace517c9ce2d74b594adffa6274b4c93065ddb34eb89bfdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:14:13.236550 kubelet[2710]: E0514 18:14:13.236525 2710 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80669e944c5b7096ace517c9ce2d74b594adffa6274b4c93065ddb34eb89bfdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59f6479cb6-z6rzb" May 14 18:14:13.236589 kubelet[2710]: E0514 18:14:13.236546 2710 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80669e944c5b7096ace517c9ce2d74b594adffa6274b4c93065ddb34eb89bfdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59f6479cb6-z6rzb" May 14 18:14:13.236620 kubelet[2710]: E0514 18:14:13.236591 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-59f6479cb6-z6rzb_calico-apiserver(89850186-46b0-42e5-9218-fe296a67b6e5)\" with CreatePodSandboxError: \"Failed to create 
sandbox for pod \\\"calico-apiserver-59f6479cb6-z6rzb_calico-apiserver(89850186-46b0-42e5-9218-fe296a67b6e5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"80669e944c5b7096ace517c9ce2d74b594adffa6274b4c93065ddb34eb89bfdc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59f6479cb6-z6rzb" podUID="89850186-46b0-42e5-9218-fe296a67b6e5" May 14 18:14:13.238714 containerd[1587]: time="2025-05-14T18:14:13.238614114Z" level=error msg="Failed to destroy network for sandbox \"df0491afdd56b00545a7ba9911853fd60a3805c7334aca66602a7e80043748c4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:14:13.240624 containerd[1587]: time="2025-05-14T18:14:13.240586853Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6689cfc69-jvqbr,Uid:33679cdf-7371-433c-839a-afcd75293178,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"df0491afdd56b00545a7ba9911853fd60a3805c7334aca66602a7e80043748c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:14:13.240778 kubelet[2710]: E0514 18:14:13.240744 2710 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df0491afdd56b00545a7ba9911853fd60a3805c7334aca66602a7e80043748c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:14:13.240828 
kubelet[2710]: E0514 18:14:13.240784 2710 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df0491afdd56b00545a7ba9911853fd60a3805c7334aca66602a7e80043748c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6689cfc69-jvqbr" May 14 18:14:13.240828 kubelet[2710]: E0514 18:14:13.240800 2710 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df0491afdd56b00545a7ba9911853fd60a3805c7334aca66602a7e80043748c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6689cfc69-jvqbr" May 14 18:14:13.240897 kubelet[2710]: E0514 18:14:13.240834 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6689cfc69-jvqbr_calico-system(33679cdf-7371-433c-839a-afcd75293178)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6689cfc69-jvqbr_calico-system(33679cdf-7371-433c-839a-afcd75293178)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"df0491afdd56b00545a7ba9911853fd60a3805c7334aca66602a7e80043748c4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6689cfc69-jvqbr" podUID="33679cdf-7371-433c-839a-afcd75293178" May 14 18:14:13.383879 systemd[1]: Created slice kubepods-besteffort-podd1e02241_9d67_43e1_bd15_f6a1549c9972.slice - libcontainer container 
kubepods-besteffort-podd1e02241_9d67_43e1_bd15_f6a1549c9972.slice. May 14 18:14:13.389205 containerd[1587]: time="2025-05-14T18:14:13.389149484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l7b4g,Uid:d1e02241-9d67-43e1-bd15-f6a1549c9972,Namespace:calico-system,Attempt:0,}" May 14 18:14:13.438213 containerd[1587]: time="2025-05-14T18:14:13.438160306Z" level=error msg="Failed to destroy network for sandbox \"b29f0b30e5a3bba749eeb078fdf598d9781dad127cc6e8fccd6d9d2b690498e2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:14:13.439701 containerd[1587]: time="2025-05-14T18:14:13.439667748Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l7b4g,Uid:d1e02241-9d67-43e1-bd15-f6a1549c9972,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b29f0b30e5a3bba749eeb078fdf598d9781dad127cc6e8fccd6d9d2b690498e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:14:13.439887 kubelet[2710]: E0514 18:14:13.439858 2710 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b29f0b30e5a3bba749eeb078fdf598d9781dad127cc6e8fccd6d9d2b690498e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:14:13.439959 kubelet[2710]: E0514 18:14:13.439903 2710 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b29f0b30e5a3bba749eeb078fdf598d9781dad127cc6e8fccd6d9d2b690498e2\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l7b4g" May 14 18:14:13.439959 kubelet[2710]: E0514 18:14:13.439920 2710 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b29f0b30e5a3bba749eeb078fdf598d9781dad127cc6e8fccd6d9d2b690498e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l7b4g" May 14 18:14:13.440018 kubelet[2710]: E0514 18:14:13.439959 2710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-l7b4g_calico-system(d1e02241-9d67-43e1-bd15-f6a1549c9972)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-l7b4g_calico-system(d1e02241-9d67-43e1-bd15-f6a1549c9972)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b29f0b30e5a3bba749eeb078fdf598d9781dad127cc6e8fccd6d9d2b690498e2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l7b4g" podUID="d1e02241-9d67-43e1-bd15-f6a1549c9972" May 14 18:14:13.468791 containerd[1587]: time="2025-05-14T18:14:13.468757984Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 14 18:14:13.938185 systemd[1]: run-netns-cni\x2df097ebbb\x2df826\x2d3cf4\x2d6954\x2d1f2880a2b138.mount: Deactivated successfully. May 14 18:14:17.458555 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3117172148.mount: Deactivated successfully. 
May 14 18:14:19.067506 containerd[1587]: time="2025-05-14T18:14:19.067433393Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:14:19.068375 containerd[1587]: time="2025-05-14T18:14:19.068345490Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" May 14 18:14:19.069760 containerd[1587]: time="2025-05-14T18:14:19.069720789Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:14:19.071849 containerd[1587]: time="2025-05-14T18:14:19.071816584Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:14:19.072497 containerd[1587]: time="2025-05-14T18:14:19.072466418Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 5.603670182s" May 14 18:14:19.072558 containerd[1587]: time="2025-05-14T18:14:19.072498087Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" May 14 18:14:19.081112 containerd[1587]: time="2025-05-14T18:14:19.080978791Z" level=info msg="CreateContainer within sandbox \"25ace2c583f4a6a874f6705a8fc911f3270b30893a19fdf6490759c7058ae053\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 14 18:14:19.093955 containerd[1587]: time="2025-05-14T18:14:19.093907680Z" level=info msg="Container 
1259399140a00d9ba01f0670c2310a01cbc5d52ce41bb5c927ef582af33914ce: CDI devices from CRI Config.CDIDevices: []" May 14 18:14:19.105475 containerd[1587]: time="2025-05-14T18:14:19.105443966Z" level=info msg="CreateContainer within sandbox \"25ace2c583f4a6a874f6705a8fc911f3270b30893a19fdf6490759c7058ae053\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"1259399140a00d9ba01f0670c2310a01cbc5d52ce41bb5c927ef582af33914ce\"" May 14 18:14:19.106142 containerd[1587]: time="2025-05-14T18:14:19.105908641Z" level=info msg="StartContainer for \"1259399140a00d9ba01f0670c2310a01cbc5d52ce41bb5c927ef582af33914ce\"" May 14 18:14:19.107239 containerd[1587]: time="2025-05-14T18:14:19.107211213Z" level=info msg="connecting to shim 1259399140a00d9ba01f0670c2310a01cbc5d52ce41bb5c927ef582af33914ce" address="unix:///run/containerd/s/c91550ece693f4b6a1f12ed667ee2715a7ab4b17e63a41045e45784b3cd21629" protocol=ttrpc version=3 May 14 18:14:19.134286 systemd[1]: Started cri-containerd-1259399140a00d9ba01f0670c2310a01cbc5d52ce41bb5c927ef582af33914ce.scope - libcontainer container 1259399140a00d9ba01f0670c2310a01cbc5d52ce41bb5c927ef582af33914ce. May 14 18:14:19.199554 containerd[1587]: time="2025-05-14T18:14:19.199459949Z" level=info msg="StartContainer for \"1259399140a00d9ba01f0670c2310a01cbc5d52ce41bb5c927ef582af33914ce\" returns successfully" May 14 18:14:19.257691 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 14 18:14:19.257838 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
May 14 18:14:19.617616 kubelet[2710]: I0514 18:14:19.617531 2710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-xljdr" podStartSLOduration=1.980927442 podStartE2EDuration="19.617512135s" podCreationTimestamp="2025-05-14 18:14:00 +0000 UTC" firstStartedPulling="2025-05-14 18:14:01.436787469 +0000 UTC m=+12.139880755" lastFinishedPulling="2025-05-14 18:14:19.073372162 +0000 UTC m=+29.776465448" observedRunningTime="2025-05-14 18:14:19.615037165 +0000 UTC m=+30.318130482" watchObservedRunningTime="2025-05-14 18:14:19.617512135 +0000 UTC m=+30.320605421" May 14 18:14:19.789990 systemd[1]: Started sshd@7-10.0.0.145:22-10.0.0.1:43954.service - OpenSSH per-connection server daemon (10.0.0.1:43954). May 14 18:14:19.843980 sshd[3735]: Accepted publickey for core from 10.0.0.1 port 43954 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM May 14 18:14:19.846236 sshd-session[3735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:14:19.851833 systemd-logind[1574]: New session 8 of user core. May 14 18:14:19.856344 systemd[1]: Started session-8.scope - Session 8 of User core. May 14 18:14:20.021178 sshd[3739]: Connection closed by 10.0.0.1 port 43954 May 14 18:14:20.021446 sshd-session[3735]: pam_unix(sshd:session): session closed for user core May 14 18:14:20.026343 systemd[1]: sshd@7-10.0.0.145:22-10.0.0.1:43954.service: Deactivated successfully. May 14 18:14:20.028805 systemd[1]: session-8.scope: Deactivated successfully. May 14 18:14:20.029768 systemd-logind[1574]: Session 8 logged out. Waiting for processes to exit. May 14 18:14:20.031526 systemd-logind[1574]: Removed session 8. 
May 14 18:14:20.484194 kubelet[2710]: I0514 18:14:20.484156 2710 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 18:14:21.725221 systemd-networkd[1518]: vxlan.calico: Link UP May 14 18:14:21.725230 systemd-networkd[1518]: vxlan.calico: Gained carrier May 14 18:14:22.099188 kubelet[2710]: I0514 18:14:22.099047 2710 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 18:14:22.213633 containerd[1587]: time="2025-05-14T18:14:22.213578167Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1259399140a00d9ba01f0670c2310a01cbc5d52ce41bb5c927ef582af33914ce\" id:\"63edb8c955b414cf775ad87ac77e362d4e85a5c1384041633d7796e56f245867\" pid:3974 exit_status:1 exited_at:{seconds:1747246462 nanos:205362790}" May 14 18:14:22.317810 containerd[1587]: time="2025-05-14T18:14:22.317745970Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1259399140a00d9ba01f0670c2310a01cbc5d52ce41bb5c927ef582af33914ce\" id:\"f98f3ac0fb79a551b7416ab38efa8971fe7b65a0927979df516c3f04ab61ef82\" pid:3997 exit_status:1 exited_at:{seconds:1747246462 nanos:317453258}" May 14 18:14:23.379258 containerd[1587]: time="2025-05-14T18:14:23.379203803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6689cfc69-jvqbr,Uid:33679cdf-7371-433c-839a-afcd75293178,Namespace:calico-system,Attempt:0,}" May 14 18:14:23.544244 systemd-networkd[1518]: vxlan.calico: Gained IPv6LL May 14 18:14:23.830653 systemd-networkd[1518]: cali97984892c29: Link UP May 14 18:14:23.831642 systemd-networkd[1518]: cali97984892c29: Gained carrier May 14 18:14:23.845339 containerd[1587]: 2025-05-14 18:14:23.712 [INFO][4013] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6689cfc69--jvqbr-eth0 calico-kube-controllers-6689cfc69- calico-system 33679cdf-7371-433c-839a-afcd75293178 675 0 2025-05-14 18:14:00 +0000 UTC 
map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6689cfc69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6689cfc69-jvqbr eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali97984892c29 [] []}} ContainerID="74ee9d33dc7a577c88ff6b472dfe7ad55205924887ed2d099349c17aa094bbbd" Namespace="calico-system" Pod="calico-kube-controllers-6689cfc69-jvqbr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6689cfc69--jvqbr-" May 14 18:14:23.845339 containerd[1587]: 2025-05-14 18:14:23.712 [INFO][4013] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="74ee9d33dc7a577c88ff6b472dfe7ad55205924887ed2d099349c17aa094bbbd" Namespace="calico-system" Pod="calico-kube-controllers-6689cfc69-jvqbr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6689cfc69--jvqbr-eth0" May 14 18:14:23.845339 containerd[1587]: 2025-05-14 18:14:23.793 [INFO][4028] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="74ee9d33dc7a577c88ff6b472dfe7ad55205924887ed2d099349c17aa094bbbd" HandleID="k8s-pod-network.74ee9d33dc7a577c88ff6b472dfe7ad55205924887ed2d099349c17aa094bbbd" Workload="localhost-k8s-calico--kube--controllers--6689cfc69--jvqbr-eth0" May 14 18:14:23.845523 containerd[1587]: 2025-05-14 18:14:23.801 [INFO][4028] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="74ee9d33dc7a577c88ff6b472dfe7ad55205924887ed2d099349c17aa094bbbd" HandleID="k8s-pod-network.74ee9d33dc7a577c88ff6b472dfe7ad55205924887ed2d099349c17aa094bbbd" Workload="localhost-k8s-calico--kube--controllers--6689cfc69--jvqbr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000240120), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6689cfc69-jvqbr", 
"timestamp":"2025-05-14 18:14:23.792992693 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 18:14:23.845523 containerd[1587]: 2025-05-14 18:14:23.801 [INFO][4028] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 18:14:23.845523 containerd[1587]: 2025-05-14 18:14:23.801 [INFO][4028] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 14 18:14:23.845523 containerd[1587]: 2025-05-14 18:14:23.801 [INFO][4028] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 18:14:23.845523 containerd[1587]: 2025-05-14 18:14:23.803 [INFO][4028] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.74ee9d33dc7a577c88ff6b472dfe7ad55205924887ed2d099349c17aa094bbbd" host="localhost" May 14 18:14:23.845523 containerd[1587]: 2025-05-14 18:14:23.808 [INFO][4028] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 18:14:23.845523 containerd[1587]: 2025-05-14 18:14:23.811 [INFO][4028] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 18:14:23.845523 containerd[1587]: 2025-05-14 18:14:23.813 [INFO][4028] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 18:14:23.845523 containerd[1587]: 2025-05-14 18:14:23.816 [INFO][4028] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 18:14:23.845523 containerd[1587]: 2025-05-14 18:14:23.816 [INFO][4028] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.74ee9d33dc7a577c88ff6b472dfe7ad55205924887ed2d099349c17aa094bbbd" host="localhost" May 14 18:14:23.845800 containerd[1587]: 2025-05-14 18:14:23.817 [INFO][4028] ipam/ipam.go 1685: Creating new handle: 
k8s-pod-network.74ee9d33dc7a577c88ff6b472dfe7ad55205924887ed2d099349c17aa094bbbd May 14 18:14:23.845800 containerd[1587]: 2025-05-14 18:14:23.820 [INFO][4028] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.74ee9d33dc7a577c88ff6b472dfe7ad55205924887ed2d099349c17aa094bbbd" host="localhost" May 14 18:14:23.845800 containerd[1587]: 2025-05-14 18:14:23.825 [INFO][4028] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.74ee9d33dc7a577c88ff6b472dfe7ad55205924887ed2d099349c17aa094bbbd" host="localhost" May 14 18:14:23.845800 containerd[1587]: 2025-05-14 18:14:23.825 [INFO][4028] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.74ee9d33dc7a577c88ff6b472dfe7ad55205924887ed2d099349c17aa094bbbd" host="localhost" May 14 18:14:23.845800 containerd[1587]: 2025-05-14 18:14:23.825 [INFO][4028] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 14 18:14:23.845800 containerd[1587]: 2025-05-14 18:14:23.825 [INFO][4028] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="74ee9d33dc7a577c88ff6b472dfe7ad55205924887ed2d099349c17aa094bbbd" HandleID="k8s-pod-network.74ee9d33dc7a577c88ff6b472dfe7ad55205924887ed2d099349c17aa094bbbd" Workload="localhost-k8s-calico--kube--controllers--6689cfc69--jvqbr-eth0" May 14 18:14:23.845921 containerd[1587]: 2025-05-14 18:14:23.828 [INFO][4013] cni-plugin/k8s.go 386: Populated endpoint ContainerID="74ee9d33dc7a577c88ff6b472dfe7ad55205924887ed2d099349c17aa094bbbd" Namespace="calico-system" Pod="calico-kube-controllers-6689cfc69-jvqbr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6689cfc69--jvqbr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6689cfc69--jvqbr-eth0", GenerateName:"calico-kube-controllers-6689cfc69-", Namespace:"calico-system", SelfLink:"", UID:"33679cdf-7371-433c-839a-afcd75293178", ResourceVersion:"675", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 14, 0, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6689cfc69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6689cfc69-jvqbr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"},
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali97984892c29", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:14:23.845981 containerd[1587]: 2025-05-14 18:14:23.828 [INFO][4013] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="74ee9d33dc7a577c88ff6b472dfe7ad55205924887ed2d099349c17aa094bbbd" Namespace="calico-system" Pod="calico-kube-controllers-6689cfc69-jvqbr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6689cfc69--jvqbr-eth0" May 14 18:14:23.845981 containerd[1587]: 2025-05-14 18:14:23.828 [INFO][4013] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali97984892c29 ContainerID="74ee9d33dc7a577c88ff6b472dfe7ad55205924887ed2d099349c17aa094bbbd" Namespace="calico-system" Pod="calico-kube-controllers-6689cfc69-jvqbr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6689cfc69--jvqbr-eth0" May 14 18:14:23.845981 containerd[1587]: 2025-05-14 18:14:23.831 [INFO][4013] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="74ee9d33dc7a577c88ff6b472dfe7ad55205924887ed2d099349c17aa094bbbd" Namespace="calico-system" Pod="calico-kube-controllers-6689cfc69-jvqbr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6689cfc69--jvqbr-eth0" May 14 18:14:23.846044 containerd[1587]: 2025-05-14 18:14:23.832 [INFO][4013] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="74ee9d33dc7a577c88ff6b472dfe7ad55205924887ed2d099349c17aa094bbbd" Namespace="calico-system" Pod="calico-kube-controllers-6689cfc69-jvqbr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6689cfc69--jvqbr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6689cfc69--jvqbr-eth0", GenerateName:"calico-kube-controllers-6689cfc69-", Namespace:"calico-system", SelfLink:"", UID:"33679cdf-7371-433c-839a-afcd75293178", ResourceVersion:"675", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 14, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6689cfc69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"74ee9d33dc7a577c88ff6b472dfe7ad55205924887ed2d099349c17aa094bbbd", Pod:"calico-kube-controllers-6689cfc69-jvqbr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali97984892c29", MAC:"6a:e6:e2:a0:fe:95", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:14:23.846114 containerd[1587]: 2025-05-14 18:14:23.840 [INFO][4013] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="74ee9d33dc7a577c88ff6b472dfe7ad55205924887ed2d099349c17aa094bbbd" Namespace="calico-system" Pod="calico-kube-controllers-6689cfc69-jvqbr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6689cfc69--jvqbr-eth0" May 14 18:14:23.947963 containerd[1587]: time="2025-05-14T18:14:23.947897885Z" level=info msg="connecting to shim 74ee9d33dc7a577c88ff6b472dfe7ad55205924887ed2d099349c17aa094bbbd" 
address="unix:///run/containerd/s/98174d91da2c9099adf7d3b1bd77287f2e0f3ea5f8aaae88342ae34a3976c92b" namespace=k8s.io protocol=ttrpc version=3 May 14 18:14:23.973230 systemd[1]: Started cri-containerd-74ee9d33dc7a577c88ff6b472dfe7ad55205924887ed2d099349c17aa094bbbd.scope - libcontainer container 74ee9d33dc7a577c88ff6b472dfe7ad55205924887ed2d099349c17aa094bbbd. May 14 18:14:23.985247 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 18:14:24.013251 containerd[1587]: time="2025-05-14T18:14:24.013204781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6689cfc69-jvqbr,Uid:33679cdf-7371-433c-839a-afcd75293178,Namespace:calico-system,Attempt:0,} returns sandbox id \"74ee9d33dc7a577c88ff6b472dfe7ad55205924887ed2d099349c17aa094bbbd\"" May 14 18:14:24.014849 containerd[1587]: time="2025-05-14T18:14:24.014820770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 14 18:14:25.038287 systemd[1]: Started sshd@8-10.0.0.145:22-10.0.0.1:43964.service - OpenSSH per-connection server daemon (10.0.0.1:43964). May 14 18:14:25.101793 sshd[4096]: Accepted publickey for core from 10.0.0.1 port 43964 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM May 14 18:14:25.103822 sshd-session[4096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:14:25.131581 systemd-logind[1574]: New session 9 of user core. May 14 18:14:25.146218 systemd[1]: Started session-9.scope - Session 9 of User core. May 14 18:14:25.370451 sshd[4098]: Connection closed by 10.0.0.1 port 43964 May 14 18:14:25.370686 sshd-session[4096]: pam_unix(sshd:session): session closed for user core May 14 18:14:25.374972 systemd[1]: sshd@8-10.0.0.145:22-10.0.0.1:43964.service: Deactivated successfully. May 14 18:14:25.376792 systemd[1]: session-9.scope: Deactivated successfully. 
May 14 18:14:25.377541 systemd-logind[1574]: Session 9 logged out. Waiting for processes to exit. May 14 18:14:25.378725 systemd-logind[1574]: Removed session 9. May 14 18:14:25.379506 containerd[1587]: time="2025-05-14T18:14:25.379454044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l7b4g,Uid:d1e02241-9d67-43e1-bd15-f6a1549c9972,Namespace:calico-system,Attempt:0,}" May 14 18:14:25.380082 containerd[1587]: time="2025-05-14T18:14:25.380033012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-t68rd,Uid:95c6804d-85db-4594-9866-49d32499413f,Namespace:kube-system,Attempt:0,}" May 14 18:14:25.479914 systemd-networkd[1518]: calie1907817da2: Link UP May 14 18:14:25.480653 systemd-networkd[1518]: calie1907817da2: Gained carrier May 14 18:14:25.491439 containerd[1587]: 2025-05-14 18:14:25.419 [INFO][4112] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--l7b4g-eth0 csi-node-driver- calico-system d1e02241-9d67-43e1-bd15-f6a1549c9972 582 0 2025-05-14 18:14:00 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5bcd8f69 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-l7b4g eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie1907817da2 [] []}} ContainerID="2b6b55218a9f41ee3378d3037cc9c5230d8898eb56db9b93cda93052f42ec0da" Namespace="calico-system" Pod="csi-node-driver-l7b4g" WorkloadEndpoint="localhost-k8s-csi--node--driver--l7b4g-" May 14 18:14:25.491439 containerd[1587]: 2025-05-14 18:14:25.419 [INFO][4112] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2b6b55218a9f41ee3378d3037cc9c5230d8898eb56db9b93cda93052f42ec0da" Namespace="calico-system" 
Pod="csi-node-driver-l7b4g" WorkloadEndpoint="localhost-k8s-csi--node--driver--l7b4g-eth0" May 14 18:14:25.491439 containerd[1587]: 2025-05-14 18:14:25.445 [INFO][4141] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2b6b55218a9f41ee3378d3037cc9c5230d8898eb56db9b93cda93052f42ec0da" HandleID="k8s-pod-network.2b6b55218a9f41ee3378d3037cc9c5230d8898eb56db9b93cda93052f42ec0da" Workload="localhost-k8s-csi--node--driver--l7b4g-eth0" May 14 18:14:25.491647 containerd[1587]: 2025-05-14 18:14:25.452 [INFO][4141] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2b6b55218a9f41ee3378d3037cc9c5230d8898eb56db9b93cda93052f42ec0da" HandleID="k8s-pod-network.2b6b55218a9f41ee3378d3037cc9c5230d8898eb56db9b93cda93052f42ec0da" Workload="localhost-k8s-csi--node--driver--l7b4g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00030a9a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-l7b4g", "timestamp":"2025-05-14 18:14:25.445103841 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 18:14:25.491647 containerd[1587]: 2025-05-14 18:14:25.452 [INFO][4141] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 18:14:25.491647 containerd[1587]: 2025-05-14 18:14:25.452 [INFO][4141] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 18:14:25.491647 containerd[1587]: 2025-05-14 18:14:25.452 [INFO][4141] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 18:14:25.491647 containerd[1587]: 2025-05-14 18:14:25.454 [INFO][4141] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2b6b55218a9f41ee3378d3037cc9c5230d8898eb56db9b93cda93052f42ec0da" host="localhost" May 14 18:14:25.491647 containerd[1587]: 2025-05-14 18:14:25.457 [INFO][4141] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 18:14:25.491647 containerd[1587]: 2025-05-14 18:14:25.461 [INFO][4141] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 18:14:25.491647 containerd[1587]: 2025-05-14 18:14:25.462 [INFO][4141] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 18:14:25.491647 containerd[1587]: 2025-05-14 18:14:25.464 [INFO][4141] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 18:14:25.491647 containerd[1587]: 2025-05-14 18:14:25.464 [INFO][4141] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2b6b55218a9f41ee3378d3037cc9c5230d8898eb56db9b93cda93052f42ec0da" host="localhost" May 14 18:14:25.491937 containerd[1587]: 2025-05-14 18:14:25.465 [INFO][4141] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2b6b55218a9f41ee3378d3037cc9c5230d8898eb56db9b93cda93052f42ec0da May 14 18:14:25.491937 containerd[1587]: 2025-05-14 18:14:25.469 [INFO][4141] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2b6b55218a9f41ee3378d3037cc9c5230d8898eb56db9b93cda93052f42ec0da" host="localhost" May 14 18:14:25.491937 containerd[1587]: 2025-05-14 18:14:25.473 [INFO][4141] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.2b6b55218a9f41ee3378d3037cc9c5230d8898eb56db9b93cda93052f42ec0da" host="localhost" May 14 18:14:25.491937 containerd[1587]: 2025-05-14 18:14:25.473 [INFO][4141] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.2b6b55218a9f41ee3378d3037cc9c5230d8898eb56db9b93cda93052f42ec0da" host="localhost" May 14 18:14:25.491937 containerd[1587]: 2025-05-14 18:14:25.473 [INFO][4141] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 18:14:25.491937 containerd[1587]: 2025-05-14 18:14:25.473 [INFO][4141] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="2b6b55218a9f41ee3378d3037cc9c5230d8898eb56db9b93cda93052f42ec0da" HandleID="k8s-pod-network.2b6b55218a9f41ee3378d3037cc9c5230d8898eb56db9b93cda93052f42ec0da" Workload="localhost-k8s-csi--node--driver--l7b4g-eth0" May 14 18:14:25.492064 containerd[1587]: 2025-05-14 18:14:25.476 [INFO][4112] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2b6b55218a9f41ee3378d3037cc9c5230d8898eb56db9b93cda93052f42ec0da" Namespace="calico-system" Pod="csi-node-driver-l7b4g" WorkloadEndpoint="localhost-k8s-csi--node--driver--l7b4g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--l7b4g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d1e02241-9d67-43e1-bd15-f6a1549c9972", ResourceVersion:"582", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 14, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-l7b4g", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie1907817da2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:14:25.492064 containerd[1587]: 2025-05-14 18:14:25.476 [INFO][4112] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="2b6b55218a9f41ee3378d3037cc9c5230d8898eb56db9b93cda93052f42ec0da" Namespace="calico-system" Pod="csi-node-driver-l7b4g" WorkloadEndpoint="localhost-k8s-csi--node--driver--l7b4g-eth0" May 14 18:14:25.492160 containerd[1587]: 2025-05-14 18:14:25.476 [INFO][4112] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie1907817da2 ContainerID="2b6b55218a9f41ee3378d3037cc9c5230d8898eb56db9b93cda93052f42ec0da" Namespace="calico-system" Pod="csi-node-driver-l7b4g" WorkloadEndpoint="localhost-k8s-csi--node--driver--l7b4g-eth0" May 14 18:14:25.492160 containerd[1587]: 2025-05-14 18:14:25.480 [INFO][4112] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2b6b55218a9f41ee3378d3037cc9c5230d8898eb56db9b93cda93052f42ec0da" Namespace="calico-system" Pod="csi-node-driver-l7b4g" WorkloadEndpoint="localhost-k8s-csi--node--driver--l7b4g-eth0" May 14 18:14:25.492203 containerd[1587]: 2025-05-14 18:14:25.480 [INFO][4112] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2b6b55218a9f41ee3378d3037cc9c5230d8898eb56db9b93cda93052f42ec0da" Namespace="calico-system" 
Pod="csi-node-driver-l7b4g" WorkloadEndpoint="localhost-k8s-csi--node--driver--l7b4g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--l7b4g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d1e02241-9d67-43e1-bd15-f6a1549c9972", ResourceVersion:"582", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 14, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2b6b55218a9f41ee3378d3037cc9c5230d8898eb56db9b93cda93052f42ec0da", Pod:"csi-node-driver-l7b4g", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie1907817da2", MAC:"d6:5c:09:a6:09:33", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:14:25.492252 containerd[1587]: 2025-05-14 18:14:25.488 [INFO][4112] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2b6b55218a9f41ee3378d3037cc9c5230d8898eb56db9b93cda93052f42ec0da" Namespace="calico-system" Pod="csi-node-driver-l7b4g" WorkloadEndpoint="localhost-k8s-csi--node--driver--l7b4g-eth0" May 14 18:14:25.528047 containerd[1587]: 
time="2025-05-14T18:14:25.527981956Z" level=info msg="connecting to shim 2b6b55218a9f41ee3378d3037cc9c5230d8898eb56db9b93cda93052f42ec0da" address="unix:///run/containerd/s/2715b5a81ea0c8bcceab0757af0a9199f87d5a02d656e843392c1de57c7e715d" namespace=k8s.io protocol=ttrpc version=3 May 14 18:14:25.554238 systemd[1]: Started cri-containerd-2b6b55218a9f41ee3378d3037cc9c5230d8898eb56db9b93cda93052f42ec0da.scope - libcontainer container 2b6b55218a9f41ee3378d3037cc9c5230d8898eb56db9b93cda93052f42ec0da. May 14 18:14:25.568150 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 18:14:25.586637 containerd[1587]: time="2025-05-14T18:14:25.586596492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l7b4g,Uid:d1e02241-9d67-43e1-bd15-f6a1549c9972,Namespace:calico-system,Attempt:0,} returns sandbox id \"2b6b55218a9f41ee3378d3037cc9c5230d8898eb56db9b93cda93052f42ec0da\"" May 14 18:14:25.590174 systemd-networkd[1518]: calif5f1ae1148e: Link UP May 14 18:14:25.590646 systemd-networkd[1518]: calif5f1ae1148e: Gained carrier May 14 18:14:25.604212 containerd[1587]: 2025-05-14 18:14:25.419 [INFO][4116] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--t68rd-eth0 coredns-6f6b679f8f- kube-system 95c6804d-85db-4594-9866-49d32499413f 671 0 2025-05-14 18:13:55 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-t68rd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif5f1ae1148e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="cc35916ac68225cfe992060308c69ffa1c9f8fbfcef7cb72345c74a11faaa376" Namespace="kube-system" Pod="coredns-6f6b679f8f-t68rd" 
WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--t68rd-" May 14 18:14:25.604212 containerd[1587]: 2025-05-14 18:14:25.419 [INFO][4116] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="cc35916ac68225cfe992060308c69ffa1c9f8fbfcef7cb72345c74a11faaa376" Namespace="kube-system" Pod="coredns-6f6b679f8f-t68rd" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--t68rd-eth0" May 14 18:14:25.604212 containerd[1587]: 2025-05-14 18:14:25.449 [INFO][4140] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cc35916ac68225cfe992060308c69ffa1c9f8fbfcef7cb72345c74a11faaa376" HandleID="k8s-pod-network.cc35916ac68225cfe992060308c69ffa1c9f8fbfcef7cb72345c74a11faaa376" Workload="localhost-k8s-coredns--6f6b679f8f--t68rd-eth0" May 14 18:14:25.604446 containerd[1587]: 2025-05-14 18:14:25.455 [INFO][4140] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cc35916ac68225cfe992060308c69ffa1c9f8fbfcef7cb72345c74a11faaa376" HandleID="k8s-pod-network.cc35916ac68225cfe992060308c69ffa1c9f8fbfcef7cb72345c74a11faaa376" Workload="localhost-k8s-coredns--6f6b679f8f--t68rd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ad250), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-t68rd", "timestamp":"2025-05-14 18:14:25.449551825 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 18:14:25.604446 containerd[1587]: 2025-05-14 18:14:25.455 [INFO][4140] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 18:14:25.604446 containerd[1587]: 2025-05-14 18:14:25.473 [INFO][4140] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 18:14:25.604446 containerd[1587]: 2025-05-14 18:14:25.473 [INFO][4140] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 18:14:25.604446 containerd[1587]: 2025-05-14 18:14:25.557 [INFO][4140] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.cc35916ac68225cfe992060308c69ffa1c9f8fbfcef7cb72345c74a11faaa376" host="localhost" May 14 18:14:25.604446 containerd[1587]: 2025-05-14 18:14:25.562 [INFO][4140] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 18:14:25.604446 containerd[1587]: 2025-05-14 18:14:25.567 [INFO][4140] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 18:14:25.604446 containerd[1587]: 2025-05-14 18:14:25.568 [INFO][4140] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 18:14:25.604446 containerd[1587]: 2025-05-14 18:14:25.570 [INFO][4140] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 18:14:25.604446 containerd[1587]: 2025-05-14 18:14:25.570 [INFO][4140] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cc35916ac68225cfe992060308c69ffa1c9f8fbfcef7cb72345c74a11faaa376" host="localhost" May 14 18:14:25.604745 containerd[1587]: 2025-05-14 18:14:25.572 [INFO][4140] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.cc35916ac68225cfe992060308c69ffa1c9f8fbfcef7cb72345c74a11faaa376 May 14 18:14:25.604745 containerd[1587]: 2025-05-14 18:14:25.577 [INFO][4140] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cc35916ac68225cfe992060308c69ffa1c9f8fbfcef7cb72345c74a11faaa376" host="localhost" May 14 18:14:25.604745 containerd[1587]: 2025-05-14 18:14:25.583 [INFO][4140] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.cc35916ac68225cfe992060308c69ffa1c9f8fbfcef7cb72345c74a11faaa376" host="localhost" May 14 18:14:25.604745 containerd[1587]: 2025-05-14 18:14:25.583 [INFO][4140] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.cc35916ac68225cfe992060308c69ffa1c9f8fbfcef7cb72345c74a11faaa376" host="localhost" May 14 18:14:25.604745 containerd[1587]: 2025-05-14 18:14:25.583 [INFO][4140] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 18:14:25.604745 containerd[1587]: 2025-05-14 18:14:25.583 [INFO][4140] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="cc35916ac68225cfe992060308c69ffa1c9f8fbfcef7cb72345c74a11faaa376" HandleID="k8s-pod-network.cc35916ac68225cfe992060308c69ffa1c9f8fbfcef7cb72345c74a11faaa376" Workload="localhost-k8s-coredns--6f6b679f8f--t68rd-eth0" May 14 18:14:25.604996 containerd[1587]: 2025-05-14 18:14:25.587 [INFO][4116] cni-plugin/k8s.go 386: Populated endpoint ContainerID="cc35916ac68225cfe992060308c69ffa1c9f8fbfcef7cb72345c74a11faaa376" Namespace="kube-system" Pod="coredns-6f6b679f8f-t68rd" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--t68rd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--t68rd-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"95c6804d-85db-4594-9866-49d32499413f", ResourceVersion:"671", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 13, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-t68rd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif5f1ae1148e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:14:25.605106 containerd[1587]: 2025-05-14 18:14:25.587 [INFO][4116] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="cc35916ac68225cfe992060308c69ffa1c9f8fbfcef7cb72345c74a11faaa376" Namespace="kube-system" Pod="coredns-6f6b679f8f-t68rd" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--t68rd-eth0" May 14 18:14:25.605106 containerd[1587]: 2025-05-14 18:14:25.587 [INFO][4116] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif5f1ae1148e ContainerID="cc35916ac68225cfe992060308c69ffa1c9f8fbfcef7cb72345c74a11faaa376" Namespace="kube-system" Pod="coredns-6f6b679f8f-t68rd" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--t68rd-eth0" May 14 18:14:25.605106 containerd[1587]: 2025-05-14 18:14:25.589 [INFO][4116] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cc35916ac68225cfe992060308c69ffa1c9f8fbfcef7cb72345c74a11faaa376" Namespace="kube-system" Pod="coredns-6f6b679f8f-t68rd" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--t68rd-eth0" May 14 
18:14:25.605212 containerd[1587]: 2025-05-14 18:14:25.590 [INFO][4116] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="cc35916ac68225cfe992060308c69ffa1c9f8fbfcef7cb72345c74a11faaa376" Namespace="kube-system" Pod="coredns-6f6b679f8f-t68rd" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--t68rd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--t68rd-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"95c6804d-85db-4594-9866-49d32499413f", ResourceVersion:"671", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 13, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cc35916ac68225cfe992060308c69ffa1c9f8fbfcef7cb72345c74a11faaa376", Pod:"coredns-6f6b679f8f-t68rd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif5f1ae1148e", MAC:"16:5e:19:d6:ce:37", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:14:25.605212 containerd[1587]: 2025-05-14 18:14:25.600 [INFO][4116] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="cc35916ac68225cfe992060308c69ffa1c9f8fbfcef7cb72345c74a11faaa376" Namespace="kube-system" Pod="coredns-6f6b679f8f-t68rd" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--t68rd-eth0" May 14 18:14:25.631038 containerd[1587]: time="2025-05-14T18:14:25.630892072Z" level=info msg="connecting to shim cc35916ac68225cfe992060308c69ffa1c9f8fbfcef7cb72345c74a11faaa376" address="unix:///run/containerd/s/eb4ccb1f15dc88718154ecc942e090982a4db529edb010141feea3290984fe3d" namespace=k8s.io protocol=ttrpc version=3 May 14 18:14:25.661231 systemd[1]: Started cri-containerd-cc35916ac68225cfe992060308c69ffa1c9f8fbfcef7cb72345c74a11faaa376.scope - libcontainer container cc35916ac68225cfe992060308c69ffa1c9f8fbfcef7cb72345c74a11faaa376. 
May 14 18:14:25.673184 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 14 18:14:25.713865 containerd[1587]: time="2025-05-14T18:14:25.713806235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-t68rd,Uid:95c6804d-85db-4594-9866-49d32499413f,Namespace:kube-system,Attempt:0,} returns sandbox id \"cc35916ac68225cfe992060308c69ffa1c9f8fbfcef7cb72345c74a11faaa376\""
May 14 18:14:25.722750 containerd[1587]: time="2025-05-14T18:14:25.722649315Z" level=info msg="CreateContainer within sandbox \"cc35916ac68225cfe992060308c69ffa1c9f8fbfcef7cb72345c74a11faaa376\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 14 18:14:25.765837 containerd[1587]: time="2025-05-14T18:14:25.765658555Z" level=info msg="Container e9dbbbf649e31d63f81dee3023f2bb3d41085ae41485aab71b0332e5269c6df8: CDI devices from CRI Config.CDIDevices: []"
May 14 18:14:25.777530 containerd[1587]: time="2025-05-14T18:14:25.777364278Z" level=info msg="CreateContainer within sandbox \"cc35916ac68225cfe992060308c69ffa1c9f8fbfcef7cb72345c74a11faaa376\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e9dbbbf649e31d63f81dee3023f2bb3d41085ae41485aab71b0332e5269c6df8\""
May 14 18:14:25.779479 containerd[1587]: time="2025-05-14T18:14:25.778556420Z" level=info msg="StartContainer for \"e9dbbbf649e31d63f81dee3023f2bb3d41085ae41485aab71b0332e5269c6df8\""
May 14 18:14:25.781404 containerd[1587]: time="2025-05-14T18:14:25.780291252Z" level=info msg="connecting to shim e9dbbbf649e31d63f81dee3023f2bb3d41085ae41485aab71b0332e5269c6df8" address="unix:///run/containerd/s/eb4ccb1f15dc88718154ecc942e090982a4db529edb010141feea3290984fe3d" protocol=ttrpc version=3
May 14 18:14:25.784401 systemd-networkd[1518]: cali97984892c29: Gained IPv6LL
May 14 18:14:25.803944 systemd[1]: Started cri-containerd-e9dbbbf649e31d63f81dee3023f2bb3d41085ae41485aab71b0332e5269c6df8.scope - libcontainer container e9dbbbf649e31d63f81dee3023f2bb3d41085ae41485aab71b0332e5269c6df8.
May 14 18:14:25.944159 containerd[1587]: time="2025-05-14T18:14:25.944042333Z" level=info msg="StartContainer for \"e9dbbbf649e31d63f81dee3023f2bb3d41085ae41485aab71b0332e5269c6df8\" returns successfully"
May 14 18:14:26.265577 containerd[1587]: time="2025-05-14T18:14:26.265453300Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:14:26.266346 containerd[1587]: time="2025-05-14T18:14:26.266320310Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=34789138"
May 14 18:14:26.267524 containerd[1587]: time="2025-05-14T18:14:26.267460774Z" level=info msg="ImageCreate event name:\"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:14:26.269239 containerd[1587]: time="2025-05-14T18:14:26.269200856Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 18:14:26.269752 containerd[1587]: time="2025-05-14T18:14:26.269715424Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"36281728\" in 2.254857414s"
May 14 18:14:26.269752 containerd[1587]: time="2025-05-14T18:14:26.269751351Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\""
May 14 18:14:26.270656 containerd[1587]: time="2025-05-14T18:14:26.270623802Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\""
May 14 18:14:26.277585 containerd[1587]: time="2025-05-14T18:14:26.277552239Z" level=info msg="CreateContainer within sandbox \"74ee9d33dc7a577c88ff6b472dfe7ad55205924887ed2d099349c17aa094bbbd\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
May 14 18:14:26.286954 containerd[1587]: time="2025-05-14T18:14:26.286889476Z" level=info msg="Container b29d4b64341a3055af649e6faaba2f56d6d4aedd648acc6dc55cdf4495c540b5: CDI devices from CRI Config.CDIDevices: []"
May 14 18:14:26.294648 containerd[1587]: time="2025-05-14T18:14:26.294614902Z" level=info msg="CreateContainer within sandbox \"74ee9d33dc7a577c88ff6b472dfe7ad55205924887ed2d099349c17aa094bbbd\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"b29d4b64341a3055af649e6faaba2f56d6d4aedd648acc6dc55cdf4495c540b5\""
May 14 18:14:26.295205 containerd[1587]: time="2025-05-14T18:14:26.295158884Z" level=info msg="StartContainer for \"b29d4b64341a3055af649e6faaba2f56d6d4aedd648acc6dc55cdf4495c540b5\""
May 14 18:14:26.296481 containerd[1587]: time="2025-05-14T18:14:26.296453058Z" level=info msg="connecting to shim b29d4b64341a3055af649e6faaba2f56d6d4aedd648acc6dc55cdf4495c540b5" address="unix:///run/containerd/s/98174d91da2c9099adf7d3b1bd77287f2e0f3ea5f8aaae88342ae34a3976c92b" protocol=ttrpc version=3
May 14 18:14:26.319225 systemd[1]: Started cri-containerd-b29d4b64341a3055af649e6faaba2f56d6d4aedd648acc6dc55cdf4495c540b5.scope - libcontainer container b29d4b64341a3055af649e6faaba2f56d6d4aedd648acc6dc55cdf4495c540b5.
May 14 18:14:26.372519 containerd[1587]: time="2025-05-14T18:14:26.372464831Z" level=info msg="StartContainer for \"b29d4b64341a3055af649e6faaba2f56d6d4aedd648acc6dc55cdf4495c540b5\" returns successfully"
May 14 18:14:26.379188 containerd[1587]: time="2025-05-14T18:14:26.379073658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-tfcbv,Uid:3271afb2-425a-4096-8efb-26949e13e024,Namespace:kube-system,Attempt:0,}"
May 14 18:14:26.534036 kubelet[2710]: I0514 18:14:26.533972 2710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6689cfc69-jvqbr" podStartSLOduration=24.27783944 podStartE2EDuration="26.533957263s" podCreationTimestamp="2025-05-14 18:14:00 +0000 UTC" firstStartedPulling="2025-05-14 18:14:24.014439774 +0000 UTC m=+34.717533060" lastFinishedPulling="2025-05-14 18:14:26.270557597 +0000 UTC m=+36.973650883" observedRunningTime="2025-05-14 18:14:26.533582538 +0000 UTC m=+37.236675834" watchObservedRunningTime="2025-05-14 18:14:26.533957263 +0000 UTC m=+37.237050549"
May 14 18:14:26.568724 containerd[1587]: time="2025-05-14T18:14:26.568368566Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b29d4b64341a3055af649e6faaba2f56d6d4aedd648acc6dc55cdf4495c540b5\" id:\"0508e6b1447711be382bd3257e0f99ac06d237b4a09b8c2105c6a929354392e6\" pid:4384 exit_status:1 exited_at:{seconds:1747246466 nanos:567068180}"
May 14 18:14:26.620870 systemd-networkd[1518]: cali08ef276e292: Link UP
May 14 18:14:26.621493 systemd-networkd[1518]: cali08ef276e292: Gained carrier
May 14 18:14:26.632536 kubelet[2710]: I0514 18:14:26.632460 2710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-t68rd" podStartSLOduration=31.632433135 podStartE2EDuration="31.632433135s" podCreationTimestamp="2025-05-14 18:13:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:14:26.550796575 +0000 UTC m=+37.253889861" watchObservedRunningTime="2025-05-14 18:14:26.632433135 +0000 UTC m=+37.335526431"
May 14 18:14:26.638240 containerd[1587]: 2025-05-14 18:14:26.472 [INFO][4359] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--tfcbv-eth0 coredns-6f6b679f8f- kube-system 3271afb2-425a-4096-8efb-26949e13e024 676 0 2025-05-14 18:13:55 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-tfcbv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali08ef276e292 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="abf593ceca9b2be20fcc4d75b770daad68c641ffc4a6b202884c252d1b3137cc" Namespace="kube-system" Pod="coredns-6f6b679f8f-tfcbv" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--tfcbv-"
May 14 18:14:26.638240 containerd[1587]: 2025-05-14 18:14:26.472 [INFO][4359] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="abf593ceca9b2be20fcc4d75b770daad68c641ffc4a6b202884c252d1b3137cc" Namespace="kube-system" Pod="coredns-6f6b679f8f-tfcbv" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--tfcbv-eth0"
May 14 18:14:26.638240 containerd[1587]: 2025-05-14 18:14:26.578 [INFO][4391] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="abf593ceca9b2be20fcc4d75b770daad68c641ffc4a6b202884c252d1b3137cc" HandleID="k8s-pod-network.abf593ceca9b2be20fcc4d75b770daad68c641ffc4a6b202884c252d1b3137cc" Workload="localhost-k8s-coredns--6f6b679f8f--tfcbv-eth0"
May 14 18:14:26.638240 containerd[1587]: 2025-05-14 18:14:26.587 [INFO][4391] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="abf593ceca9b2be20fcc4d75b770daad68c641ffc4a6b202884c252d1b3137cc" HandleID="k8s-pod-network.abf593ceca9b2be20fcc4d75b770daad68c641ffc4a6b202884c252d1b3137cc" Workload="localhost-k8s-coredns--6f6b679f8f--tfcbv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003e2030), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-tfcbv", "timestamp":"2025-05-14 18:14:26.578708657 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
May 14 18:14:26.638240 containerd[1587]: 2025-05-14 18:14:26.587 [INFO][4391] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 14 18:14:26.638240 containerd[1587]: 2025-05-14 18:14:26.587 [INFO][4391] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 14 18:14:26.638240 containerd[1587]: 2025-05-14 18:14:26.588 [INFO][4391] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
May 14 18:14:26.638240 containerd[1587]: 2025-05-14 18:14:26.589 [INFO][4391] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.abf593ceca9b2be20fcc4d75b770daad68c641ffc4a6b202884c252d1b3137cc" host="localhost"
May 14 18:14:26.638240 containerd[1587]: 2025-05-14 18:14:26.593 [INFO][4391] ipam/ipam.go 372: Looking up existing affinities for host host="localhost"
May 14 18:14:26.638240 containerd[1587]: 2025-05-14 18:14:26.597 [INFO][4391] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
May 14 18:14:26.638240 containerd[1587]: 2025-05-14 18:14:26.599 [INFO][4391] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
May 14 18:14:26.638240 containerd[1587]: 2025-05-14 18:14:26.602 [INFO][4391] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
May 14 18:14:26.638240 containerd[1587]: 2025-05-14 18:14:26.602 [INFO][4391] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.abf593ceca9b2be20fcc4d75b770daad68c641ffc4a6b202884c252d1b3137cc" host="localhost"
May 14 18:14:26.638240 containerd[1587]: 2025-05-14 18:14:26.604 [INFO][4391] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.abf593ceca9b2be20fcc4d75b770daad68c641ffc4a6b202884c252d1b3137cc
May 14 18:14:26.638240 containerd[1587]: 2025-05-14 18:14:26.608 [INFO][4391] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.abf593ceca9b2be20fcc4d75b770daad68c641ffc4a6b202884c252d1b3137cc" host="localhost"
May 14 18:14:26.638240 containerd[1587]: 2025-05-14 18:14:26.613 [INFO][4391] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.abf593ceca9b2be20fcc4d75b770daad68c641ffc4a6b202884c252d1b3137cc" host="localhost"
May 14 18:14:26.638240 containerd[1587]: 2025-05-14 18:14:26.614 [INFO][4391] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.abf593ceca9b2be20fcc4d75b770daad68c641ffc4a6b202884c252d1b3137cc" host="localhost"
May 14 18:14:26.638240 containerd[1587]: 2025-05-14 18:14:26.614 [INFO][4391] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 14 18:14:26.638240 containerd[1587]: 2025-05-14 18:14:26.614 [INFO][4391] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="abf593ceca9b2be20fcc4d75b770daad68c641ffc4a6b202884c252d1b3137cc" HandleID="k8s-pod-network.abf593ceca9b2be20fcc4d75b770daad68c641ffc4a6b202884c252d1b3137cc" Workload="localhost-k8s-coredns--6f6b679f8f--tfcbv-eth0"
May 14 18:14:26.638982 containerd[1587]: 2025-05-14 18:14:26.619 [INFO][4359] cni-plugin/k8s.go 386: Populated endpoint ContainerID="abf593ceca9b2be20fcc4d75b770daad68c641ffc4a6b202884c252d1b3137cc" Namespace="kube-system" Pod="coredns-6f6b679f8f-tfcbv" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--tfcbv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--tfcbv-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"3271afb2-425a-4096-8efb-26949e13e024", ResourceVersion:"676", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 13, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-tfcbv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali08ef276e292", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
May 14 18:14:26.638982 containerd[1587]: 2025-05-14 18:14:26.619 [INFO][4359] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="abf593ceca9b2be20fcc4d75b770daad68c641ffc4a6b202884c252d1b3137cc" Namespace="kube-system" Pod="coredns-6f6b679f8f-tfcbv" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--tfcbv-eth0"
May 14 18:14:26.638982 containerd[1587]: 2025-05-14 18:14:26.619 [INFO][4359] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali08ef276e292 ContainerID="abf593ceca9b2be20fcc4d75b770daad68c641ffc4a6b202884c252d1b3137cc" Namespace="kube-system" Pod="coredns-6f6b679f8f-tfcbv" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--tfcbv-eth0"
May 14 18:14:26.638982 containerd[1587]: 2025-05-14 18:14:26.621 [INFO][4359] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="abf593ceca9b2be20fcc4d75b770daad68c641ffc4a6b202884c252d1b3137cc" Namespace="kube-system" Pod="coredns-6f6b679f8f-tfcbv" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--tfcbv-eth0"
May 14 18:14:26.638982 containerd[1587]: 2025-05-14 18:14:26.622 [INFO][4359] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="abf593ceca9b2be20fcc4d75b770daad68c641ffc4a6b202884c252d1b3137cc" Namespace="kube-system" Pod="coredns-6f6b679f8f-tfcbv" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--tfcbv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--tfcbv-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"3271afb2-425a-4096-8efb-26949e13e024", ResourceVersion:"676", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 13, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"abf593ceca9b2be20fcc4d75b770daad68c641ffc4a6b202884c252d1b3137cc", Pod:"coredns-6f6b679f8f-tfcbv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali08ef276e292", MAC:"ea:05:64:9a:dc:ef", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
May 14 18:14:26.638982 containerd[1587]: 2025-05-14 18:14:26.632 [INFO][4359] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="abf593ceca9b2be20fcc4d75b770daad68c641ffc4a6b202884c252d1b3137cc" Namespace="kube-system" Pod="coredns-6f6b679f8f-tfcbv" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--tfcbv-eth0"
May 14 18:14:26.661558 containerd[1587]: time="2025-05-14T18:14:26.661508495Z" level=info msg="connecting to shim abf593ceca9b2be20fcc4d75b770daad68c641ffc4a6b202884c252d1b3137cc" address="unix:///run/containerd/s/9447161fade87ea7d75bdaeb101510207a7b737b7cb90c28d447481be4665d4d" namespace=k8s.io protocol=ttrpc version=3
May 14 18:14:26.691293 systemd[1]: Started cri-containerd-abf593ceca9b2be20fcc4d75b770daad68c641ffc4a6b202884c252d1b3137cc.scope - libcontainer container abf593ceca9b2be20fcc4d75b770daad68c641ffc4a6b202884c252d1b3137cc.
May 14 18:14:26.706558 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 14 18:14:26.740113 containerd[1587]: time="2025-05-14T18:14:26.740053902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-tfcbv,Uid:3271afb2-425a-4096-8efb-26949e13e024,Namespace:kube-system,Attempt:0,} returns sandbox id \"abf593ceca9b2be20fcc4d75b770daad68c641ffc4a6b202884c252d1b3137cc\""
May 14 18:14:26.742785 containerd[1587]: time="2025-05-14T18:14:26.742734662Z" level=info msg="CreateContainer within sandbox \"abf593ceca9b2be20fcc4d75b770daad68c641ffc4a6b202884c252d1b3137cc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 14 18:14:26.753475 containerd[1587]: time="2025-05-14T18:14:26.753184371Z" level=info msg="Container 5085fc351ec72ddf3e9fcbc38c87317755dc398649173d2d94c48d8ad0467f0f: CDI devices from CRI Config.CDIDevices: []"
May 14 18:14:26.763729 containerd[1587]: time="2025-05-14T18:14:26.763678993Z" level=info msg="CreateContainer within sandbox \"abf593ceca9b2be20fcc4d75b770daad68c641ffc4a6b202884c252d1b3137cc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5085fc351ec72ddf3e9fcbc38c87317755dc398649173d2d94c48d8ad0467f0f\""
May 14 18:14:26.764355 containerd[1587]: time="2025-05-14T18:14:26.764250378Z" level=info msg="StartContainer for \"5085fc351ec72ddf3e9fcbc38c87317755dc398649173d2d94c48d8ad0467f0f\""
May 14 18:14:26.765242 containerd[1587]: time="2025-05-14T18:14:26.765204903Z" level=info msg="connecting to shim 5085fc351ec72ddf3e9fcbc38c87317755dc398649173d2d94c48d8ad0467f0f" address="unix:///run/containerd/s/9447161fade87ea7d75bdaeb101510207a7b737b7cb90c28d447481be4665d4d" protocol=ttrpc version=3
May 14 18:14:26.795272 systemd[1]: Started cri-containerd-5085fc351ec72ddf3e9fcbc38c87317755dc398649173d2d94c48d8ad0467f0f.scope - libcontainer container 5085fc351ec72ddf3e9fcbc38c87317755dc398649173d2d94c48d8ad0467f0f.
May 14 18:14:26.991432 containerd[1587]: time="2025-05-14T18:14:26.991370077Z" level=info msg="StartContainer for \"5085fc351ec72ddf3e9fcbc38c87317755dc398649173d2d94c48d8ad0467f0f\" returns successfully"
May 14 18:14:27.000375 systemd-networkd[1518]: calie1907817da2: Gained IPv6LL
May 14 18:14:27.379393 containerd[1587]: time="2025-05-14T18:14:27.379322254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59f6479cb6-z6rzb,Uid:89850186-46b0-42e5-9218-fe296a67b6e5,Namespace:calico-apiserver,Attempt:0,}"
May 14 18:14:27.491997 systemd-networkd[1518]: calib42a26e9022: Link UP
May 14 18:14:27.492269 systemd-networkd[1518]: calib42a26e9022: Gained carrier
May 14 18:14:27.506544 containerd[1587]: 2025-05-14 18:14:27.415 [INFO][4511] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--59f6479cb6--z6rzb-eth0 calico-apiserver-59f6479cb6- calico-apiserver 89850186-46b0-42e5-9218-fe296a67b6e5 677 0 2025-05-14 18:14:00 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:59f6479cb6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-59f6479cb6-z6rzb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib42a26e9022 [] []}} ContainerID="8b3055e5b45b8657fe45f0f2161b1e04e0b189989d2dc0c5c17c5936d5f25b35" Namespace="calico-apiserver" Pod="calico-apiserver-59f6479cb6-z6rzb" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f6479cb6--z6rzb-"
May 14 18:14:27.506544 containerd[1587]: 2025-05-14 18:14:27.415 [INFO][4511] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8b3055e5b45b8657fe45f0f2161b1e04e0b189989d2dc0c5c17c5936d5f25b35" Namespace="calico-apiserver" Pod="calico-apiserver-59f6479cb6-z6rzb" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f6479cb6--z6rzb-eth0"
May 14 18:14:27.506544 containerd[1587]: 2025-05-14 18:14:27.447 [INFO][4525] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8b3055e5b45b8657fe45f0f2161b1e04e0b189989d2dc0c5c17c5936d5f25b35" HandleID="k8s-pod-network.8b3055e5b45b8657fe45f0f2161b1e04e0b189989d2dc0c5c17c5936d5f25b35" Workload="localhost-k8s-calico--apiserver--59f6479cb6--z6rzb-eth0"
May 14 18:14:27.506544 containerd[1587]: 2025-05-14 18:14:27.455 [INFO][4525] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8b3055e5b45b8657fe45f0f2161b1e04e0b189989d2dc0c5c17c5936d5f25b35" HandleID="k8s-pod-network.8b3055e5b45b8657fe45f0f2161b1e04e0b189989d2dc0c5c17c5936d5f25b35" Workload="localhost-k8s-calico--apiserver--59f6479cb6--z6rzb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027ea50), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-59f6479cb6-z6rzb", "timestamp":"2025-05-14 18:14:27.44734537 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
May 14 18:14:27.506544 containerd[1587]: 2025-05-14 18:14:27.455 [INFO][4525] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 14 18:14:27.506544 containerd[1587]: 2025-05-14 18:14:27.455 [INFO][4525] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 14 18:14:27.506544 containerd[1587]: 2025-05-14 18:14:27.455 [INFO][4525] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
May 14 18:14:27.506544 containerd[1587]: 2025-05-14 18:14:27.458 [INFO][4525] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8b3055e5b45b8657fe45f0f2161b1e04e0b189989d2dc0c5c17c5936d5f25b35" host="localhost"
May 14 18:14:27.506544 containerd[1587]: 2025-05-14 18:14:27.462 [INFO][4525] ipam/ipam.go 372: Looking up existing affinities for host host="localhost"
May 14 18:14:27.506544 containerd[1587]: 2025-05-14 18:14:27.466 [INFO][4525] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
May 14 18:14:27.506544 containerd[1587]: 2025-05-14 18:14:27.468 [INFO][4525] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
May 14 18:14:27.506544 containerd[1587]: 2025-05-14 18:14:27.470 [INFO][4525] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
May 14 18:14:27.506544 containerd[1587]: 2025-05-14 18:14:27.470 [INFO][4525] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8b3055e5b45b8657fe45f0f2161b1e04e0b189989d2dc0c5c17c5936d5f25b35" host="localhost"
May 14 18:14:27.506544 containerd[1587]: 2025-05-14 18:14:27.472 [INFO][4525] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8b3055e5b45b8657fe45f0f2161b1e04e0b189989d2dc0c5c17c5936d5f25b35
May 14 18:14:27.506544 containerd[1587]: 2025-05-14 18:14:27.478 [INFO][4525] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8b3055e5b45b8657fe45f0f2161b1e04e0b189989d2dc0c5c17c5936d5f25b35" host="localhost"
May 14 18:14:27.506544 containerd[1587]: 2025-05-14 18:14:27.486 [INFO][4525] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.8b3055e5b45b8657fe45f0f2161b1e04e0b189989d2dc0c5c17c5936d5f25b35" host="localhost"
May 14 18:14:27.506544 containerd[1587]: 2025-05-14 18:14:27.486 [INFO][4525] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.8b3055e5b45b8657fe45f0f2161b1e04e0b189989d2dc0c5c17c5936d5f25b35" host="localhost"
May 14 18:14:27.506544 containerd[1587]: 2025-05-14 18:14:27.486 [INFO][4525] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 14 18:14:27.506544 containerd[1587]: 2025-05-14 18:14:27.486 [INFO][4525] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="8b3055e5b45b8657fe45f0f2161b1e04e0b189989d2dc0c5c17c5936d5f25b35" HandleID="k8s-pod-network.8b3055e5b45b8657fe45f0f2161b1e04e0b189989d2dc0c5c17c5936d5f25b35" Workload="localhost-k8s-calico--apiserver--59f6479cb6--z6rzb-eth0"
May 14 18:14:27.507264 containerd[1587]: 2025-05-14 18:14:27.488 [INFO][4511] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8b3055e5b45b8657fe45f0f2161b1e04e0b189989d2dc0c5c17c5936d5f25b35" Namespace="calico-apiserver" Pod="calico-apiserver-59f6479cb6-z6rzb" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f6479cb6--z6rzb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59f6479cb6--z6rzb-eth0", GenerateName:"calico-apiserver-59f6479cb6-", Namespace:"calico-apiserver", SelfLink:"", UID:"89850186-46b0-42e5-9218-fe296a67b6e5", ResourceVersion:"677", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 14, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59f6479cb6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-59f6479cb6-z6rzb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib42a26e9022", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
May 14 18:14:27.507264 containerd[1587]: 2025-05-14 18:14:27.489 [INFO][4511] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="8b3055e5b45b8657fe45f0f2161b1e04e0b189989d2dc0c5c17c5936d5f25b35" Namespace="calico-apiserver" Pod="calico-apiserver-59f6479cb6-z6rzb" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f6479cb6--z6rzb-eth0"
May 14 18:14:27.507264 containerd[1587]: 2025-05-14 18:14:27.489 [INFO][4511] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib42a26e9022 ContainerID="8b3055e5b45b8657fe45f0f2161b1e04e0b189989d2dc0c5c17c5936d5f25b35" Namespace="calico-apiserver" Pod="calico-apiserver-59f6479cb6-z6rzb" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f6479cb6--z6rzb-eth0"
May 14 18:14:27.507264 containerd[1587]: 2025-05-14 18:14:27.491 [INFO][4511] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8b3055e5b45b8657fe45f0f2161b1e04e0b189989d2dc0c5c17c5936d5f25b35" Namespace="calico-apiserver" Pod="calico-apiserver-59f6479cb6-z6rzb" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f6479cb6--z6rzb-eth0"
May 14 18:14:27.507264 containerd[1587]: 2025-05-14 18:14:27.491 [INFO][4511] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8b3055e5b45b8657fe45f0f2161b1e04e0b189989d2dc0c5c17c5936d5f25b35" Namespace="calico-apiserver" Pod="calico-apiserver-59f6479cb6-z6rzb" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f6479cb6--z6rzb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59f6479cb6--z6rzb-eth0", GenerateName:"calico-apiserver-59f6479cb6-", Namespace:"calico-apiserver", SelfLink:"", UID:"89850186-46b0-42e5-9218-fe296a67b6e5", ResourceVersion:"677", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 14, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59f6479cb6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8b3055e5b45b8657fe45f0f2161b1e04e0b189989d2dc0c5c17c5936d5f25b35", Pod:"calico-apiserver-59f6479cb6-z6rzb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib42a26e9022", MAC:"0e:fb:c3:73:ff:e6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
May 14 18:14:27.507264 containerd[1587]: 2025-05-14 18:14:27.502 [INFO][4511] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8b3055e5b45b8657fe45f0f2161b1e04e0b189989d2dc0c5c17c5936d5f25b35" Namespace="calico-apiserver" Pod="calico-apiserver-59f6479cb6-z6rzb" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f6479cb6--z6rzb-eth0"
May 14 18:14:27.513420 systemd-networkd[1518]: calif5f1ae1148e: Gained IPv6LL
May 14 18:14:27.527965 kubelet[2710]: I0514 18:14:27.527799 2710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-tfcbv" podStartSLOduration=32.527778858 podStartE2EDuration="32.527778858s" podCreationTimestamp="2025-05-14 18:13:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:14:27.525480587 +0000 UTC m=+38.228573873" watchObservedRunningTime="2025-05-14 18:14:27.527778858 +0000 UTC m=+38.230872144"
May 14 18:14:27.573772 containerd[1587]: time="2025-05-14T18:14:27.573727133Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b29d4b64341a3055af649e6faaba2f56d6d4aedd648acc6dc55cdf4495c540b5\" id:\"f433d2c41ec62259d186e5391e6b25d7b594debbc62ae54d9b9687e8a20afc3f\" pid:4561 exited_at:{seconds:1747246467 nanos:573340656}"
May 14 18:14:28.193202 containerd[1587]: time="2025-05-14T18:14:28.193153496Z" level=info msg="connecting to shim 8b3055e5b45b8657fe45f0f2161b1e04e0b189989d2dc0c5c17c5936d5f25b35" address="unix:///run/containerd/s/0f48ab80e3bf571b5ba57760e82528ac2126355d4be2f7dd12302cacefe67019" namespace=k8s.io protocol=ttrpc version=3
May 14 18:14:28.220277 systemd[1]: Started cri-containerd-8b3055e5b45b8657fe45f0f2161b1e04e0b189989d2dc0c5c17c5936d5f25b35.scope - libcontainer container 8b3055e5b45b8657fe45f0f2161b1e04e0b189989d2dc0c5c17c5936d5f25b35.
May 14 18:14:28.233050 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 18:14:28.265432 containerd[1587]: time="2025-05-14T18:14:28.265364879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59f6479cb6-z6rzb,Uid:89850186-46b0-42e5-9218-fe296a67b6e5,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"8b3055e5b45b8657fe45f0f2161b1e04e0b189989d2dc0c5c17c5936d5f25b35\"" May 14 18:14:28.379003 containerd[1587]: time="2025-05-14T18:14:28.378956882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59f6479cb6-qfqb6,Uid:116be02b-0fa5-4611-968b-497737b4096b,Namespace:calico-apiserver,Attempt:0,}" May 14 18:14:28.472322 systemd-networkd[1518]: cali08ef276e292: Gained IPv6LL May 14 18:14:28.659012 systemd-networkd[1518]: cali73a45e09f35: Link UP May 14 18:14:28.659439 systemd-networkd[1518]: cali73a45e09f35: Gained carrier May 14 18:14:28.773641 containerd[1587]: 2025-05-14 18:14:28.532 [INFO][4624] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--59f6479cb6--qfqb6-eth0 calico-apiserver-59f6479cb6- calico-apiserver 116be02b-0fa5-4611-968b-497737b4096b 674 0 2025-05-14 18:14:00 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:59f6479cb6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-59f6479cb6-qfqb6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali73a45e09f35 [] []}} ContainerID="e5d10db0e9278300846447a0b25ec229ad2e8ff2ae1a0cb72899a373b342fb4d" Namespace="calico-apiserver" Pod="calico-apiserver-59f6479cb6-qfqb6" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f6479cb6--qfqb6-" May 14 18:14:28.773641 
containerd[1587]: 2025-05-14 18:14:28.532 [INFO][4624] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e5d10db0e9278300846447a0b25ec229ad2e8ff2ae1a0cb72899a373b342fb4d" Namespace="calico-apiserver" Pod="calico-apiserver-59f6479cb6-qfqb6" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f6479cb6--qfqb6-eth0" May 14 18:14:28.773641 containerd[1587]: 2025-05-14 18:14:28.560 [INFO][4640] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e5d10db0e9278300846447a0b25ec229ad2e8ff2ae1a0cb72899a373b342fb4d" HandleID="k8s-pod-network.e5d10db0e9278300846447a0b25ec229ad2e8ff2ae1a0cb72899a373b342fb4d" Workload="localhost-k8s-calico--apiserver--59f6479cb6--qfqb6-eth0" May 14 18:14:28.773641 containerd[1587]: 2025-05-14 18:14:28.568 [INFO][4640] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e5d10db0e9278300846447a0b25ec229ad2e8ff2ae1a0cb72899a373b342fb4d" HandleID="k8s-pod-network.e5d10db0e9278300846447a0b25ec229ad2e8ff2ae1a0cb72899a373b342fb4d" Workload="localhost-k8s-calico--apiserver--59f6479cb6--qfqb6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004f1390), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-59f6479cb6-qfqb6", "timestamp":"2025-05-14 18:14:28.560771468 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 18:14:28.773641 containerd[1587]: 2025-05-14 18:14:28.568 [INFO][4640] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 18:14:28.773641 containerd[1587]: 2025-05-14 18:14:28.568 [INFO][4640] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 18:14:28.773641 containerd[1587]: 2025-05-14 18:14:28.568 [INFO][4640] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 18:14:28.773641 containerd[1587]: 2025-05-14 18:14:28.570 [INFO][4640] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e5d10db0e9278300846447a0b25ec229ad2e8ff2ae1a0cb72899a373b342fb4d" host="localhost" May 14 18:14:28.773641 containerd[1587]: 2025-05-14 18:14:28.573 [INFO][4640] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 18:14:28.773641 containerd[1587]: 2025-05-14 18:14:28.576 [INFO][4640] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 18:14:28.773641 containerd[1587]: 2025-05-14 18:14:28.577 [INFO][4640] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 18:14:28.773641 containerd[1587]: 2025-05-14 18:14:28.579 [INFO][4640] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 18:14:28.773641 containerd[1587]: 2025-05-14 18:14:28.579 [INFO][4640] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e5d10db0e9278300846447a0b25ec229ad2e8ff2ae1a0cb72899a373b342fb4d" host="localhost" May 14 18:14:28.773641 containerd[1587]: 2025-05-14 18:14:28.580 [INFO][4640] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e5d10db0e9278300846447a0b25ec229ad2e8ff2ae1a0cb72899a373b342fb4d May 14 18:14:28.773641 containerd[1587]: 2025-05-14 18:14:28.596 [INFO][4640] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e5d10db0e9278300846447a0b25ec229ad2e8ff2ae1a0cb72899a373b342fb4d" host="localhost" May 14 18:14:28.773641 containerd[1587]: 2025-05-14 18:14:28.654 [INFO][4640] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.e5d10db0e9278300846447a0b25ec229ad2e8ff2ae1a0cb72899a373b342fb4d" host="localhost" May 14 18:14:28.773641 containerd[1587]: 2025-05-14 18:14:28.654 [INFO][4640] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.e5d10db0e9278300846447a0b25ec229ad2e8ff2ae1a0cb72899a373b342fb4d" host="localhost" May 14 18:14:28.773641 containerd[1587]: 2025-05-14 18:14:28.654 [INFO][4640] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 18:14:28.773641 containerd[1587]: 2025-05-14 18:14:28.654 [INFO][4640] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="e5d10db0e9278300846447a0b25ec229ad2e8ff2ae1a0cb72899a373b342fb4d" HandleID="k8s-pod-network.e5d10db0e9278300846447a0b25ec229ad2e8ff2ae1a0cb72899a373b342fb4d" Workload="localhost-k8s-calico--apiserver--59f6479cb6--qfqb6-eth0" May 14 18:14:28.774507 containerd[1587]: 2025-05-14 18:14:28.657 [INFO][4624] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e5d10db0e9278300846447a0b25ec229ad2e8ff2ae1a0cb72899a373b342fb4d" Namespace="calico-apiserver" Pod="calico-apiserver-59f6479cb6-qfqb6" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f6479cb6--qfqb6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59f6479cb6--qfqb6-eth0", GenerateName:"calico-apiserver-59f6479cb6-", Namespace:"calico-apiserver", SelfLink:"", UID:"116be02b-0fa5-4611-968b-497737b4096b", ResourceVersion:"674", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 14, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59f6479cb6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-59f6479cb6-qfqb6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali73a45e09f35", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:14:28.774507 containerd[1587]: 2025-05-14 18:14:28.657 [INFO][4624] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="e5d10db0e9278300846447a0b25ec229ad2e8ff2ae1a0cb72899a373b342fb4d" Namespace="calico-apiserver" Pod="calico-apiserver-59f6479cb6-qfqb6" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f6479cb6--qfqb6-eth0" May 14 18:14:28.774507 containerd[1587]: 2025-05-14 18:14:28.657 [INFO][4624] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali73a45e09f35 ContainerID="e5d10db0e9278300846447a0b25ec229ad2e8ff2ae1a0cb72899a373b342fb4d" Namespace="calico-apiserver" Pod="calico-apiserver-59f6479cb6-qfqb6" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f6479cb6--qfqb6-eth0" May 14 18:14:28.774507 containerd[1587]: 2025-05-14 18:14:28.659 [INFO][4624] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e5d10db0e9278300846447a0b25ec229ad2e8ff2ae1a0cb72899a373b342fb4d" Namespace="calico-apiserver" Pod="calico-apiserver-59f6479cb6-qfqb6" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f6479cb6--qfqb6-eth0" May 14 18:14:28.774507 containerd[1587]: 2025-05-14 18:14:28.659 [INFO][4624] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="e5d10db0e9278300846447a0b25ec229ad2e8ff2ae1a0cb72899a373b342fb4d" Namespace="calico-apiserver" Pod="calico-apiserver-59f6479cb6-qfqb6" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f6479cb6--qfqb6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59f6479cb6--qfqb6-eth0", GenerateName:"calico-apiserver-59f6479cb6-", Namespace:"calico-apiserver", SelfLink:"", UID:"116be02b-0fa5-4611-968b-497737b4096b", ResourceVersion:"674", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 18, 14, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59f6479cb6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e5d10db0e9278300846447a0b25ec229ad2e8ff2ae1a0cb72899a373b342fb4d", Pod:"calico-apiserver-59f6479cb6-qfqb6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali73a45e09f35", MAC:"22:11:c6:0e:c7:5e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:14:28.774507 containerd[1587]: 2025-05-14 18:14:28.770 [INFO][4624] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e5d10db0e9278300846447a0b25ec229ad2e8ff2ae1a0cb72899a373b342fb4d" 
Namespace="calico-apiserver" Pod="calico-apiserver-59f6479cb6-qfqb6" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f6479cb6--qfqb6-eth0" May 14 18:14:28.792267 systemd-networkd[1518]: calib42a26e9022: Gained IPv6LL May 14 18:14:28.926179 containerd[1587]: time="2025-05-14T18:14:28.926112452Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:14:28.948701 containerd[1587]: time="2025-05-14T18:14:28.948631754Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898" May 14 18:14:28.981996 containerd[1587]: time="2025-05-14T18:14:28.981918337Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:14:29.215470 containerd[1587]: time="2025-05-14T18:14:29.215407087Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:14:29.216336 containerd[1587]: time="2025-05-14T18:14:29.216287202Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 2.945624606s" May 14 18:14:29.216388 containerd[1587]: time="2025-05-14T18:14:29.216337696Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" May 14 18:14:29.217939 containerd[1587]: time="2025-05-14T18:14:29.217716207Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 
14 18:14:29.219355 containerd[1587]: time="2025-05-14T18:14:29.218615427Z" level=info msg="CreateContainer within sandbox \"2b6b55218a9f41ee3378d3037cc9c5230d8898eb56db9b93cda93052f42ec0da\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 14 18:14:29.227252 containerd[1587]: time="2025-05-14T18:14:29.227206785Z" level=info msg="connecting to shim e5d10db0e9278300846447a0b25ec229ad2e8ff2ae1a0cb72899a373b342fb4d" address="unix:///run/containerd/s/9a1eec5634e3a2f95818f170d88038151daf45d4fd4f62b8a2824a92da88cd95" namespace=k8s.io protocol=ttrpc version=3 May 14 18:14:29.242850 containerd[1587]: time="2025-05-14T18:14:29.242501023Z" level=info msg="Container c5c2f36b7cb70b471805d62ffff355c3fbc9ff8b481834bfa36826ef74072a55: CDI devices from CRI Config.CDIDevices: []" May 14 18:14:29.267354 systemd[1]: Started cri-containerd-e5d10db0e9278300846447a0b25ec229ad2e8ff2ae1a0cb72899a373b342fb4d.scope - libcontainer container e5d10db0e9278300846447a0b25ec229ad2e8ff2ae1a0cb72899a373b342fb4d. 
May 14 18:14:29.275872 containerd[1587]: time="2025-05-14T18:14:29.275818136Z" level=info msg="CreateContainer within sandbox \"2b6b55218a9f41ee3378d3037cc9c5230d8898eb56db9b93cda93052f42ec0da\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"c5c2f36b7cb70b471805d62ffff355c3fbc9ff8b481834bfa36826ef74072a55\"" May 14 18:14:29.277261 containerd[1587]: time="2025-05-14T18:14:29.277223076Z" level=info msg="StartContainer for \"c5c2f36b7cb70b471805d62ffff355c3fbc9ff8b481834bfa36826ef74072a55\"" May 14 18:14:29.278660 containerd[1587]: time="2025-05-14T18:14:29.278586139Z" level=info msg="connecting to shim c5c2f36b7cb70b471805d62ffff355c3fbc9ff8b481834bfa36826ef74072a55" address="unix:///run/containerd/s/2715b5a81ea0c8bcceab0757af0a9199f87d5a02d656e843392c1de57c7e715d" protocol=ttrpc version=3 May 14 18:14:29.287776 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 18:14:29.311399 systemd[1]: Started cri-containerd-c5c2f36b7cb70b471805d62ffff355c3fbc9ff8b481834bfa36826ef74072a55.scope - libcontainer container c5c2f36b7cb70b471805d62ffff355c3fbc9ff8b481834bfa36826ef74072a55. May 14 18:14:29.390359 containerd[1587]: time="2025-05-14T18:14:29.390311929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59f6479cb6-qfqb6,Uid:116be02b-0fa5-4611-968b-497737b4096b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"e5d10db0e9278300846447a0b25ec229ad2e8ff2ae1a0cb72899a373b342fb4d\"" May 14 18:14:29.391524 containerd[1587]: time="2025-05-14T18:14:29.391478171Z" level=info msg="StartContainer for \"c5c2f36b7cb70b471805d62ffff355c3fbc9ff8b481834bfa36826ef74072a55\" returns successfully" May 14 18:14:29.688307 systemd-networkd[1518]: cali73a45e09f35: Gained IPv6LL May 14 18:14:30.385367 systemd[1]: Started sshd@9-10.0.0.145:22-10.0.0.1:40262.service - OpenSSH per-connection server daemon (10.0.0.1:40262). 
May 14 18:14:30.440279 sshd[4744]: Accepted publickey for core from 10.0.0.1 port 40262 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM May 14 18:14:30.442341 sshd-session[4744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:14:30.446725 systemd-logind[1574]: New session 10 of user core. May 14 18:14:30.455336 systemd[1]: Started session-10.scope - Session 10 of User core. May 14 18:14:30.609193 sshd[4746]: Connection closed by 10.0.0.1 port 40262 May 14 18:14:30.609557 sshd-session[4744]: pam_unix(sshd:session): session closed for user core May 14 18:14:30.621894 systemd[1]: sshd@9-10.0.0.145:22-10.0.0.1:40262.service: Deactivated successfully. May 14 18:14:30.624079 systemd[1]: session-10.scope: Deactivated successfully. May 14 18:14:30.625008 systemd-logind[1574]: Session 10 logged out. Waiting for processes to exit. May 14 18:14:30.628071 systemd[1]: Started sshd@10-10.0.0.145:22-10.0.0.1:40272.service - OpenSSH per-connection server daemon (10.0.0.1:40272). May 14 18:14:30.629186 systemd-logind[1574]: Removed session 10. May 14 18:14:30.675734 sshd[4761]: Accepted publickey for core from 10.0.0.1 port 40272 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM May 14 18:14:30.677216 sshd-session[4761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:14:30.681457 systemd-logind[1574]: New session 11 of user core. May 14 18:14:30.691229 systemd[1]: Started session-11.scope - Session 11 of User core. May 14 18:14:31.007097 sshd[4763]: Connection closed by 10.0.0.1 port 40272 May 14 18:14:31.007296 sshd-session[4761]: pam_unix(sshd:session): session closed for user core May 14 18:14:31.016660 systemd[1]: sshd@10-10.0.0.145:22-10.0.0.1:40272.service: Deactivated successfully. May 14 18:14:31.019327 systemd[1]: session-11.scope: Deactivated successfully. May 14 18:14:31.020342 systemd-logind[1574]: Session 11 logged out. Waiting for processes to exit. 
May 14 18:14:31.024484 systemd[1]: Started sshd@11-10.0.0.145:22-10.0.0.1:40274.service - OpenSSH per-connection server daemon (10.0.0.1:40274). May 14 18:14:31.025435 systemd-logind[1574]: Removed session 11. May 14 18:14:31.068447 sshd[4774]: Accepted publickey for core from 10.0.0.1 port 40274 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM May 14 18:14:31.070581 sshd-session[4774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:14:31.075927 systemd-logind[1574]: New session 12 of user core. May 14 18:14:31.084373 systemd[1]: Started session-12.scope - Session 12 of User core. May 14 18:14:31.213999 sshd[4776]: Connection closed by 10.0.0.1 port 40274 May 14 18:14:31.214246 sshd-session[4774]: pam_unix(sshd:session): session closed for user core May 14 18:14:31.220172 systemd-logind[1574]: Session 12 logged out. Waiting for processes to exit. May 14 18:14:31.220990 systemd[1]: sshd@11-10.0.0.145:22-10.0.0.1:40274.service: Deactivated successfully. May 14 18:14:31.223320 systemd[1]: session-12.scope: Deactivated successfully. May 14 18:14:31.228005 systemd-logind[1574]: Removed session 12. 
May 14 18:14:31.918951 containerd[1587]: time="2025-05-14T18:14:31.918877695Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:14:31.919910 containerd[1587]: time="2025-05-14T18:14:31.919860091Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=43021437" May 14 18:14:31.921622 containerd[1587]: time="2025-05-14T18:14:31.921584130Z" level=info msg="ImageCreate event name:\"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:14:31.925072 containerd[1587]: time="2025-05-14T18:14:31.924157256Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:14:31.925072 containerd[1587]: time="2025-05-14T18:14:31.924847193Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 2.707097021s" May 14 18:14:31.925072 containerd[1587]: time="2025-05-14T18:14:31.924883882Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 14 18:14:31.926295 containerd[1587]: time="2025-05-14T18:14:31.926062066Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 14 18:14:31.927725 containerd[1587]: time="2025-05-14T18:14:31.927696967Z" level=info msg="CreateContainer within sandbox 
\"8b3055e5b45b8657fe45f0f2161b1e04e0b189989d2dc0c5c17c5936d5f25b35\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 14 18:14:31.936903 containerd[1587]: time="2025-05-14T18:14:31.936857232Z" level=info msg="Container 78c1931a1e67b856edc1126bc1197fb7e7fd1020e9f59ccc1e4d02bda045f492: CDI devices from CRI Config.CDIDevices: []" May 14 18:14:31.943702 containerd[1587]: time="2025-05-14T18:14:31.943644547Z" level=info msg="CreateContainer within sandbox \"8b3055e5b45b8657fe45f0f2161b1e04e0b189989d2dc0c5c17c5936d5f25b35\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"78c1931a1e67b856edc1126bc1197fb7e7fd1020e9f59ccc1e4d02bda045f492\"" May 14 18:14:31.944151 containerd[1587]: time="2025-05-14T18:14:31.944121062Z" level=info msg="StartContainer for \"78c1931a1e67b856edc1126bc1197fb7e7fd1020e9f59ccc1e4d02bda045f492\"" May 14 18:14:31.945163 containerd[1587]: time="2025-05-14T18:14:31.945137402Z" level=info msg="connecting to shim 78c1931a1e67b856edc1126bc1197fb7e7fd1020e9f59ccc1e4d02bda045f492" address="unix:///run/containerd/s/0f48ab80e3bf571b5ba57760e82528ac2126355d4be2f7dd12302cacefe67019" protocol=ttrpc version=3 May 14 18:14:31.977348 systemd[1]: Started cri-containerd-78c1931a1e67b856edc1126bc1197fb7e7fd1020e9f59ccc1e4d02bda045f492.scope - libcontainer container 78c1931a1e67b856edc1126bc1197fb7e7fd1020e9f59ccc1e4d02bda045f492. 
May 14 18:14:32.035081 containerd[1587]: time="2025-05-14T18:14:32.035031096Z" level=info msg="StartContainer for \"78c1931a1e67b856edc1126bc1197fb7e7fd1020e9f59ccc1e4d02bda045f492\" returns successfully" May 14 18:14:32.481406 containerd[1587]: time="2025-05-14T18:14:32.481321445Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:14:32.482289 containerd[1587]: time="2025-05-14T18:14:32.482216186Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" May 14 18:14:32.484437 containerd[1587]: time="2025-05-14T18:14:32.484390282Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 558.280797ms" May 14 18:14:32.484437 containerd[1587]: time="2025-05-14T18:14:32.484423875Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 14 18:14:32.485406 containerd[1587]: time="2025-05-14T18:14:32.485388176Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 14 18:14:32.487077 containerd[1587]: time="2025-05-14T18:14:32.486498903Z" level=info msg="CreateContainer within sandbox \"e5d10db0e9278300846447a0b25ec229ad2e8ff2ae1a0cb72899a373b342fb4d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 14 18:14:32.498105 containerd[1587]: time="2025-05-14T18:14:32.496044039Z" level=info msg="Container a47df750b88b80b04bfdbd965fa3c16407d92a9cc719cbbd8a22accb4d2de43c: CDI devices from CRI Config.CDIDevices: []" May 14 18:14:32.506610 
containerd[1587]: time="2025-05-14T18:14:32.506561442Z" level=info msg="CreateContainer within sandbox \"e5d10db0e9278300846447a0b25ec229ad2e8ff2ae1a0cb72899a373b342fb4d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a47df750b88b80b04bfdbd965fa3c16407d92a9cc719cbbd8a22accb4d2de43c\"" May 14 18:14:32.508214 containerd[1587]: time="2025-05-14T18:14:32.507301262Z" level=info msg="StartContainer for \"a47df750b88b80b04bfdbd965fa3c16407d92a9cc719cbbd8a22accb4d2de43c\"" May 14 18:14:32.509014 containerd[1587]: time="2025-05-14T18:14:32.508846416Z" level=info msg="connecting to shim a47df750b88b80b04bfdbd965fa3c16407d92a9cc719cbbd8a22accb4d2de43c" address="unix:///run/containerd/s/9a1eec5634e3a2f95818f170d88038151daf45d4fd4f62b8a2824a92da88cd95" protocol=ttrpc version=3 May 14 18:14:32.532464 systemd[1]: Started cri-containerd-a47df750b88b80b04bfdbd965fa3c16407d92a9cc719cbbd8a22accb4d2de43c.scope - libcontainer container a47df750b88b80b04bfdbd965fa3c16407d92a9cc719cbbd8a22accb4d2de43c. 
May 14 18:14:32.551397 kubelet[2710]: I0514 18:14:32.550831 2710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-59f6479cb6-z6rzb" podStartSLOduration=28.891553871 podStartE2EDuration="32.550803335s" podCreationTimestamp="2025-05-14 18:14:00 +0000 UTC" firstStartedPulling="2025-05-14 18:14:28.266590494 +0000 UTC m=+38.969683780" lastFinishedPulling="2025-05-14 18:14:31.925839948 +0000 UTC m=+42.628933244" observedRunningTime="2025-05-14 18:14:32.55055609 +0000 UTC m=+43.253649376" watchObservedRunningTime="2025-05-14 18:14:32.550803335 +0000 UTC m=+43.253896621" May 14 18:14:32.608265 containerd[1587]: time="2025-05-14T18:14:32.608214212Z" level=info msg="StartContainer for \"a47df750b88b80b04bfdbd965fa3c16407d92a9cc719cbbd8a22accb4d2de43c\" returns successfully" May 14 18:14:33.542217 kubelet[2710]: I0514 18:14:33.542174 2710 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 18:14:33.552015 kubelet[2710]: I0514 18:14:33.551944 2710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-59f6479cb6-qfqb6" podStartSLOduration=30.458172576 podStartE2EDuration="33.551909787s" podCreationTimestamp="2025-05-14 18:14:00 +0000 UTC" firstStartedPulling="2025-05-14 18:14:29.39147247 +0000 UTC m=+40.094565746" lastFinishedPulling="2025-05-14 18:14:32.485209671 +0000 UTC m=+43.188302957" observedRunningTime="2025-05-14 18:14:33.551371185 +0000 UTC m=+44.254464461" watchObservedRunningTime="2025-05-14 18:14:33.551909787 +0000 UTC m=+44.255003074" May 14 18:14:34.418370 containerd[1587]: time="2025-05-14T18:14:34.418314824Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:14:34.419015 containerd[1587]: time="2025-05-14T18:14:34.418986315Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773" May 14 18:14:34.420043 containerd[1587]: time="2025-05-14T18:14:34.420005701Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:14:34.421798 containerd[1587]: time="2025-05-14T18:14:34.421768633Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:14:34.422471 containerd[1587]: time="2025-05-14T18:14:34.422432620Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 1.93691512s" May 14 18:14:34.422471 containerd[1587]: time="2025-05-14T18:14:34.422459461Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\"" May 14 18:14:34.424122 containerd[1587]: time="2025-05-14T18:14:34.424082079Z" level=info msg="CreateContainer within sandbox \"2b6b55218a9f41ee3378d3037cc9c5230d8898eb56db9b93cda93052f42ec0da\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 14 18:14:34.435562 containerd[1587]: time="2025-05-14T18:14:34.435530528Z" level=info msg="Container 5efab1467de36ff31a168b9100b89523740910ab80b706cd639c98e5488e47ed: CDI devices from CRI Config.CDIDevices: []" May 14 18:14:34.445608 containerd[1587]: time="2025-05-14T18:14:34.445571643Z" level=info msg="CreateContainer 
within sandbox \"2b6b55218a9f41ee3378d3037cc9c5230d8898eb56db9b93cda93052f42ec0da\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"5efab1467de36ff31a168b9100b89523740910ab80b706cd639c98e5488e47ed\"" May 14 18:14:34.445983 containerd[1587]: time="2025-05-14T18:14:34.445954412Z" level=info msg="StartContainer for \"5efab1467de36ff31a168b9100b89523740910ab80b706cd639c98e5488e47ed\"" May 14 18:14:34.447261 containerd[1587]: time="2025-05-14T18:14:34.447234447Z" level=info msg="connecting to shim 5efab1467de36ff31a168b9100b89523740910ab80b706cd639c98e5488e47ed" address="unix:///run/containerd/s/2715b5a81ea0c8bcceab0757af0a9199f87d5a02d656e843392c1de57c7e715d" protocol=ttrpc version=3 May 14 18:14:34.487216 systemd[1]: Started cri-containerd-5efab1467de36ff31a168b9100b89523740910ab80b706cd639c98e5488e47ed.scope - libcontainer container 5efab1467de36ff31a168b9100b89523740910ab80b706cd639c98e5488e47ed. May 14 18:14:34.528583 containerd[1587]: time="2025-05-14T18:14:34.528527695Z" level=info msg="StartContainer for \"5efab1467de36ff31a168b9100b89523740910ab80b706cd639c98e5488e47ed\" returns successfully" May 14 18:14:34.547468 kubelet[2710]: I0514 18:14:34.547390 2710 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 18:14:35.451984 kubelet[2710]: I0514 18:14:35.451951 2710 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 14 18:14:35.451984 kubelet[2710]: I0514 18:14:35.451986 2710 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 14 18:14:36.227205 systemd[1]: Started sshd@12-10.0.0.145:22-10.0.0.1:40310.service - OpenSSH per-connection server daemon (10.0.0.1:40310). 
May 14 18:14:36.291734 sshd[4912]: Accepted publickey for core from 10.0.0.1 port 40310 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM May 14 18:14:36.293995 sshd-session[4912]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:14:36.299159 systemd-logind[1574]: New session 13 of user core. May 14 18:14:36.319394 systemd[1]: Started session-13.scope - Session 13 of User core. May 14 18:14:36.448454 sshd[4914]: Connection closed by 10.0.0.1 port 40310 May 14 18:14:36.448809 sshd-session[4912]: pam_unix(sshd:session): session closed for user core May 14 18:14:36.454395 systemd[1]: sshd@12-10.0.0.145:22-10.0.0.1:40310.service: Deactivated successfully. May 14 18:14:36.456475 systemd[1]: session-13.scope: Deactivated successfully. May 14 18:14:36.457785 systemd-logind[1574]: Session 13 logged out. Waiting for processes to exit. May 14 18:14:36.462230 systemd-logind[1574]: Removed session 13. May 14 18:14:41.467047 systemd[1]: Started sshd@13-10.0.0.145:22-10.0.0.1:47014.service - OpenSSH per-connection server daemon (10.0.0.1:47014). May 14 18:14:41.519385 sshd[4935]: Accepted publickey for core from 10.0.0.1 port 47014 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM May 14 18:14:41.520786 sshd-session[4935]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:14:41.525192 systemd-logind[1574]: New session 14 of user core. May 14 18:14:41.534232 systemd[1]: Started session-14.scope - Session 14 of User core. May 14 18:14:41.648225 sshd[4937]: Connection closed by 10.0.0.1 port 47014 May 14 18:14:41.648538 sshd-session[4935]: pam_unix(sshd:session): session closed for user core May 14 18:14:41.652780 systemd[1]: sshd@13-10.0.0.145:22-10.0.0.1:47014.service: Deactivated successfully. May 14 18:14:41.654846 systemd[1]: session-14.scope: Deactivated successfully. May 14 18:14:41.655585 systemd-logind[1574]: Session 14 logged out. Waiting for processes to exit. 
May 14 18:14:41.656898 systemd-logind[1574]: Removed session 14. May 14 18:14:44.277027 kubelet[2710]: I0514 18:14:44.276964 2710 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 18:14:44.302295 kubelet[2710]: I0514 18:14:44.301964 2710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-l7b4g" podStartSLOduration=35.466557094 podStartE2EDuration="44.301943715s" podCreationTimestamp="2025-05-14 18:14:00 +0000 UTC" firstStartedPulling="2025-05-14 18:14:25.587715958 +0000 UTC m=+36.290809244" lastFinishedPulling="2025-05-14 18:14:34.423102579 +0000 UTC m=+45.126195865" observedRunningTime="2025-05-14 18:14:34.561865227 +0000 UTC m=+45.264958523" watchObservedRunningTime="2025-05-14 18:14:44.301943715 +0000 UTC m=+55.005037032" May 14 18:14:46.661110 systemd[1]: Started sshd@14-10.0.0.145:22-10.0.0.1:49624.service - OpenSSH per-connection server daemon (10.0.0.1:49624). May 14 18:14:46.710252 sshd[4960]: Accepted publickey for core from 10.0.0.1 port 49624 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM May 14 18:14:46.711896 sshd-session[4960]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:14:46.716325 systemd-logind[1574]: New session 15 of user core. May 14 18:14:46.724270 systemd[1]: Started session-15.scope - Session 15 of User core. May 14 18:14:46.833490 sshd[4962]: Connection closed by 10.0.0.1 port 49624 May 14 18:14:46.833787 sshd-session[4960]: pam_unix(sshd:session): session closed for user core May 14 18:14:46.838547 systemd[1]: sshd@14-10.0.0.145:22-10.0.0.1:49624.service: Deactivated successfully. May 14 18:14:46.840760 systemd[1]: session-15.scope: Deactivated successfully. May 14 18:14:46.841778 systemd-logind[1574]: Session 15 logged out. Waiting for processes to exit. May 14 18:14:46.843108 systemd-logind[1574]: Removed session 15. 
May 14 18:14:47.031465 kubelet[2710]: I0514 18:14:47.031429 2710 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 18:14:51.846055 systemd[1]: Started sshd@15-10.0.0.145:22-10.0.0.1:49628.service - OpenSSH per-connection server daemon (10.0.0.1:49628). May 14 18:14:51.902389 sshd[4981]: Accepted publickey for core from 10.0.0.1 port 49628 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM May 14 18:14:51.904071 sshd-session[4981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:14:51.909623 systemd-logind[1574]: New session 16 of user core. May 14 18:14:51.922415 systemd[1]: Started session-16.scope - Session 16 of User core. May 14 18:14:52.053615 sshd[4983]: Connection closed by 10.0.0.1 port 49628 May 14 18:14:52.054147 sshd-session[4981]: pam_unix(sshd:session): session closed for user core May 14 18:14:52.064744 systemd[1]: sshd@15-10.0.0.145:22-10.0.0.1:49628.service: Deactivated successfully. May 14 18:14:52.067141 systemd[1]: session-16.scope: Deactivated successfully. May 14 18:14:52.068052 systemd-logind[1574]: Session 16 logged out. Waiting for processes to exit. May 14 18:14:52.072025 systemd[1]: Started sshd@16-10.0.0.145:22-10.0.0.1:49640.service - OpenSSH per-connection server daemon (10.0.0.1:49640). May 14 18:14:52.072748 systemd-logind[1574]: Removed session 16. May 14 18:14:52.127065 sshd[4996]: Accepted publickey for core from 10.0.0.1 port 49640 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM May 14 18:14:52.129056 sshd-session[4996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:14:52.134174 systemd-logind[1574]: New session 17 of user core. May 14 18:14:52.136077 systemd[1]: Started session-17.scope - Session 17 of User core. 
May 14 18:14:52.174669 containerd[1587]: time="2025-05-14T18:14:52.174618555Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1259399140a00d9ba01f0670c2310a01cbc5d52ce41bb5c927ef582af33914ce\" id:\"326ead637fae473d5cb712ef9bcef0efabcd5c0700a61624ca56757e4c5789c4\" pid:5010 exited_at:{seconds:1747246492 nanos:174269158}" May 14 18:14:52.371758 sshd[5016]: Connection closed by 10.0.0.1 port 49640 May 14 18:14:52.372241 sshd-session[4996]: pam_unix(sshd:session): session closed for user core May 14 18:14:52.383048 systemd[1]: sshd@16-10.0.0.145:22-10.0.0.1:49640.service: Deactivated successfully. May 14 18:14:52.385709 systemd[1]: session-17.scope: Deactivated successfully. May 14 18:14:52.386807 systemd-logind[1574]: Session 17 logged out. Waiting for processes to exit. May 14 18:14:52.391813 systemd[1]: Started sshd@17-10.0.0.145:22-10.0.0.1:49642.service - OpenSSH per-connection server daemon (10.0.0.1:49642). May 14 18:14:52.392591 systemd-logind[1574]: Removed session 17. May 14 18:14:52.442434 sshd[5035]: Accepted publickey for core from 10.0.0.1 port 49642 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM May 14 18:14:52.444103 sshd-session[5035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:14:52.449123 systemd-logind[1574]: New session 18 of user core. May 14 18:14:52.457255 systemd[1]: Started session-18.scope - Session 18 of User core. May 14 18:14:54.254405 sshd[5037]: Connection closed by 10.0.0.1 port 49642 May 14 18:14:54.254799 sshd-session[5035]: pam_unix(sshd:session): session closed for user core May 14 18:14:54.265990 systemd[1]: sshd@17-10.0.0.145:22-10.0.0.1:49642.service: Deactivated successfully. May 14 18:14:54.267828 systemd[1]: session-18.scope: Deactivated successfully. May 14 18:14:54.268053 systemd[1]: session-18.scope: Consumed 624ms CPU time, 67.6M memory peak. May 14 18:14:54.271146 systemd-logind[1574]: Session 18 logged out. Waiting for processes to exit. 
May 14 18:14:54.280027 systemd[1]: Started sshd@18-10.0.0.145:22-10.0.0.1:49646.service - OpenSSH per-connection server daemon (10.0.0.1:49646). May 14 18:14:54.282841 systemd-logind[1574]: Removed session 18. May 14 18:14:54.365149 sshd[5057]: Accepted publickey for core from 10.0.0.1 port 49646 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM May 14 18:14:54.366990 sshd-session[5057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:14:54.372564 systemd-logind[1574]: New session 19 of user core. May 14 18:14:54.386385 systemd[1]: Started session-19.scope - Session 19 of User core. May 14 18:14:54.506620 containerd[1587]: time="2025-05-14T18:14:54.506515479Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b29d4b64341a3055af649e6faaba2f56d6d4aedd648acc6dc55cdf4495c540b5\" id:\"6bb88771a9098ec88c150c7335a1e8bef120bfe231c1aa448bd203a14c4c5aff\" pid:5077 exited_at:{seconds:1747246494 nanos:506282700}" May 14 18:14:54.636994 sshd[5059]: Connection closed by 10.0.0.1 port 49646 May 14 18:14:54.637484 sshd-session[5057]: pam_unix(sshd:session): session closed for user core May 14 18:14:54.646078 systemd[1]: sshd@18-10.0.0.145:22-10.0.0.1:49646.service: Deactivated successfully. May 14 18:14:54.648489 systemd[1]: session-19.scope: Deactivated successfully. May 14 18:14:54.649590 systemd-logind[1574]: Session 19 logged out. Waiting for processes to exit. May 14 18:14:54.652778 systemd[1]: Started sshd@19-10.0.0.145:22-10.0.0.1:49662.service - OpenSSH per-connection server daemon (10.0.0.1:49662). May 14 18:14:54.653545 systemd-logind[1574]: Removed session 19. May 14 18:14:54.710356 sshd[5092]: Accepted publickey for core from 10.0.0.1 port 49662 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM May 14 18:14:54.712110 sshd-session[5092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:14:54.716723 systemd-logind[1574]: New session 20 of user core. 
May 14 18:14:54.725228 systemd[1]: Started session-20.scope - Session 20 of User core. May 14 18:14:54.837350 sshd[5094]: Connection closed by 10.0.0.1 port 49662 May 14 18:14:54.837742 sshd-session[5092]: pam_unix(sshd:session): session closed for user core May 14 18:14:54.842491 systemd[1]: sshd@19-10.0.0.145:22-10.0.0.1:49662.service: Deactivated successfully. May 14 18:14:54.844354 systemd[1]: session-20.scope: Deactivated successfully. May 14 18:14:54.845225 systemd-logind[1574]: Session 20 logged out. Waiting for processes to exit. May 14 18:14:54.846394 systemd-logind[1574]: Removed session 20. May 14 18:14:55.630975 containerd[1587]: time="2025-05-14T18:14:55.630866905Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b29d4b64341a3055af649e6faaba2f56d6d4aedd648acc6dc55cdf4495c540b5\" id:\"f05b4cdd2bda17a95bade0f669178a19db70efa7f250644b5f54b658c55ae507\" pid:5120 exited_at:{seconds:1747246495 nanos:630685657}" May 14 18:14:59.851048 systemd[1]: Started sshd@20-10.0.0.145:22-10.0.0.1:44596.service - OpenSSH per-connection server daemon (10.0.0.1:44596). May 14 18:14:59.904159 sshd[5133]: Accepted publickey for core from 10.0.0.1 port 44596 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM May 14 18:14:59.906763 sshd-session[5133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:14:59.915117 systemd-logind[1574]: New session 21 of user core. May 14 18:14:59.922352 systemd[1]: Started session-21.scope - Session 21 of User core. May 14 18:15:00.036535 sshd[5137]: Connection closed by 10.0.0.1 port 44596 May 14 18:15:00.036867 sshd-session[5133]: pam_unix(sshd:session): session closed for user core May 14 18:15:00.041575 systemd[1]: sshd@20-10.0.0.145:22-10.0.0.1:44596.service: Deactivated successfully. May 14 18:15:00.043851 systemd[1]: session-21.scope: Deactivated successfully. May 14 18:15:00.044874 systemd-logind[1574]: Session 21 logged out. Waiting for processes to exit. 
May 14 18:15:00.046561 systemd-logind[1574]: Removed session 21. May 14 18:15:05.060588 systemd[1]: Started sshd@21-10.0.0.145:22-10.0.0.1:44602.service - OpenSSH per-connection server daemon (10.0.0.1:44602). May 14 18:15:05.108215 sshd[5157]: Accepted publickey for core from 10.0.0.1 port 44602 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM May 14 18:15:05.109594 sshd-session[5157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:15:05.113452 systemd-logind[1574]: New session 22 of user core. May 14 18:15:05.123222 systemd[1]: Started session-22.scope - Session 22 of User core. May 14 18:15:05.235962 sshd[5159]: Connection closed by 10.0.0.1 port 44602 May 14 18:15:05.236235 sshd-session[5157]: pam_unix(sshd:session): session closed for user core May 14 18:15:05.240009 systemd[1]: sshd@21-10.0.0.145:22-10.0.0.1:44602.service: Deactivated successfully. May 14 18:15:05.241828 systemd[1]: session-22.scope: Deactivated successfully. May 14 18:15:05.242648 systemd-logind[1574]: Session 22 logged out. Waiting for processes to exit. May 14 18:15:05.243707 systemd-logind[1574]: Removed session 22. May 14 18:15:10.249978 systemd[1]: Started sshd@22-10.0.0.145:22-10.0.0.1:49946.service - OpenSSH per-connection server daemon (10.0.0.1:49946). May 14 18:15:10.293548 sshd[5174]: Accepted publickey for core from 10.0.0.1 port 49946 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM May 14 18:15:10.294950 sshd-session[5174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:15:10.299708 systemd-logind[1574]: New session 23 of user core. May 14 18:15:10.305253 systemd[1]: Started session-23.scope - Session 23 of User core. 
May 14 18:15:10.420210 sshd[5176]: Connection closed by 10.0.0.1 port 49946 May 14 18:15:10.420434 sshd-session[5174]: pam_unix(sshd:session): session closed for user core May 14 18:15:10.425057 systemd[1]: sshd@22-10.0.0.145:22-10.0.0.1:49946.service: Deactivated successfully. May 14 18:15:10.427296 systemd[1]: session-23.scope: Deactivated successfully. May 14 18:15:10.428232 systemd-logind[1574]: Session 23 logged out. Waiting for processes to exit. May 14 18:15:10.429772 systemd-logind[1574]: Removed session 23. May 14 18:15:15.436199 systemd[1]: Started sshd@23-10.0.0.145:22-10.0.0.1:49950.service - OpenSSH per-connection server daemon (10.0.0.1:49950). May 14 18:15:15.473383 sshd[5191]: Accepted publickey for core from 10.0.0.1 port 49950 ssh2: RSA SHA256:AQRdwKZnQU0/9TofE96iRt4qC1i2gX6nnZ/OI0eW5lM May 14 18:15:15.474808 sshd-session[5191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:15:15.478868 systemd-logind[1574]: New session 24 of user core. May 14 18:15:15.485222 systemd[1]: Started session-24.scope - Session 24 of User core. May 14 18:15:15.588643 sshd[5193]: Connection closed by 10.0.0.1 port 49950 May 14 18:15:15.588951 sshd-session[5191]: pam_unix(sshd:session): session closed for user core May 14 18:15:15.593273 systemd[1]: sshd@23-10.0.0.145:22-10.0.0.1:49950.service: Deactivated successfully. May 14 18:15:15.595033 systemd[1]: session-24.scope: Deactivated successfully. May 14 18:15:15.595786 systemd-logind[1574]: Session 24 logged out. Waiting for processes to exit. May 14 18:15:15.596794 systemd-logind[1574]: Removed session 24.